Wednesday 8th June 2022
12:00 – 13:45 UK summer time (UTC+1)
A group of experts from multiple disciplines and sectors will discuss the long-term impacts of cyber security and AI ethics, and predict what human society will look like in 2030, depending on how well or how badly we address today’s socio-technical challenges.
The roundtable discussion will be held as part of Anthropology, AI and the Future of Human Society, a virtual conference co-organised by the Royal Anthropological Institute (RAI), British Science Fiction Association (BSFA) and Future Anthropologies Network (FAN). To attend the roundtable discussion and other parts of the Conference, please register here.
The panel will gather experts from different disciplines (computer science, engineering and physical sciences, anthropology, sociology, arts and humanities) and sectors (academia, industry and the public sector) with research interests in cyber security and AI ethics, to discuss how future human society may evolve depending on how we address today’s socio-technical challenges in cyber security and AI ethics. Discussants will be asked to share their views on two aspects of the relationship between cyber security and AI: AI for cyber security and the security of AI, and the impact of malicious use and misuse of AI. They will also be asked to envisage human society in 2030 depending on how we address a broad range of challenges around AI ethics, e.g., moral and legal responsibility, social justice, transparency, equality, fairness, job security, and economic growth. The discussion will be contextualised using emerging trends in cyber security and AI research and innovation, such as deepfakes, adversarial AI, deceptive use and misuse of AI, and the increasing use of AI and cyber security techniques for military purposes. Discussants will also be asked to comment on current movements towards making AI and cyber security more ethical, and on what can be done to address research ethics-related challenges. The panel aims to inspire the audience to think about the long-term impacts of cyber security and AI on future human society, and to stimulate more cross-disciplinary and cross-sectoral collaborations.
Lorenzo Cavallaro grew up on pizza, spaghetti, and Phrack, first. Underground and academic research interests followed shortly thereafter. He is a Full Professor of Computer Science at UCL Computer Science, where he leads the Systems Security Research Lab (https://s2lab.cs.ucl.ac.uk) within the Information Security Research Group. He speaks, publishes at, and sits on the technical program committees of top-tier and well-known international conferences including IEEE S&P, USENIX Security, ACM CCS, ACSAC, and DIMVA, as well as emerging thematic workshops (e.g., Deep Learning for Security at IEEE S&P, and AISec at ACM CCS), and received the USENIX WOOT Best Paper Award in 2017. Lorenzo is Program Co-Chair of Deep Learning and Security 2021-22, DIMVA 2021-22, and he was Program Co-Chair of ACM EuroSec 2019-20 and General Co-Chair of ACM CCS 2019. He holds a PhD in Computer Science from the University of Milan (2008), held Post-Doctoral and Visiting Scholar positions at Vrije Universiteit Amsterdam (2010-2011), UC Santa Barbara (2008-2009), and Stony Brook University (2006-2008), worked in the Department of Informatics at King’s College London (2018-2021), where he held the Chair in Cybersecurity (Systems Security), and the Information Security Group at Royal Holloway, University of London (2012-2018). He’s definitely never stopped wondering and having fun throughout.
Dr Sara Degli-Esposti is a permanent Research Scientist in AI Ethics at the Institute of Philosophy (IFS) of the Spanish National Research Council (CSIC) in Madrid, Spain, and an Honorary Research Fellow at the Centre for Business in Society (CBiS), Coventry University, UK. She has both academic and professional experience in data protection compliance, cybersecurity economics, and the assessment of the ethical and social implications of digital technologies. Her research sheds light on public trust in law enforcement agencies’ use of digital surveillance, algorithmic governance and accountability, and people’s vulnerability to misinformation. She is the Research Director of the H2020 TRESCA project (GA 872855) and the Ethics Advisor of the H2020 NEST project (GA 101018596).
Dr Amy McLennan works at the intersections of technology, society, food and health. She is currently a Senior Fellow at the ANU School of Cybernetics, and is affiliated with the University of Oxford’s School of Anthropology and UniSA Creative. Amy is trained in medical science, anthropology and cybernetics, and her research and teaching focus on cross-disciplinary and cross-sectoral topics, including food systems, non-communicable diseases and artificial intelligence. She has professional experience in workshop design, facilitation and government policymaking, including several years in Australia’s Department of the Prime Minister and Cabinet, where her work included projects relating to cyber resilience, women’s safety, technology procurement, cybercrime, and the innovation ecosystem. More about her work can be found on Twitter (@amykmcl), LinkedIn (https://www.linkedin.com/in/amykmcl/) and ORCID (https://orcid.org/0000-0003-2362-6324).
Dr Patrick Scolyer-Gray is a cyber-sociologist who investigates what people think and do, and how and why they do it. Deploying a mixture of methods, concepts and theories drawn from both the behavioural and physical sciences, Dr Scolyer-Gray identifies the security implications of human behaviour and cognitive processes, and develops solutions to the vulnerabilities and threats he finds. Leveraging his broad experience engaging with industry and government organisations – particularly the Australian Department of Defence – Patrick works to ensure the development of robust security architectures that outperform conventional techno-centric cybersecurity solutions. He takes particular satisfaction in eliciting previously obscured elements of the problems his clients face, which he then works to resolve through a collaborative approach to defining and implementing solutions. At present, Dr Scolyer-Gray leads the Human-Centric Cybersecurity (HCCS) consulting practice at the Expert Management Agency (EMA) 460degrees.
Co-organised by the following units of the University of Kent, UK: