As artificial intelligence increasingly makes decisions with moral weight (allocating medical resources, shaping social policy, advising individuals), a key question arises: when and why do people trust these so-called ‘moral machines’ – and what happens when they do?
The five-year ERC/UKRI project Trust in Moral Machines (2024-2029) draws on psychology, philosophy and social science to explore this question. We do not simply ask whether people trust AI, but how moral and political values, individual differences, relational dynamics and institutional contexts shape trust, and how trust in turn reshapes human agency, social relationships and moral norms. We are motivated by the idea that trust in AI is, at its core, moral.
Our overarching research themes include:
What are the antecedents of trust in “moral machines”? What features of AI systems (transparency, moral framework, human oversight), what individual factors (moral values, political orientation, cultural background) and what situational contexts (domain, stakes, relational context) make trust more or less likely?
What are the consequences of trust in “moral machines”? What happens when people, organisations or institutions trust AI, especially for more social and morally consequential tasks? How are human moral agency, relational trust, institutional legitimacy, and social norms reshaped (for better or worse)?
How – if at all – can we use insights from moral psychology to design moral machines that are worthy of trust? Based on our findings, how should we design AI moral agents and organisational systems so that trust is warranted, human agency is protected, and moral outcomes are improved?
Over the course of the project, we are conducting studies in multiple countries using a methodologically pluralistic approach that includes large online surveys, cross-cultural research, lab-based experiments, economic games, qualitative analysis, and natural language processing tools.
We want our findings to help people understand how to navigate AI-mediated moral guidance, how organisations should adopt AI, what policymakers need to know when considering the use of AI, and how AI advances shape how we do psychology. By linking the psychology of trust with AI ethics, political theory and moral philosophy, we aim not only to understand how people judge moral machines, but also to ask a critical question: what kinds of relationships, and what kind of society, do we want to cultivate as AI becomes more deeply woven into human life?
The project is led by Dr Jim Everett, from the University's School of Psychology, along with Dr Edmond Awad from the University of Exeter, and runs from 2024 to 2029.
