Trust in Moral Machines

The Project

How, when, and why do people trust AI agents in the moral domain – and should they?

This five-year ERC/UKRI project – ‘Multidimensional Trust in Moral Machines’ – draws on psychology and philosophy to explore how and when humans trust AI agents as ‘moral machines’. We are interested not only in how AI is used for morally consequential tasks, or even to give moral advice directly, but in the many ways morality shapes how people think about and trust artificial intelligence. We explore how moral and political values shape perceptions of AI, and what the moral and political consequences of this trust are. We are motivated by the idea that trust in AI is, at its core, moral.

Over the course of the project, which runs from 2024 to 2029, we are conducting studies in multiple countries using a methodologically pluralistic approach that includes large online surveys, cross-cultural research, lab-based experiments, economic games, qualitative analysis, and natural language processing tools.

We are currently investigating questions like:

  • What does the concept of trust even mean when applied to AI?
  • Do people think it’s more important that AI is ethical, or that it is effective?
  • How much do we trust artificial moral advisors?
  • When people listen to AI, how are they doing so, and how can we preserve epistemic agency?
  • What kind of moral frameworks do we want moral machines to rely on?
  • How do we view others who use AI to help them perform socio-relational tasks?
  • How do intergroup dynamics shape our acceptance of AI?
  • How do political values shape trust in AI, and what are the political consequences of trust in AI?

In our work, we seek not only to advance understanding of the psychological and ethical promise and pitfalls of artificial moral advisors, but also to highlight how these technologies exist within a broader political and social landscape that shapes the way we think about AI. By linking the psychology of trust with debates in AI ethics, political theory, and moral philosophy, we aim to understand not only how people judge moral machines, but what kinds of relationships – and what kind of society – we want to cultivate as AI becomes more deeply woven into human life.

The project is led by Dr Jim Everett from the University’s School of Psychology, together with Dr Edmond Awad from the University of Exeter, and runs from 2024 to 2029.