Multidimensional Trust in Moral Machines

Featured story

How, when, and why do people trust AI agents in the moral domain?

Artificial intelligence (AI) is increasingly being used to perform tasks with a moral dimension, such as prioritising scarce medical resources or operating self-driving cars. To reap the benefits of these systems, we need to be able to trust AI agents.

Debates rage about the ethics of AI: how it should be programmed and which ethical values should be prioritised in an AI machine. But understanding machine morality is as much about understanding human moral psychology as it is about the philosophical and practical issues of building artificial agents.

This five-year ERC/UKRI project – ‘Multidimensional Trust in Moral Machines’ – draws on psychology and philosophy to explore how and when humans trust AI agents that act as ‘moral machines’. We are exploring how trust in moral machines depends on the precise characteristics of different AI agents; the individual differences of the person doing the trusting; and the situations in which we are more likely to ‘outsource’ moral decisions to AI agents. Building on this understanding, we consider the ethical promise and pitfalls of such moral machines and how these findings should be used to design AI agents that actually warrant our trust.

Over the course of the project, running from 2024 to 2029, we will conduct studies in multiple countries using a methodologically pluralistic approach that includes large online surveys, cross-cultural research, lab-based experiments, economic games, qualitative analysis, and natural language processing tools.

These findings will help us understand how, when, and why people trust AI agents, with important theoretical and methodological implications for research on the antecedents and consequences of trust in moral machines.

The project is led by Dr Jim Everett from the University’s School of Psychology, along with Dr Edmond Awad from the University of Exeter, and runs from 2024 to 2029.