How, when, and why do people trust AI agents in the moral domain – and should they?
As use of AI becomes more widespread, machine systems are required to have not just artificial intelligence but artificial morality too. AI is already used to aid decisions about life support, criminal sentencing, and the allocation of scarce medical resources. Leveraging rapid advances in AI, so-called “moral machines” are even being proposed as “artificial moral advisors” – giving moral advice and helping to improve human moral decision making.
Debates rage about the ethical issues of AI, how it should be programmed, and which ethical values should be prioritised in an AI system. But understanding machine morality is as much about understanding human moral psychology as it is about the philosophical and practical issues of building artificial agents.
This five-year ERC/UKRI project – ‘Multidimensional Trust in Moral Machines’ – draws on psychology and philosophy to explore how and when humans trust AI agents that act as ‘moral machines’. We are exploring how trust in moral machines depends on the precise characteristics of different AI agents; the individual differences of the person doing the trusting; and the situations in which we are more likely to ‘outsource’ moral decisions to AI agents. Building on this understanding, we consider the ethical promise and pitfalls of such moral machines and how these findings should be used to design AI agents that actually warrant our trust.
Over the course of the project, running from 2024 to 2029, we are conducting studies in multiple countries using a methodologically pluralistic approach that includes large online surveys, cross-cultural research, lab-based experiments, economic games, qualitative analysis, and natural language processing tools.
We are currently investigating questions like:
- What does the concept of trust mean when applied to AI?
- How much do we trust artificial moral advisors?
- When people listen to AI, how are they doing so, and how can we preserve epistemic agency?
- What kind of moral frameworks do we want moral machines to rely on?
- Do people think it’s more important that AI is ethical, or that it is effective?
- How do we view others who use AI to help them perform socio-relational tasks?
- How do intergroup dynamics shape our acceptance of AI?
We consider both the promise and pitfalls of AI in our work, drawing broadly on interdisciplinary scholarship to place our psychological research in a wider context. As well as drawing heavily on moral psychology and AI ethics, we are inspired by theories of cultural evolution, philosophical work on virtue ethics, sociological work on connective labour, and feminist philosophy to understand how we can shape ethical relations with one another in a technology-driven world. In doing so, we seek not only to advance our understanding of the psychological and ethical promise and pitfalls of artificial moral advisors, but also to highlight the way that these technologies exist within a broader political and social landscape that shapes how we think about AI.
The project is led by Dr Jim Everett, from the University’s School of Psychology, along with Dr Edmond Awad from the University of Exeter, and runs from 2024-2029.