Multidimensional Trust in Moral Machines

Featured story

Understanding how, when, and why people trust AI agents

Exploring how and when humans outsource moral decisions to AI agents

As the use of artificial intelligence (AI) becomes more widespread, machine systems are increasingly expected to display artificial morality as part of their capabilities. This means that machines are not just being tasked with processing ethically relevant information, but also being built to make morally relevant decisions directly: for example, in aiding decisions about life support, criminal sentencing, or the allocation of scarce medical resources. Leveraging advances in AI, such “moral machines” are even being proposed as “Artificial Moral Advisors” – systems that give moral advice and help to improve human moral decision making.

“Multidimensional Trust in Moral Machines” is a five-year research project that takes an interdisciplinary approach, combining the methods of psychology with philosophy and applied ethics. We use the latest theories and methods from moral psychology to understand when, why, and how we trust such moral machines – and whether that trust is warranted in the first place.

We consider both the promise and the pitfalls of AI, drawing broadly on interdisciplinary work to place our psychological research in a wider context. As well as drawing heavily on moral psychology and AI ethics, we are inspired by theories of cultural evolution, philosophical work on virtue ethics, sociological work on connective labour, and feminist philosophy on social relationships to understand how we can shape ethical relations with one another in a technology-driven world. In doing so, we seek not only to advance understanding of the psychological and ethical promise and pitfalls of artificial moral advisors, but also to highlight how these technologies exist within a broader political and social landscape that shapes the way we think about AI.

The project is led by Dr Jim Everett, from the University’s School of Psychology, with support from Dr Edmond Awad from the University of Exeter. This five-year project was awarded as an ERC Starting Grant and is now an EPSRC-funded research project. You can read more about the project and its latest news, and find out how to contact us for further information.

Our thanks go to the Engineering and Physical Sciences Research Council (grant number EP/Y00440X/1) for supporting this work, and to the European Research Council for awarding the original ERC Starting Grant under Horizon Europe.


Find out more