Multidimensional Trust in Moral Machines

Understanding how, when, and why people trust AI agents

Exploring how and when humans outsource moral decisions to AI agents


Can AI make moral decisions? Can it help us make better ones? Would people trust AI systems in the moral domain, and should they? These are the questions that drive our research.

As artificial intelligence (AI) becomes more widespread, machines are expected to have not just artificial intelligence but artificial morality too. AI is already used to aid decisions about life support, criminal sentencing, and the allocation of scarce medical resources. Leveraging recent advances in AI, some have even proposed that so-called “moral machines” could act as “artificial moral advisors”, giving moral advice and helping to improve human moral decision-making.

“Multidimensional Trust in Moral Machines” is a five-year research project that seeks to understand how we think about AI being used in the moral domain. We take an interdisciplinary approach, combining the latest theories and methods from moral psychology with philosophy and AI ethics, to understand when, why, and how we trust so-called “moral machines”, and whether such trust is warranted in the first place.

We consider both the promise and the pitfalls of AI, drawing broadly on interdisciplinary scholarship to place our psychological research in a wider context. Alongside moral psychology and AI ethics, we are inspired by theories of cultural evolution, philosophical work on virtue ethics, sociological work on connective labour, and feminist philosophy of social relationships to understand how we can shape ethical relations with one another in a technology-driven world. We seek not only to advance understanding of the psychological and ethical promise and pitfalls of artificial moral advisors, but also to highlight how these technologies exist within a broader political and social landscape that shapes the way we think about AI.

The project is led by Dr Jim Everett of the University's School of Psychology, with support from Dr Edmond Awad of the University of Exeter. This five-year project was originally awarded as an ERC Starting Grant and is now an EPSRC-funded research project. You can read more about the project and its latest news, and find out how to contact us for further information.

Our thanks go to the Engineering and Physical Sciences Research Council (grant number EP/Y00440X/1) for supporting this work, and to the European Research Council, which originally awarded the project as an ERC Starting Grant under Horizon Europe.

