Exploring How and When Humans Outsource Moral Decisions to AI Agents
As the use of artificial intelligence (AI) becomes more widespread, machine systems are increasingly expected to display artificial morality as part of their capabilities. Machines are not just being tasked with processing ethically relevant information, but are being built to make morally relevant decisions directly: for example, aiding decisions about life support, criminal sentencing, or the allocation of scarce medical resources. Leveraging rapid advances in AI, such “moral machines” have even been proposed as “Artificial Moral Advisors” that offer moral advice and help improve human moral decision making.
“Multi-Dimensional Trust in Moral Machines” is a five-year research project that takes an interdisciplinary approach, combining methods from psychology with philosophy and applied ethics. We use the latest theories and methods from moral psychology to understand when, why, and how we trust such moral machines, and whether that trust is warranted in the first place.
The project is led by Dr Jim Everett of the University’s School of Psychology, with support from Dr Edmond Awad of Exeter University. The project was originally awarded as an ERC Starting Grant and is now an EPSRC-funded research project. You can read more about the project and its latest news, and find out how to contact us for further information.
Our thanks go to the Engineering and Physical Sciences Research Council (grant number EP/Y00440X/1) for supporting this work, and to the European Research Council for originally awarding this project as an ERC Starting Grant under Horizon Europe.