Exploring How and When Humans Outsource Moral Decisions to AI Agents
As the use of artificial intelligence (AI) becomes more widespread, machine systems are increasingly expected to exhibit a form of artificial morality. This means that machines are not only being tasked with processing ethically relevant information, but are also being built to make morally significant decisions directly: for example, aiding decisions about life support, criminal sentencing, or the allocation of scarce medical resources. Building on rapid advances in AI, such “moral machines” have even been proposed as “Artificial Moral Advisors” – systems that give moral advice and help improve human moral decision making.
“Multi-Dimensional Trust in Moral Machines” is an EPSRC-funded research project that brings together researchers from the University of Kent and the University of Exeter to apply the latest theories and methods from moral psychology to understand when, why, and how we trust such moral machines, and whether that trust is warranted in the first place.
The project is led by Dr Jim Everett, from the University of Kent’s School of Psychology, with support from Dr Edmond Awad at the University of Exeter. On this site you can read more about the project, follow the latest news, and find out how to contact us for further information.
Our thanks go to the Engineering and Physical Sciences Research Council (grant number EP/Y00440X/1) for supporting this work.