Trust in "Moral" Machines

Trust in AI is a moral phenomenon

Artificial intelligence is being used to help with things that matter: not just making intelligent decisions, but making moral ones. But what does it mean to trust AI when the stakes are moral? This is the question that drives our research.

We study “moral machines” in the broadest sense. We are interested not only in AI used for morally consequential decisions, or even to give moral advice directly, but in the many ways morality shapes how people think about and trust artificial intelligence, and in the moral consequences of doing so. We are motivated by the idea that trust in AI is, at its core, a moral phenomenon.

Our interests in trust in AI as a moral phenomenon are broad-ranging, encompassing:

  • How morality shapes trust in AI. People do not just care about an AI system’s performance – what it can do – but also about its morality – how it does it, and why. In our research, we consider the broad range of AI being used around the world and examine how perceptions of an AI system’s ethicality shape trust alongside its performance.
  • How people trust AI in moral contexts. From healthcare triage to policy setting to “artificial moral advisors”, we investigate how people trust AI that is designed specifically to provide moral recommendations.
  • How using AI shapes perceptions of moral character. Using or trusting AI can change how others judge our moral character. We examine how people make inferences about a human user’s competence and morality from their use of AI, especially in socio-relational tasks.
  • How trust in AI is not morally neutral. Trust never happens in a vacuum, and technology is not value-neutral. We analyse how political values, cultural narratives, power structures, and visions of the future shape how societies come to trust or mistrust moral machines.

Across this five-year project, our approach blends empirical psychology with ethical analysis. We combine large-scale surveys, behavioural experiments, and cross-cultural methods with conceptual and normative work that interrogates the moral assumptions built into the design, use, and governance of AI. Beyond describing how people trust AI, we examine how trust discourse shapes responsibility, power, and the ethical landscapes in which AI operates. By linking the psychology of trust with debates in AI ethics, political theory, and moral philosophy, we aim to understand not only how people judge moral machines, but what kinds of relationships – and what kind of society – we want to cultivate as AI becomes more deeply woven into human life.

The project is led by Dr Jim Everett, from the University’s School of Psychology, with support from Dr Edmond Awad from Exeter University.  This five year project was awarded as an ERC Starting Grant and is now a EPSRC-funded research project. You can read more about the project,  latest news and find out how to contact us for further information about the project. Our thanks go to the Engineering and Physical Sciences Research Council (grant number EP/Y00440X/1) for supporting this work, and to the European Research Council for awarding this grant as an ERC Starting Grant under Horizon Europe.