Trust in Moral Machines

Consulting & Advisory Work

Artificial intelligence is transforming how organisations operate, make decisions, and interact with the public. Many organisations are already exploring how AI can increase efficiency or enhance performance. But anticipating whether these systems will actually be trusted requires thinking about the psychology behind how they are understood, evaluated, and judged by the people who use them, and by those affected by them.

This is where our expertise is distinctive.

While organisational discussions often focus on specific tools, and AI ethics discussions focus on technical standards or legal compliance, our work examines the behavioural science behind how people actually accept, trust, and morally evaluate AI. It is these human responses that shape reputational, operational, and ethical outcomes just as much as technical performance does.

As Project Lead, Jim is available for selective consulting and advisory work that brings these research insights directly to organisations. Drawing on evidence from this large-scale research programme, Jim offers guidance that helps organisations anticipate how their AI systems will be perceived, understand psychological risks, and design technologies that align with human values and expectations.
Alongside his academic responsibilities, Jim takes on a small number of consulting and advisory engagements each year and is happy to discuss whether his expertise may be a good fit for your needs.

What we can help with:

Jim offers consulting and advisory support across the following areas:

  • Psychological Drivers of AI Trust: Clear, evidence-based insight into how people perceive AI decisions, attribute motives, and decide whether to trust or reject AI systems.
  • Moral Psychological Risk Assessment: Identifying the moral intuitions, fairness expectations, and ethical concerns that shape public and stakeholder acceptance — before they become reputational or regulatory problems.
  • Human–AI Interaction Testing: Designing or advising on studies to evaluate user expectations, perceived fairness, discomfort, or trust breakdowns in specific AI products or decisions.
  • Ethical & Responsible Innovation Guidance: Support in integrating transparency, accountability, and ethical reasoning into design and communication strategies, grounded in how users actually reason morally.
  • Training & Workshops: Tailored sessions that introduce teams to the psychological and ethical foundations of trustworthy AI, with practical tools for anticipating human responses.
  • Media, Interviews, and Public Communication: Clear, accessible commentary on the psychology and ethics of AI for news media, podcasts, documentaries, and public outreach, grounded in current research.

Jim A.C. Everett

Dr Jim A.C. Everett is the Project Lead, combining behavioural science with applied ethics to study how people make moral judgments about both humans and machines. His work examines how perceptions of morality and trust shape the adoption, regulation, and governance of AI, and how psychological insights can support responsible innovation.

Jim is a multi-award-winning psychologist with degrees in psychology and philosophy. He holds a PhD from the University of Oxford and has completed research fellowships at Yale University and Harvard University. He currently leads the 5-year Trust in Moral Machines project at the University of Kent, where his team investigates trust in AI as a fundamentally moral phenomenon.