AI Faces Skepticism in End-of-Life Decisions as People Favor Human Judgment
Updated May 27, 2025

A recent international study, spearheaded by the University of Turku, Finland, reveals that individuals are less likely to accept euthanasia decisions when made by artificial intelligence or robots compared to human doctors. The research, published in Cognition, explores moral judgments concerning end-of-life care decisions for patients in comas.
Researchers in Finland, Czechia, and Great Britain presented participants with medical case scenarios. The study highlights the “Human–Robot moral judgment asymmetry effect,” where AI and robots are held to a higher standard than humans, according to University Lecturer Michael Laakasuo, the project’s principal investigator.
Laakasuo said that pinpointing the specific decisions and situations where this asymmetry emerges remains a scientific puzzle. The team investigated various situational factors influencing the acceptance of moral decisions involving AI in health care.
Perceived Competence Affects Acceptance of AI Euthanasia Decisions
The study found that participants were less accepting of euthanasia decisions made by AI or robots, regardless of whether the machine played an advisory role or acted as the primary decision-maker. However, no such judgment asymmetry arose when the decision involved maintaining life support.
Interestingly, participants generally favored decisions to turn off life support. The discrepancy in acceptance disappeared when patients were awake and explicitly requested euthanasia, such as by lethal injection.
The research also suggests that this moral judgment asymmetry stems, in part, from the perception that AI systems are less competent decision-makers than their human counterparts.
Laakasuo noted that AI’s perceived inability to adequately explain and justify its decisions contributes to the reluctance in accepting AI in clinical roles.
Experiences with AI Influence Decision-Making Acceptance
Laakasuo emphasized the importance of patient autonomy in the application of AI in health care. He added that understanding people’s experiences of and reactions to AI is crucial for ensuring future systems are seen as morally acceptable, especially as AI’s role in society and medical care expands.
“Our research highlights the complex nature of moral judgments when considering AI decision-making in medical care. People perceive AI’s involvement in decision-making very differently compared to when a human is in charge,” Laakasuo said.
What’s Next
Future research will likely explore methods to enhance trust and acceptance of AI in medical decision-making, focusing on transparency and explainability.
