AI in Healthcare: Keep Humans in the Loop
- As artificial intelligence expands, health systems must strategically balance its potential with safety and efficacy, according to John Halamka, MD, president of Mayo Clinic Platform.
- Halamka acknowledged the complexities of AI adoption, urging the healthcare industry to focus on practical applications.
- A key challenge is reconciling the tech sector's rapid innovation with healthcare's commitment to patient safety.
Human Oversight Needed for AI in Healthcare, Says Halamka
Updated February 27, 2025
As artificial intelligence expands, health systems must strategically balance its potential with safety and efficacy, according to John Halamka, MD, president of Mayo Clinic Platform. Speaking at the ViVe 2025 Conference, Halamka emphasized the need for transparency and risk management in AI implementation.
Halamka acknowledged the complexities of AI adoption, urging the healthcare industry to focus on practical applications. He stressed that AI’s value lies in solving specific operational and clinical challenges, rather than simply following trends.
A key challenge is reconciling the tech sector’s rapid innovation with healthcare’s commitment to patient safety. Halamka highlighted Mayo Clinic’s iterative approach, starting with small-scale trials before wider deployment. He cited their remote patient monitoring program, which began with a single patient and now serves 50,000, demonstrating comparable or better outcomes than conventional hospital care, with lower costs and higher patient satisfaction.
Halamka differentiated between predictive AI, which identifies patterns, and generative AI, which creates new content. While predictive AI is already aiding in areas like radiology, generative AI poses risks due to potential inaccuracies. He advocated for AI-augmented systems where clinicians retain decision-making authority.
To ensure reliability, Mayo Clinic has developed foundation models with built-in safeguards and human oversight, assisting with tasks like X-ray interpretation and genomic analysis.
Transparency is crucial for building trust, Halamka noted. Tools like AI “data cards” and “model cards” provide insights into how a model was developed and where its limitations lie, enabling clinicians to assess its suitability for diverse patient populations. He cautioned against applying algorithms developed on one population to another without proper testing.
Effective AI governance requires frameworks for evaluating and monitoring AI tools. Mayo Clinic uses a container system alongside the EHR to monitor every transaction, allowing for continuous performance assessment and risk mitigation.
The increasing integration of AI raises legal and ethical questions, Halamka said, including whether failing to use AI could one day be considered malpractice. He cited Mayo Clinic’s AI-driven pancreatic cancer detection algorithm, which identifies cancer earlier than human radiologists, and an AI model for endoscopy that substantially improves polyp detection rates.
Halamka identified technical barriers to AI adoption, including data access, workflow integration, and governance policies. Cultural barriers include building trust among clinicians, ensuring transparency, and addressing regulatory uncertainty.
He predicted that by ViVe 2026, discussions around AI in healthcare will focus on tangible outcomes, such as improved patient care, earlier diagnoses, and lower costs.
Halamka advised starting small, prioritizing transparency, balancing speed with safety, building clinician trust, and preparing for AI to become part of clinical standards.
“You can move fast and break things, as long as you understand the risks of breaking them,” Halamka said.
What’s next
As AI continues to evolve, healthcare organizations must prioritize responsible implementation, focusing on transparency, governance, and human oversight to ensure patient safety and build trust in these emerging technologies.
