
AI Chatbots Suggest Chemotherapy Alternatives, Study Finds

April 20, 2026 Ahmed Hassan World
At a glance
  • Artificial intelligence chatbots frequently provide misleading or potentially harmful advice when asked about cancer treatment alternatives, according to a new study published in the journal JAMA Oncology on April 14, 2026.
  • Researchers from the University of California, San Francisco, and the Mayo Clinic tested five major publicly available AI chatbots — including versions of ChatGPT, Gemini, and Claude — by posing 100 common patient questions about chemotherapy alternatives.
  • Dr. Elena Rodriguez, an oncologist at UCSF Medical Center, warned that such advice could lead patients to delay or abandon proven treatments in favor of approaches with no clinical support.
Original source: nbcnews.com

Artificial intelligence chatbots frequently provide misleading or potentially harmful advice when asked about cancer treatment alternatives, according to a new study published in the journal JAMA Oncology on April 14, 2026.

Researchers from the University of California, San Francisco, and the Mayo Clinic tested five major publicly available AI chatbots — including versions of ChatGPT, Gemini, and Claude — by posing 100 common patient questions about chemotherapy alternatives for breast, lung, and colorectal cancers. In over 40 percent of responses, the chatbots suggested unproven or disproven therapies such as high-dose vitamin C, herbal supplements, or strict dietary regimens as replacements for evidence-based treatments like chemotherapy, immunotherapy, or radiation.

The study’s lead author, Dr. Elena Rodriguez, an oncologist at UCSF Medical Center, warned that such advice could lead patients to delay or abandon proven treatments in favor of approaches with no clinical support. “When someone facing a cancer diagnosis asks an AI chatbot for alternatives to chemotherapy, they may receive responses that sound confident and well-informed but are actually contradicted by decades of clinical research,” she said. “This creates a real risk of harm, especially for vulnerable patients seeking hope online.”

Among the most concerning patterns identified were chatbots recommending specific supplements — such as turmeric, mistletoe extract, or apricot kernels — as curative options, despite warnings from regulatory agencies like the U.S. Food and Drug Administration and the European Medicines Agency about their lack of efficacy and potential toxicity in cancer contexts. Some responses also made claims with no scientific basis, such as asserting that chemotherapy “poisons the body” or that natural therapies could “starve tumors.”

The researchers noted that while chatbots often included disclaimers advising users to consult a doctor, these were frequently buried in longer responses or phrased ambiguously. In fewer than 15 percent of cases did the AI clearly state that no alternative has been proven to replace chemotherapy for curative intent in established cancers.

In response to the findings, representatives from OpenAI, Google, and Anthropic said they are reviewing their models’ medical safeguards. A spokesperson for OpenAI stated that the company is “actively improving how our models handle high-risk medical queries” and working with medical experts to refine training data and response protocols. Similar commitments were made by Google and Anthropic, though none provided specific timelines for updates.

Medical professionals and patient advocacy groups have called for stronger regulation of AI health advice. Dr. Samuel Njoroge, a cancer epidemiologist with the African Cancer Registry Network based in Nairobi, emphasized that the risks are particularly acute in regions with limited access to oncologists, where patients may rely more heavily on digital tools for medical guidance. “In many parts of Africa and South Asia, a smartphone with internet access may be the closest thing a person has to a doctor,” he said. “If that device gives dangerous advice, the consequences can be fatal.”

The study authors recommend that AI developers implement stricter guardrails for oncology-related queries, including mandatory disclaimers, integration with verified medical databases like the National Cancer Institute’s PDQ summaries, and clearer signaling when advice falls outside evidence-based guidelines. They also urge healthcare providers to proactively discuss online information-seeking behaviors with patients during consultations.
