The increasing reliance on artificial intelligence for health information is raising serious concerns among medical professionals and public health officials. Recent investigations have revealed that AI-powered tools, including Google's AI Overviews and chatbots such as ChatGPT, frequently provide inaccurate, misleading, and potentially dangerous health advice.
A Guardian investigation in January 2026 uncovered several instances of inaccurate health information in Google's AI summaries. These findings prompted Google to remove some AI Overviews for specific search terms, including queries about normal liver function test ranges. In one particularly alarming case, Google wrongly advised people with pancreatic cancer to avoid high-fat foods, advice that runs directly counter to established medical guidance and could increase the risk of mortality.
Further analysis by the search engine optimization platform SE Ranking revealed a troubling trend: Google's AI Overviews are increasingly citing YouTube as a primary source for health-related queries. A study analyzing over 50,000 health searches in Germany found that YouTube accounted for 4.43% of all citations within AI Overviews, surpassing reputable consumer health portals such as netdoktor.de and significantly exceeding citations of the established medical reference MSD Manuals. Only 34.45% of citations came from reliable medical sources, with government health institutions and academic journals together accounting for a mere 1%.
This reliance on a general-purpose video platform like YouTube, where content is uploaded by individuals with varying levels of medical expertise – including life coaches and wellness influencers – poses a significant risk to public health. The lack of rigorous vetting and quality control on platforms like YouTube contrasts sharply with the standards expected of medical publishers.
The shift towards AI chatbots for health advice raises further concerns. OpenAI reports that approximately 40 million people worldwide now use ChatGPT for healthcare advice daily. A 2026 Health and Media Tracking Survey from the Canadian Medical Association (CMA) found that roughly half of Canadians consult Google AI summaries and ChatGPT about their health. However, studies indicate that individuals who rely on AI chatbots for self-diagnosis and treatment are five times more likely to experience adverse effects than those who do not.
Researchers at the University of Waterloo found that OpenAI's GPT-4 gave incorrect answers approximately two-thirds of the time when presented with open-ended health queries. Harvard researchers discovered that chatbots often fail to challenge nonsensical queries, such as a request to compare the safety of acetaminophen and Tylenol (two names for the same medication), prioritizing a helpful, agreeable response over an accurate one. That built-in compliance can turn a chatbot into a conduit for misinformation.
The appeal of AI-powered health information stems from growing difficulty in accessing timely medical care. Long waits to see specialists and a shortage of family doctors are driving people to seek quick, accessible answers online. However, the "confident authority" with which AI presents information, even when it is inaccurate, can be particularly dangerous. The risk that individuals will follow this information uncritically, without further research or consultation with a healthcare professional, is a significant cause for concern.
While acknowledging the convenience and accessibility of AI-powered health tools, it is crucial to approach this information with a critical eye. The current landscape demonstrates a clear need for improved quality control, more rigorous sourcing of information, and a greater emphasis on directing individuals towards reliable medical resources. The risks associated with inaccurate health advice are substantial, and safeguarding public health requires a cautious and informed approach to the use of AI in healthcare.
