Abortion Pill Safety Decisions by FDA Were Science-Based, New JAMA Study Finds

by Dr. Jennifer Chen

VOLUME 39


Highlights

A new study found that Food and Drug Administration (FDA) decisions about the abortion pill mifepristone consistently followed scientific evidence, even as misleading claims about the drug’s safety continue to shape public understanding.

Google removed some health AI summaries after a Guardian investigation reported that AI-generated summaries for search results about multiple health topics, including cancer screening, liver disease, and mental health conditions, shared false and potentially dangerous health details. While the full extent of inaccurate health information in these AI-generated summaries is unclear, patient advocacy organizations described the examples as “dangerous” and “alarming.”


What We’re Watching

Claims That the FDA Failed to Properly Evaluate Mifepristone Persist as New Study Finds Decisions Were Science-Based

As FDA leadership initiates a new safety review of mifepristone following claims by abortion opponents that the drug was not adequately evaluated before it was granted approval, a new study published in JAMA examining more than 5,000 pages of internal FDA documents from 2011 to 2023 finds that agency decisions were consistently driven by scientific evidence, not politics. The study found that agency leaders almost always followed the recommendations of career scientists, repeatedly reviewed safety data, and reaffirmed that mifepristone is safe while making cautious changes to access. Despite this detailed analysis, claims that the drug was not properly evaluated persist.

U.S. Withdraws from International Health Organizations as Trust in Public Health Institutions Declines

The U.S. withdrawal from the World Health Organization took effect this month, with WHO Director-General Tedros Adhanom Ghebreyesus warning that the decision “makes the U.S. unsafe” and “makes the rest of the world unsafe” by cutting access to disease surveillance and emergency response systems. The withdrawal is part of broader U.S. disengagement from international health efforts, including the recent proclamation that the U.S. is withdrawing from 31 U.N. entities, such as the U.N. Population Fund, the lead U.N. agency focused on global population and reproductive health. Public opinion data suggest the decision lands amid declining and polarized public confidence in the WHO itself. According to an April 2024 poll from Pew Research, about six in ten U.S. adults believed the U.S. benefitted from its membership in the WHO, fewer than the share who said the same in 2021, including an 8 percentage point decrease in the share who said the U.S. benefitted a “great deal.” These concerns reflect institutional and diplomatic trust and intersect with broader trust challenges in health. The withdrawal occurs as the U.S. public’s trust in federal health agencies has continued to erode. As global health partnerships change, health communicators may benefit from tracking changes in trust in health agencies to better understand where audiences turn for health information.

Fraudulent Ads on Social Media Continue Despite Enforcement Measures

Fraudulent advertising on social media continues to expose users to misleading and dangerous health claims, affecting how people assess and trust health information online. In early January, the Better Business Bureau (BBB) issued a “scam alert” about fraudulent ads using AI-generated videos of celebrities to promote fake weight-loss products, including unauthorized endorsements for supplements claiming to be GLP-1 medications. The BBB reported receiving more than 170 reports about one such product, with customers spending hundreds of dollars after seeing the fraudulent ads. The use of celebrity likenesses and medical terminology may increase the perceived credibility of these claims, even when the products are not legitimate treatments. The persistence of these false health advertisements is part of broader challenges platforms face in content moderation, with recent investigations finding thousands of deceptive ads remaining active despite prior enforcement. A Reuters investigation also reported that Meta allowed a high number of ads from Chinese partners, including ads for fake health supplements, prioritizing revenue while some enforcement measures were delayed or paused. As misleading health advertising continues, KFF will continue monitoring the types of health information that the public reports seeing and trusting on social media to better inform health communicators of when to intervene.


AI & Emerging Tech

Google’s AI Overviews in Search Results May Give Harmful Health Information

What’s happening?

An investigation conducted by The Guardian found that the artificial intelligence (AI)-generated summaries that appear at the top of Google search results, called “AI Overviews,” at times provided inaccurate and misleading information about health topics, potentially giving users false reassurance about serious illness. The Guardian found that the overviews wrongly advised people with pancreatic cancer to avoid high-fat foods, provided misleading information about liver blood test results, and incorrectly identified pap smears as screenings for vaginal cancer. Since the investigation was published, Google removed some AI health summaries tied to specific search queries, but similar prompts can still trigger AI-generated results, and broader risks from AI-produced health information remain.

How often do people encounter and trust these overviews?

  • July 2025 polling from the Annenberg Public Policy Center found that nearly two-thirds of Americans who search for health information online had seen AI-generated responses at the top of search results, and most who see these responses consider them at least somewhat reliable, though just 8% consider them “very reliable.” Among adults who have seen AI-generated responses when searching for health information online, about three in ten said the AI responses provided them the answer they needed either “always” or “often.” At the same time, most adults who see these AI-generated responses to health inquiries said they either always or often continue searching by following links to specific websites or other resources.
  • A qualitative study published in the Journal of Medical Internet Research found that participants often skipped these overviews in favor of traditional search results, with some expressing skepticism about them because of a lack of sourcing. Even participants who read the AI-generated summaries continued scrolling to review other results rather than stopping their search, suggesting that some users are adopting a “trust but verify” approach to AI for health information.

The continued prevalence of AI-generated health information, which can contain misleading and harmful advice, suggests a need for better safeguards from technology companies and clear guidance from health communicators about how to critically evaluate AI-generated health information.
