LA Protests: Chatbot Disinformation
AI Chatbots Confidently Spread Misinformation, Worrying Experts
Updated June 13, 2025
Artificial intelligence chatbots are confidently dispensing misinformation, raising concerns about their impact on public discourse, particularly regarding elections. Even as platforms scale back fact-checking programs, these AI tools often present inaccurate information with unwavering certainty.
Leah Feiger, senior politics editor at WIRED, noted instances where chatbots misattributed images, claiming they were from Afghanistan when they actually depicted protests in Los Angeles. This highlights a broader issue of AI hallucination, where chatbots generate false information and present it as fact.
The problem is compounded by the fact that many platforms have systematically dismantled their fact-checking programs, allowing more unchecked content to circulate. This creates an environment where AI-generated misinformation can thrive, potentially influencing public opinion and electoral outcomes. The spread of misleading AI-generated videos further exacerbates the challenge, as these videos can quickly accumulate views and disseminate false narratives.
Zoë Schiffer observed that while tools like Google Search have their flaws, it is often easier to spot unreliable sources in search results than to catch a chatbot confidently presenting false information. The unearned confidence of these systems makes it harder for users to identify inaccuracies.
It’s really concerning. I mean, it’s hallucinating. It’s fully hallucinating, but with the swagger of the drunkest frat boy that you’ve ever regrettably been cornered by at a party in your life.
Leah Feiger, WIRED
A study by the Tow Center for Digital Journalism at Columbia University found that AI search tools struggle to admit when they lack knowledge, instead offering incorrect or speculative answers. This tendency to deliver confident yet inaccurate responses raises serious questions about the reliability of these tools as sources of information.
What’s next
As future elections approach, experts emphasize the need for increased media literacy and critical thinking skills to combat the spread of AI-generated misinformation. Further research and development are needed to improve the accuracy and transparency of AI chatbots, ensuring they serve as reliable sources of information rather than conduits for falsehoods.
