AI Chatbots Sway Voters Better Than Political Ads

December 4, 2025 · Lisa Park · Tech
Original source: technologyreview.com


AI Chatbots and Political Persuasion: A Deep Dive

Table of Contents

  • AI Chatbots and Political Persuasion: A Deep Dive
    • At a Glance
    • The Accuracy Gap: Right-Leaning Bots Spread More Misinformation
    • The Power of Persuasion: Facts and Evidence Drive Influence
    • Why LLMs Are More Persuasive Than Traditional Ads
    • Editor’s Analysis

New research reveals the persuasive power of large language models (LLMs) in political discourse, alongside concerning trends in the accuracy of AI-generated political statements.

Published February 15, 2024

At a Glance

  • What: Studies examining the accuracy and persuasive capabilities of AI chatbots in political contexts.
  • Where: Research conducted across the United States and the United Kingdom.
  • When: Findings published in February 2024, with ongoing research.
  • Why It Matters: AI chatbots are increasingly capable of influencing political opinions, raising concerns about misinformation and manipulation.
  • What’s Next: Continued research into mitigating bias and improving the accuracy of LLMs, alongside discussions about potential regulations.

The Accuracy Gap: Right-Leaning Bots Spread More Misinformation

A study published in Nature found that AI chatbots used in political discussions frequently generated inaccurate claims. Notably, chatbots advocating for right-leaning candidates were considerably more likely to make false statements than those supporting left-leaning candidates. Researchers analyzed chatbot interactions across the United States, the United Kingdom, and Canada.

According to David Costello, a researcher involved in the study, this disparity reflects existing patterns in political interaction. Studies of partisan social media posts have shown that “political communication that comes from the right, which tends to be less accurate,” is often reproduced by LLMs, which are trained on massive datasets of human-written text. The models essentially mirror the biases and inaccuracies present in their training data.

The Power of Persuasion: Facts and Evidence Drive Influence

A separate study, published in Science, investigated the factors that make LLMs so persuasive. Researchers deployed 19 LLMs to interact with nearly 77,000 participants in the UK on over 700 political issues. They varied factors like computational power, training techniques, and rhetorical strategies to determine what drove persuasive outcomes. The most effective approach, the study found, was to instruct the models to support their arguments with facts and evidence, and then to further train them using examples of persuasive conversations.
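The winning recipe described above — prompting a model to ground its arguments in facts and evidence for a given stance — can be illustrated with a minimal sketch. The prompt wording, function name, and issue/stance strings below are hypothetical placeholders, not the study's actual materials.

```python
def build_persuasion_prompt(issue: str, stance: str) -> list:
    """Assemble a chat-style message list instructing an LLM to argue
    a stance using facts and evidence (illustrative only)."""
    system = (
        "You are discussing a political issue with a user. "
        f"Argue in favor of the position: {stance!r}. "
        "Support every claim with concrete facts, statistics, "
        "and cited evidence rather than emotional appeals."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"What do you think about {issue}?"},
    ]

# Example: prepare a fact-grounded prompt on a sample issue.
messages = build_persuasion_prompt(
    issue="a national carbon tax",
    stance="a carbon tax reduces emissions efficiently",
)
```

In the study's framing, messages like these would seed each conversation, with the model's replies further shaped by fine-tuning on examples of persuasive dialogue.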

The most persuasive model demonstrably shifted participants’ opinions, moving those who initially disagreed with a political statement an average of 26.1 percentage points toward agreement. “These are really large treatment effects,” stated Kobi Hackenburg, a research scientist at the UK AI Security Institute, who contributed to the project.

Why LLMs Are More Persuasive Than Traditional Ads

According to Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study, “One conversation with an LLM has a pretty meaningful effect on salient election choices.” LLMs surpass the effectiveness of traditional political advertisements because they generate content in real time and strategically tailor their arguments to the individual user. This dynamic interaction allows for a more personalized and potentially more impactful persuasive experience.

Editor’s Analysis

The findings from these studies underscore the growing influence of AI in the political landscape. While LLMs offer potential benefits for civic engagement, their susceptibility to bias and misinformation poses a significant threat to informed democratic processes. The ability of these models to shift opinions so dramatically highlights the urgent need for research into mitigating these risks and developing strategies for responsible AI deployment in political contexts. Further inquiry is needed to understand the long-term effects of LLM-driven persuasion and to establish ethical guidelines for their use.

– Lisa Park
