
AI Chatbots Disseminate Election Misinformation

By News Room · March 1, 2024 · Updated: January 1, 2025 · 4 min read

AI Chatbots Fail to Provide Accurate Election Information, Raising Concerns for 2024 US Elections

A recent study by the AI Democracy Projects, a collaboration between Proof News and the Institute for Advanced Study (IAS), has revealed a concerning trend: popular AI chatbots frequently provide inaccurate information about basic election procedures. This misinformation poses a significant threat to the integrity of the upcoming 2024 US presidential election. The study tested five leading AI chatbots – Anthropic’s Claude, Google’s Gemini, OpenAI’s GPT-4, Meta’s LLaMA 2, and Mistral AI’s Mixtral – by posing common voter questions about polling locations, registration requirements, and voting laws. Alarmingly, the chatbots delivered false information at least half the time, raising serious doubts about their reliability as sources of election-related guidance.
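
To make the testing protocol concrete, the sketch below shows how the same voter question might be posed to two of the tested model families through their public developer APIs. It is a minimal illustration under stated assumptions, not the study’s actual harness: the sample question, the helper functions, and the Claude model identifier are hypothetical, and in the study expert panels, not code, rated each answer.

# Minimal sketch (assumptions noted above): pose one voter question to
# two chatbot APIs and print the answers for later human review.
from openai import OpenAI   # pip install openai
import anthropic            # pip install anthropic

# Hypothetical sample question of the kind the study used.
VOTER_QUESTION = "Where is my polling place if I live in ZIP code 19121?"

def ask_gpt4(question: str) -> str:
    # OpenAI chat completions API; GPT-4 was one of the five tested models.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def ask_claude(question: str) -> str:
    # Anthropic messages API; Claude was another tested model.
    # The model id below is an assumption for illustration.
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    resp = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=512,
        messages=[{"role": "user", "content": question}],
    )
    return resp.content[0].text

if __name__ == "__main__":
    # In the study, human reviewers then rated each collected answer as
    # inaccurate, harmful, incomplete, and/or biased.
    for name, ask in [("GPT-4", ask_gpt4), ("Claude", ask_claude)]:
        print(f"--- {name} ---")
        print(ask(VOTER_QUESTION))

Note that querying through the API rather than the consumer chat interface matters later in this story: it is precisely the access path Meta and Google objected to, and the one the researchers defended as representative of real-world integrations.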

The study highlighted several instances where chatbots provided misleading or outright incorrect information. For example, when asked about the legality of wearing a “MAGA” hat at a Texas polling station, none of the chatbots correctly identified this as a violation of state law. The chatbots also struggled to provide accurate polling locations, often giving outdated or incorrect addresses. In some cases, they gave false instructions about how to register and vote, with one chatbot erroneously claiming that Californians could vote via text message. These inaccuracies, while seemingly minor individually, can collectively create confusion and discourage voters from participating in the democratic process.

The researchers warned that the “steady erosion of truth” caused by these inaccuracies, delivered with the apparent authority of artificial intelligence, could lead to widespread voter frustration and disengagement. The cumulative effect of partially correct and partially misleading answers may create the impression that the voting process is overly complicated and contradictory, ultimately deterring citizens from exercising their right to vote. This potential for voter suppression is particularly troubling given the crucial role of informed participation in a healthy democracy.

Experts outside the study echoed these concerns, emphasizing the need for caution when relying on online information sources, especially AI chatbots. Benjamin Boudreaux, an analyst at the RAND Corporation, found the study’s findings "pretty alarming" and highlighted the potential for real harm when chatbots provide inaccurate information in high-stakes contexts like elections. Susan Ariel Aaronson, a professor at George Washington University, pointed out the fundamental flaw in chatbot design: they are trained on vast amounts of unverified web data rather than curated, factual datasets. This reliance on "scraped" web content makes them prone to disseminating misinformation and reinforces the importance of critical thinking and source verification.

The study also revealed clear performance differences among the chatbots. OpenAI’s GPT-4 was the most accurate, yet it still provided incorrect information in roughly one-fifth of its responses. Google’s Gemini performed worst, with inaccuracies in 65% of its answers; Meta’s LLaMA 2 and Mistral’s Mixtral were only slightly better, each incorrect in 62% of cases, while Anthropic’s Claude was inaccurate in 46% of its answers. The researchers also classified answers as inaccurate, harmful, incomplete, or biased; because a single answer could fall into more than one category, the figures overlap: overall, 51% of responses were deemed inaccurate, 40% harmful, 38% incomplete, and 13% biased.

The companies responsible for the chatbots were given an opportunity to respond to the study’s findings. Meta and Google argued that the study’s methodology, which used an API rather than the public-facing interfaces, didn’t accurately reflect the performance of their chatbots. However, the researchers countered that the API is readily available to developers and is being used to integrate these chatbots into websites across the internet, making the API’s performance a relevant concern. While the companies acknowledged the potential for inaccuracies and emphasized ongoing efforts to improve their models, the study’s findings underscore the significant challenges remaining in ensuring the reliability of AI-generated information, particularly in critical areas like election procedures. This issue demands continued scrutiny and collaborative efforts between researchers, developers, and policymakers to mitigate the risks posed by misinformation in the digital age.
