Web Stat

Meta Study Reveals GenAI Accounts for Under 1% of Election-Related Misinformation in 2024 – Firstpost

By News Room | December 5, 2024 | 3 min read

In a recent analysis, Meta examined the role of generative AI in spreading misinformation during the major elections of 2024 across 40 countries, including India, the US, and EU member states. Contrary to earlier concerns that AI would supercharge disinformation campaigns, the study found that AI-generated content accounted for less than one percent of flagged election-related posts on Meta’s platforms. This suggests that the company’s current safeguards have been effective in limiting misuse of AI technology, reducing the overall risk of misinformation during critical electoral periods.

Nick Clegg, Meta’s president of global affairs, addressed the findings, indicating that although there were some instances of malicious AI usage, the overall volume was minimal. He emphasized the sufficiency of the company’s existing policies and tools in managing the risks linked to AI content across various platforms such as Facebook, Instagram, WhatsApp, and Threads. The findings are particularly reassuring, as they highlight the effectiveness of preventative measures already in place, designed to combat disinformation and maintain the integrity of electoral processes in multiple regions.

In addition to addressing AI-related misinformation, Meta reported significant progress in countering election interference more broadly. The company successfully dismantled over 20 covert influence campaigns, classified as Coordinated Inauthentic Behavior (CIB) networks. While these operations did utilize generative AI for some content generation, Meta concluded that the technology did not notably amplify the scale or effectiveness of these campaigns, demonstrating the company’s proactive stance in preventing such disruptive activities.

Meta’s monitoring also extended to user activity: nearly 600,000 attempts to create deepfake images of political figures using its AI image generator, Imagine, were blocked, including fabricated images of prominent leaders such as President-elect Trump and President Biden. These figures highlight the scale of attempted misuse of AI tools during critical events, affirming the need for ongoing vigilance against efforts to manipulate public opinion through deceptive imagery.

Reflecting on the experiences of content moderation during the COVID-19 pandemic, Clegg acknowledged that Meta may have initially adopted an excessively strict approach, resulting in the removal of many harmless posts. He noted that the uncertainty of the period contributed to the company’s high error rate in moderation, which unfortunately impacted user expression. This recognition underscores the challenges that Meta faces in balancing effective content moderation while safeguarding the free expression that it aims to promote.

The overall conclusions of this study indicate that the anticipated threat of AI-generated disinformation, particularly in the context of elections, may have been overstated for the moment. Through robust monitoring and strategic policy enforcement, Meta has managed to maintain a relatively controlled environment regarding AI misuse. However, the company recognizes the ongoing challenges posed by increasingly sophisticated AI tools, underscoring the importance of refining their approaches to uphold user trust and platform integrity in the future.
