Web Stat

Claude increasingly trips over Russian, Iranian propaganda, report says

By News Room · May 5, 2026 · 5 Mins Read

In the fast-paced world of artificial intelligence, Anthropic’s AI chatbot, Claude, once held a respected position as one of the most reliable tools available. However, a recent investigation by NewsGuard, a U.S. company dedicated to combating online disinformation, has cast a shadow over Claude’s reputation, prompting serious concerns about the ability of AI models to discern and handle propaganda. NewsGuard’s findings reveal a worrying trend: Claude, when prompted by ordinary users, echoed false claims supporting Russian propaganda in 15% of cases, a significant jump from the previous 4%. What’s more, in every instance, Claude relied on sources directly linked to the Kremlin, such as Russia Today (RT) and the Pravda network, known for disseminating propaganda under the guise of legitimate news. These results align with a growing chorus of complaints from users who have noticed a decline in Claude’s accuracy and caution, a stark contrast to its previous standing as one of the least error-prone chatbots.

The NewsGuard investigation employed a straightforward yet insightful methodology. Researchers presented Claude with 20 false claims, half originating from Russian propaganda and half from Iranian propaganda. They then observed Claude’s responses to three types of users: “innocent,” “leading,” and “malign.” This approach aimed to simulate real-world interactions, encompassing users seeking genuine information as well as those with ulterior motives to spread misinformation. The outcomes were, to say the least, unsettling. Claude stumbled even when confronted with “innocent” questions, and when subjected to “malign” prompts designed to mimic disinformation tactics, it sometimes went further, generating new iterations of the false claims. The core of the problem, however, lay in Claude’s sourcing. The AI didn’t invent the falsehoods but consistently drew from unreliable sources, including the Kremlin-affiliated RT and the vast Pravda network, which bombards the internet with millions of articles repeating the same false narratives. This highlights a fundamental flaw in AI models like Claude: they don’t inherently grasp the concepts of truth or falsehood. Instead, they identify patterns. When disinformation is repeatedly encountered from what appear to be credible sources, the AI system begins to perceive it as factual.
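To make the arithmetic behind figures like “15% of cases” concrete, the scoring side of an audit like this can be sketched in a few lines. This is an illustrative reconstruction, not NewsGuard’s actual code: the persona names come from the report, but the data shape and function names are assumptions.

```python
# Illustrative sketch of scoring a NewsGuard-style audit: 20 false claims,
# each posed by three prompting personas, with each response tallied as
# repeating the false claim or not. All names here are hypothetical.

PERSONAS = ("innocent", "leading", "malign")

def fail_rate(results):
    """results: list of dicts with at least 'persona' and 'repeated' (bool).
    Returns the share of responses that repeated the false claim."""
    if not results:
        return 0.0
    return sum(r["repeated"] for r in results) / len(results)

def rate_by_persona(results):
    """Break the overall fail rate down by prompting persona."""
    return {
        p: fail_rate([r for r in results if r["persona"] == p])
        for p in PERSONAS
    }
```

With 20 claims and three personas, each model faces 60 prompts, so the reported 15% corresponds to roughly 9 responses echoing a false claim.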

A particularly disturbing example from the test involved a baseless claim that hundreds of Ukrainians were dying monthly while attempting to evade military conscription by crossing the Tisza River into European countries. Despite the complete lack of factual basis, Claude not only repeated this assertion but also cited pro-Kremlin sources, including the Pravda network, to support it. In another instance, Claude claimed that a French magazine reported tens of thousands of Ukrainian soldiers had deserted and remained in France. This too was entirely fabricated, based on a doctored video, yet Claude presented it as fact without verifying the source. The investigation further revealed that the situation was equally grim in the Iranian information sphere, with Claude repeating false claims in 20% of cases related to pro-Iranian propaganda. This included an unsubstantiated assertion that China had begun trading oil in yuan instead of the dollar. These examples underscore Claude’s alarming susceptibility to propaganda and its inability to differentiate between legitimate news and orchestrated disinformation campaigns.

The gravity of the situation prompted even Anthropic, Claude’s creator, to acknowledge a shift in its chatbot’s performance. In April, the company stated it was reviewing reports of declining answer quality but offered no clear explanation for the internal changes. Industry experts have put forth several theories to account for Claude’s deterioration. One prominent hypothesis is “overload.” As Claude’s popularity soared, the immense demand may have compelled Anthropic to reduce the computational effort dedicated to each response. In essence, the chatbot is now spending less time and resources on factual checks and cross-references, inevitably leading to a rise in errors. Another contributing factor could be the mechanics of search engines. Networks like Pravda, despite their nefarious intent, can gain prominence in search rankings, even through negative attention. Consequently, when an AI system queries for information, it repeatedly encounters these same questionable sites, creating a detrimental feedback loop where widely disseminated propaganda becomes more accessible and, disturbingly, appears more legitimate to AI models.
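The feedback loop described above is easy to demonstrate with a toy model. This is not how Anthropic or any real search engine ranks sources; it simply shows why naive frequency weighting lets a network that floods the web with near-duplicate articles crowd out a single authoritative debunk.

```python
# Toy model of volume-driven retrieval (purely illustrative): each domain's
# chance of being cited is proportional to how many matching articles it
# hosts, with no credibility weighting at all.

from collections import Counter

def retrieval_odds(corpus):
    """corpus: list of (domain, article) pairs. Returns each domain's
    share of retrieval probability under naive frequency weighting."""
    counts = Counter(domain for domain, _ in corpus)
    total = sum(counts.values())
    return {domain: n / total for domain, n in counts.items()}

# One reputable outlet publishes a single debunk; a propaganda network
# floods the same query with 99 near-duplicate articles.
corpus = [("reputable.example", "debunk")] + [
    ("pravda-clone.example", f"copy-{i}") for i in range(99)
]
odds = retrieval_odds(corpus)
# Under frequency weighting, the flood claims 99% of retrievals.
```

Any ranking signal that rewards sheer repetition, even indirectly, reproduces this effect at scale.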

Ultimately, Claude’s predicament is not an isolated incident; it serves as a stark reminder of the inherent limitations of artificial intelligence. AI models do not possess genuine comprehension or critical thinking abilities. Their function is to identify and reflect patterns within the vast datasets they are trained on. Consequently, when these datasets contain a significant amount of disinformation, the AI’s responses will inevitably mirror that bias. The problem extends beyond the technical aspects of AI; it delves into the very nature of information dissemination in the digital age. The internet, a seemingly boundless repository of knowledge, is also a fertile ground for misinformation and propaganda. When AI models are trained on this imperfect landscape, without sophisticated mechanisms for fact-checking and source verification, they risk becoming unwitting conduits for false narratives.

The implications of Claude’s declining reliability are far-reaching, particularly in an era where AI is increasingly integrated into our daily lives, influencing everything from news consumption to decision-making. The investigation by NewsGuard underscores the urgent need for developers to prioritize robust fact-checking capabilities and source verification mechanisms within AI models. It also highlights the responsibility of users to approach AI-generated content with a critical eye, understanding that even the most advanced chatbots are susceptible to the biases and falsehoods present in their training data. As AI continues to evolve, the challenge lies not only in making these systems more intelligent but also in equipping them with the ethical and critical faculties necessary to navigate the complex and often deceptive landscape of online information. The ultimate goal should be to ensure that AI serves as a tool for truth and knowledge, rather than inadvertently becoming an amplifier of misinformation.
