
Caution Advised: DeepSeek AI Chatbot Propagates Disinformation from China, Russia, and Iran

By News Room | January 31, 2025 | 3 Min Read

DeepSeek, a Chinese AI Chatbot, Found to Frequently Promote State-Sponsored Disinformation

A recent audit by NewsGuard, a journalism and technology company that rates the credibility of news and information websites, has revealed a concerning trend in the output of DeepSeek, a Chinese-developed AI chatbot. The audit found that DeepSeek frequently advances Chinese government narratives and disseminates state-sponsored disinformation, echoing false claims originating from China, Russia, and Iran. This finding raises serious questions about the potential for AI chatbots to become tools of propaganda and misinformation, particularly when they are developed and deployed in countries with restrictive information environments.

NewsGuard’s investigation focused on DeepSeek’s responses to a range of prompts related to known instances of disinformation. These prompts were categorized into three styles: "innocent," "leading," and "malign actor," mirroring the ways in which users typically interact with AI chatbots. The "innocent" prompts were neutral and straightforward, seeking basic information on a topic. The "leading" prompts contained subtle biases or suggestive language, while the "malign actor" prompts were explicitly designed to elicit false or misleading information. Across all three categories, DeepSeek exhibited a disturbing propensity to repeat and reinforce false narratives, particularly those aligned with the Chinese government’s positions.

The audit used a sample of 15 "Misinformation Fingerprints" from NewsGuard’s proprietary database. These fingerprints represent commonly circulated false narratives and their corresponding debunks, covering Chinese, Russian, and Iranian disinformation campaigns. DeepSeek’s responses were analyzed for the presence of false claims and misleading statements. The results showed that the chatbot advanced Beijing’s positions approximately 60% of the time, often echoing propaganda narratives about the treatment of the Uyghurs, the origins of COVID-19, and the war in Ukraine.

One particularly troubling aspect of DeepSeek’s performance was its tendency to generate false narratives even in response to "innocent" prompts. This suggests that the chatbot’s underlying training data may be skewed towards pro-Beijing viewpoints, potentially reflecting a deliberate effort to embed these narratives into the AI’s knowledge base. This raises concerns about the transparency and objectivity of AI development within China, where access to information is heavily controlled and dissenting voices are often suppressed.

The implications of DeepSeek’s susceptibility to state-sponsored disinformation are significant. As AI chatbots become increasingly integrated into everyday life, their potential to shape public opinion and influence decision-making grows. If these chatbots are programmed to promote specific political agendas or disseminate false information, they could undermine trust in information sources and erode public discourse. This is particularly concerning in the context of authoritarian regimes, where control of information is a key tool for maintaining power.

The NewsGuard audit serves as a wake-up call for the international community. It highlights the urgent need for greater scrutiny of AI development and deployment, particularly in countries with poor records on freedom of expression and access to information. Developing mechanisms for ensuring the transparency and accountability of AI systems is crucial to mitigating the risks of misinformation and manipulation. International collaboration and shared best practices will be essential in navigating this complex landscape and ensuring that AI technology is used responsibly and ethically. Furthermore, fostering media literacy and critical thinking skills among the public is vital in empowering individuals to discern credible information from AI-generated propaganda. The future of informed decision-making hinges on our ability to address these challenges and develop safeguards against the misuse of increasingly powerful AI technologies.

Copyright © 2025 Web Stat. All Rights Reserved.