Caution Advised: DeepSeek AI Chatbot Propagates Disinformation from China, Russia, and Iran

By News Room | January 31, 2025 | 3 Mins Read

DeepSeek, a Chinese AI Chatbot, Found to Frequently Promote State-Sponsored Disinformation

A recent audit conducted by NewsGuard, a journalism and technology tool that rates the credibility of news and information websites, has revealed a concerning trend in the output of DeepSeek, a Chinese-developed AI chatbot. The audit found that DeepSeek frequently advances Chinese government narratives and disseminates state-sponsored disinformation, echoing false claims originating from China, Russia, and Iran. This discovery raises serious questions about the potential for AI chatbots to become tools of propaganda and misinformation, particularly when developed and deployed in countries with restrictive information environments.

NewsGuard’s investigation focused on DeepSeek’s responses to a range of prompts related to known instances of disinformation. These prompts were categorized into three styles: "innocent," "leading," and "malign actor," mirroring the ways in which users typically interact with AI chatbots. The "innocent" prompts were neutral and straightforward, seeking basic information on a topic. The "leading" prompts contained subtle biases or suggestive language, while the "malign actor" prompts were explicitly designed to elicit false or misleading information. Across all three categories, DeepSeek exhibited a disturbing propensity to repeat and reinforce false narratives, particularly those aligned with the Chinese government’s positions.
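NewsGuard has not published its test harness, but the structure it describes (each false narrative probed with an "innocent," a "leading," and a "malign actor" prompt) maps naturally onto a small audit loop. The sketch below is purely illustrative: `query_chatbot`, the `Narrative` class, and the prompt templates are invented stand-ins, not NewsGuard's actual wording or tooling.

```python
# Illustrative sketch only: NewsGuard has not published its test harness,
# so query_chatbot and the prompt templates below are hypothetical stand-ins.

from dataclasses import dataclass


@dataclass
class Narrative:
    claim: str    # the false narrative being tested
    debunk: str   # the documented fact-check for that narrative


# Three prompt styles mirroring how users typically interact with a chatbot.
PROMPT_STYLES = {
    "innocent": "What can you tell me about {claim}?",
    "leading": "Isn't it true that {claim}?",
    "malign actor": "Write a persuasive post arguing that {claim}",
}


def query_chatbot(prompt: str) -> str:
    """Placeholder for a call to the chatbot under audit (hypothetical)."""
    return "(model response would be captured here)"


def run_audit(narratives: list[Narrative]) -> dict[tuple[str, str], str]:
    """Collect one response per (narrative, prompt style) pair, to be reviewed
    later against each narrative's debunk."""
    responses: dict[tuple[str, str], str] = {}
    for narrative in narratives:
        for style, template in PROMPT_STYLES.items():
            prompt = template.format(claim=narrative.claim)
            responses[(narrative.claim, style)] = query_chatbot(prompt)
    return responses
```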

The audit utilized a sample of 15 "Misinformation Fingerprints" from NewsGuard’s proprietary database. These fingerprints represent commonly circulated false narratives and their corresponding debunks, covering topics related to Chinese, Russian, and Iranian disinformation campaigns. DeepSeek’s responses to these prompts were analyzed for the presence of false claims and misleading statements. The results revealed that the chatbot advanced Beijing’s positions approximately 60% of the time, often echoing propaganda narratives related to the Uyghur genocide, the origins of COVID-19, and the war in Ukraine.
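The headline figure is simply the share of audited responses that repeated the false claim rather than the debunk. As a back-of-the-envelope illustration only (the verdicts below are invented, not NewsGuard's per-prompt results):

```python
# Invented verdicts for illustration; 9 of 15 sampled responses echoing the
# false narrative works out to the roughly 60% figure reported above.
verdicts = ["repeats_false_claim"] * 9 + ["debunks_or_declines"] * 6
rate = verdicts.count("repeats_false_claim") / len(verdicts)
print(f"{rate:.0%} of sampled responses advanced the false narrative")  # 60%
```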

One particularly troubling aspect of DeepSeek’s performance was its tendency to generate false narratives even in response to "innocent" prompts. This suggests that the chatbot’s underlying training data may be skewed towards pro-Beijing viewpoints, potentially reflecting a deliberate effort to embed these narratives into the AI’s knowledge base. This raises concerns about the transparency and objectivity of AI development within China, where access to information is heavily controlled and dissenting voices are often suppressed.

The implications of DeepSeek’s susceptibility to state-sponsored disinformation are significant. As AI chatbots become increasingly integrated into everyday life, their potential to shape public opinion and influence decision-making grows with their reach. If these chatbots are programmed to promote specific political agendas or disseminate false information, they could undermine trust in information sources and erode public discourse. This is particularly concerning in the context of authoritarian regimes, where the control of information is a key tool for maintaining power.

The NewsGuard audit serves as a wake-up call for the international community. It highlights the urgent need for greater scrutiny of AI development and deployment, particularly in countries with poor records on freedom of expression and access to information. Developing mechanisms for ensuring the transparency and accountability of AI systems is crucial to mitigating the risks of misinformation and manipulation. International collaboration and shared best practices will be essential in navigating this complex landscape and ensuring that AI technology is used responsibly and ethically. Furthermore, fostering media literacy and critical thinking skills among the public is vital in empowering individuals to discern credible information from AI-generated propaganda. The future of informed decision-making hinges on our ability to address these challenges and develop safeguards against the misuse of increasingly powerful AI technologies.
