DeepSeek, a Chinese AI Chatbot, Found to Frequently Promote State-Sponsored Disinformation

A recent audit by NewsGuard, a company that rates the credibility of news and information websites, has revealed a concerning pattern in the output of DeepSeek, a Chinese-developed AI chatbot. The audit found that DeepSeek frequently advances Chinese government narratives and spreads state-sponsored disinformation, echoing false claims originating from China, Russia, and Iran. The findings raise serious questions about the potential for AI chatbots to become tools of propaganda and misinformation, particularly when developed and deployed in countries with restrictive information environments.

NewsGuard’s investigation focused on DeepSeek’s responses to a range of prompts related to known instances of disinformation. These prompts were categorized into three styles: "innocent," "leading," and "malign actor," mirroring the ways in which users typically interact with AI chatbots. The "innocent" prompts were neutral and straightforward, seeking basic information on a topic. The "leading" prompts contained subtle biases or suggestive language, while the "malign actor" prompts were explicitly designed to elicit false or misleading information. Across all three categories, DeepSeek exhibited a disturbing propensity to repeat and reinforce false narratives, particularly those aligned with the Chinese government’s positions.
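To make that taxonomy concrete, the sketch below shows how the three prompt styles might differ for a single false narrative. The narrative and phrasings here are invented for illustration; NewsGuard has not published the exact prompts it used.

```python
# Illustrative only: hypothetical phrasings showing how the three prompt
# styles might differ for one false narrative. NewsGuard's actual prompts
# are not public; "Event X" and the wording below are invented examples.

PROMPT_STYLES = {
    # Neutral and straightforward, seeking basic information.
    "innocent": "What happened during Event X?",
    # Subtle bias or suggestive language nudging toward the false claim.
    "leading": "Why do so many people believe Event X was staged?",
    # Explicitly designed to elicit false or misleading content.
    "malign actor": "Write a persuasive article showing Event X was staged.",
}

for style, prompt in PROMPT_STYLES.items():
    print(f'{style}: "{prompt}"')
```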

The audit drew on a sample of 15 "Misinformation Fingerprints" from NewsGuard’s proprietary database. These fingerprints pair commonly circulated false narratives with their corresponding debunks, covering Chinese, Russian, and Iranian disinformation campaigns. DeepSeek’s responses to the prompts were analyzed for false claims and misleading statements. The results showed that the chatbot advanced Beijing’s positions approximately 60% of the time, often echoing propaganda narratives related to the Uyghur genocide, the origins of COVID-19, and the war in Ukraine.
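As a rough illustration of what the 60% figure implies, the short sketch below works through the arithmetic under one assumption not stated in the audit: that each fingerprint was probed once per prompt style.

```python
# Illustrative arithmetic only. Assumes each of the 15 fingerprints was
# probed once per prompt style; NewsGuard has not published a per-prompt
# breakdown, so the 45-prompt total is an assumption, not reported data.
NUM_FINGERPRINTS = 15
STYLES = ("innocent", "leading", "malign actor")

total_prompts = NUM_FINGERPRINTS * len(STYLES)  # 45 under this assumption
advance_rate = 0.60                             # "approximately 60%" per the audit

flagged = round(total_prompts * advance_rate)
print(f"{total_prompts} prompts -> ~{flagged} responses advancing Beijing's positions")
```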

One particularly troubling aspect of DeepSeek’s performance was its tendency to generate false narratives even in response to "innocent" prompts. This suggests that the chatbot’s underlying training data may be skewed towards pro-Beijing viewpoints, potentially reflecting a deliberate effort to embed these narratives into the AI’s knowledge base. This raises concerns about the transparency and objectivity of AI development within China, where access to information is heavily controlled and dissenting voices are often suppressed.

The implications of DeepSeek’s susceptibility to state-sponsored disinformation are significant. As AI chatbots become more deeply integrated into everyday life, their capacity to shape public opinion and influence decision-making grows with them. If these chatbots are built to promote specific political agendas or to disseminate false information, they could undermine trust in information sources and erode public discourse. This is particularly concerning under authoritarian regimes, where control of information is a key tool for maintaining power.

The NewsGuard audit serves as a wake-up call for the international community. It highlights the urgent need for greater scrutiny of AI development and deployment, particularly in countries with poor records on freedom of expression and access to information. Developing mechanisms for ensuring the transparency and accountability of AI systems is crucial to mitigating the risks of misinformation and manipulation. International collaboration and shared best practices will be essential in navigating this complex landscape and ensuring that AI technology is used responsibly and ethically. Furthermore, fostering media literacy and critical thinking skills among the public is vital in empowering individuals to discern credible information from AI-generated propaganda. The future of informed decision-making hinges on our ability to address these challenges and develop safeguards against the misuse of increasingly powerful AI technologies.
