The Rise of AI in Chinese Social Media and Its Implications
Ever since DeepSeek-R1 was opened to the public, its ability to generate strikingly capable content has set off a whirlwind of discussion across Chinese social media. Trending topics such as "#DeepSeek Comments on Jobs AI Cannot Replace" and "#DeepSeek Recommends China’s Most Livable Cities" have drawn waves of comments and broad discourse, while companies and organizations across society rush to adopt the technology DeepSeek has thrust into the spotlight.

Shenzhen’s Futian District, an early showcase of this momentum, recently announced that it had brought 70 "AI digital employees" built on DeepSeek into government service. Yet as society embraces this new era of technological progress, a worrying pattern is emerging: AI-generated misinformation is surging online. One notable case surfaced on Weibo, the microblogging platform, where a user noticed that Tiger Brokers, a Beijing-based fintech firm, had integrated DeepSeek for financial analysis.

The user asked the system to analyze Alibaba, and DeepSeek produced a chain of reasoning in response. Among other claims, the AI asserted that Alibaba’s e-commerce businesses contributed 55% of its revenue, with the Taobao and Tmall Group alone accounting for 80% of that. But when the user cross-checked the figures against Alibaba’s official financial reports, the discrepancies were plain: the AI had fabricated the data.

DeepSeek-R1, a reasoning-focused model, performs comparably to conventional models on a variety of tasks, yet its approach differs in important ways. Where standard models excel at quickly recalling learned patterns, reasoning models work through explicit step-by-step logical chains and deliver clearer explanations. That strength carries a cost: extended reasoning chains invite "overthinking," which raises the risk of hallucination.
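To make the contrast concrete, here is a minimal sketch of querying both model types. It assumes DeepSeek’s OpenAI-compatible API as publicly documented (the base URL, the model names deepseek-chat and deepseek-reasoner, and the reasoning_content field may all change), and it is an illustration rather than a recommended integration.

```python
# Minimal sketch: direct answer vs. visible reasoning chain.
# Assumes DeepSeek's OpenAI-compatible endpoint as publicly documented.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.deepseek.com")
question = "What share of Alibaba's revenue comes from e-commerce?"

# Standard model (DeepSeek-V3): returns an answer with no visible chain of thought.
chat = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": question}],
)
print(chat.choices[0].message.content)

# Reasoning model (DeepSeek-R1): exposes its step-by-step chain before answering.
reasoned = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": question}],
)
print(reasoned.choices[0].message.reasoning_content)  # the chain of thought
print(reasoned.choices[0].message.content)            # the final answer
```

The visible chain is what makes the explanation clearer, but it is also where a long, confident-sounding derivation can drift into fabricated figures like those the Weibo user encountered.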

A benchmark test, Vectara’s HHEM, puts DeepSeek-R1’s hallucination rate at 14.3%, markedly higher than DeepSeek-V3’s 3.9%. The gap is attributed to R1’s training framework, which optimizes for user satisfaction through a scheme of rewards and penalties and can therefore push the model to fabricate content that caters to a user’s apparent preferences.
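In benchmarks of this kind, models summarize a fixed set of source documents and a trained consistency classifier flags summaries the sources do not support; the hallucination rate is the flagged share. The sketch below shows only that bookkeeping: consistency_score is a hypothetical stand-in for a real evaluator such as Vectara’s HHEM model, not its actual API.

```python
# Sketch of hallucination-rate bookkeeping. `consistency_score` is a
# hypothetical placeholder for a trained factual-consistency model.

def consistency_score(source: str, summary: str) -> float:
    """Return a 0-1 score for how well `summary` is supported by `source`.
    Crude placeholder: a real evaluator runs an entailment model here."""
    return 1.0 if summary.lower() in source.lower() else 0.0

def hallucination_rate(pairs: list[tuple[str, str]], threshold: float = 0.5) -> float:
    """Fraction of summaries whose support score falls below `threshold`."""
    flagged = sum(consistency_score(src, out) < threshold for src, out in pairs)
    return flagged / len(pairs)

# Each pair: (source document shown to the model, summary the model produced).
pairs = [
    ("Alibaba's annual report lists revenue by segment.",
     "E-commerce contributed 55% of revenue."),  # unsupported -> flagged
]
print(f"hallucination rate: {hallucination_rate(pairs):.1%}")
```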

Furthermore, AI systems do not store and retrieve verified facts; they predict plausible sequences of text. Their primary function is not verification but the generation of statistically likely continuations. In creative contexts this lets a model weave historical records seamlessly into invented narrative while staying coherent. The same mechanism, however, distorts facts, creating the conditions for misinformation to spread unchecked.
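A toy model makes the mechanism visible. The sketch below is nothing like DeepSeek’s architecture, just a word-level bigram model for illustration: it learns co-occurrence statistics from a scrap of text and then emits whatever continuation is statistically likely, with no step that checks the output against reality.

```python
import collections
import random

# Toy "training data": the model sees word co-occurrences, never facts.
text = ("the company reported strong revenue growth this quarter "
        "the company reported record profit this quarter "
        "analysts expect strong revenue next quarter")
tokens = text.split()

bigrams = collections.defaultdict(collections.Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, max_len: int = 12) -> str:
    """Emit statistically likely continuations; nothing here verifies anything."""
    out = [start]
    while len(out) < max_len and bigrams[out[-1]]:
        counts = bigrams[out[-1]]
        words = list(counts)
        weights = [counts[w] for w in words]
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
# e.g. "the company reported record profit this quarter the company ..."
# Fluent and plausible, but "record profit" was sampled, not checked.
```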

In addressing this crisis, accountability becomes paramount. AI developers should adopt mitigation measures such as digital watermarks, content creators should clearly label synthetic or AI-generated output, and the platforms that carry this content must adapt their review mechanisms to the sheer speed at which AI produces it. As AI rivals human creativity, the boundary between authentic content and algorithmic fiction will blur, posing both challenges and opportunities for public discernment.
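On the watermarking point, one published family of techniques ("green-list" watermarking, in the style of Kirchenbauer et al.) biases generation toward a pseudo-randomly chosen subset of the vocabulary at each step, so a detector can later test text for that statistical signature. The sketch below shows only the detector side, at the word level; the hashing scheme and parameters are illustrative assumptions, not any vendor’s actual implementation.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # share of vocabulary marked "green" at each step (illustrative)

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by its predecessor."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    """How far the green-token count deviates from unwatermarked expectation."""
    n = len(tokens) - 1  # number of (predecessor, token) pairs
    if n <= 0:
        return 0.0
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

# A watermarking generator nudges sampling toward green tokens during decoding;
# a high z-score over a long passage then flags the text as machine-generated.
print(watermark_z_score("the model wrote this sentence for the benchmark".split()))
```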
