Bluesky Grapples with Pro-Russian Disinformation Campaign Mirroring Tactics Used on X

The nascent social media platform Bluesky, a haven for many disillusioned former users of Elon Musk’s X (formerly Twitter), is facing its first major disinformation challenge. A pro-Russian campaign, reminiscent of the “Matryoshka” or “Russian doll” operation that previously plagued X, is employing sophisticated tactics, including AI-generated deepfakes, to disseminate pro-Kremlin narratives and criticize Western support for Ukraine. This iteration, however, marks a disturbing evolution: it poses as academic institutions to lend an air of authority to its deceptive content.

The campaign, identified by the disinformation-tracking collective @antibot4navalny and analyzed by AFP, involves a pattern of posts questioning media narratives and promoting a pro-Russian viewpoint. Unlike the earlier Matryoshka campaign, which primarily focused on republishing content from X, this operation creates content directly on Bluesky, indicating a deliberate attempt to exploit the platform’s growing user base. It also employs deepfake technology, fabricating videos of purported academics from institutions such as Aix-Marseille University in France and the University of Sunderland in England to spread disinformation about the war in Ukraine and to criticize French President Emmanuel Macron.

These deepfake videos, often featuring a fabricated expert speaking to the camera with university logos prominently displayed, are followed by manipulated images and stock footage to bolster the false narrative. The audio is often altered, twisting the original meaning to align with pro-Russian talking points. For example, a genuine video of a law professor discussing his department’s annual review was manipulated to suggest that France’s economic difficulties stemmed from sanctions against Russia. The campaign’s ability to “industrialize” the production of these deepfakes, as noted by Valentin Chatelet, a research associate at the Atlantic Council’s Digital Forensic Research Lab (DFRLab), demonstrates a concerning advancement in disinformation tactics.

The use of academic impersonation adds a new layer of sophistication to the campaign, exploiting the perceived authority of universities to enhance the credibility of the disinformation. This tactic, according to Peter Benzoni of the Alliance for Securing Democracy, represents an adaptation to Bluesky’s user base, which likely includes a higher proportion of academics and intellectuals compared to the general population. By masquerading as universities, the campaign aims to bypass critical scrutiny and amplify its message within a community that values intellectual rigor.

Bluesky, while acknowledging the issue and claiming to be actively combating disinformation, faces a significant challenge in addressing this evolving threat. The platform reportedly processed more than 358,000 reports of problematic content in 2023. Experts argue, however, that Bluesky’s approach remains largely reactive, relying on user reports and open-source investigations to identify and remove disinformation; its ability to proactively detect and counter sophisticated campaigns like this one, built on deepfakes and academic impersonation, remains to be proven. Eliot Higgins, founder of the open-source investigation group Bellingcat, has pointed to the telltale characteristics of pro-Russian “bots,” the fake profiles used to artificially boost the visibility of these posts.

This incident underscores the broader challenge confronting social media platforms in combating sophisticated disinformation campaigns. As technology advances, the production of deepfakes and other forms of manipulated media becomes increasingly accessible, making it easier for malicious actors to spread false narratives and sow discord. Platforms like Bluesky must develop more robust and proactive strategies to detect and counter such campaigns, including investing in advanced detection technologies and fostering collaboration with researchers and fact-checking organizations. The battle against disinformation requires a concerted and ongoing effort to safeguard the integrity of online information and protect users from manipulation.
