Bluesky Battles Emerging Disinformation Campaign Mirroring Tactics Used on X (Formerly Twitter)
The nascent social media platform Bluesky, a haven for many disillusioned former users of Elon Musk’s X, is grappling with its first major disinformation campaign. This campaign, echoing the pro-Russian "Matryoshka" or "Russian doll" operation that previously flooded X, utilizes sophisticated tactics, including AI-generated deepfakes, to disseminate pro-Kremlin narratives. The campaign, identified by the @antibot4navalny collective, which specializes in tracking influence operations, targets Western media and leverages fabricated content to present a favorable image of Russia, criticize Western support for Ukraine, and frequently denigrate French President Emmanuel Macron. This raises concerns about Bluesky’s vulnerability to coordinated disinformation campaigns and its ability to effectively combat them.
The disinformation campaign on Bluesky follows a pattern familiar from previous operations on other platforms. Dozens of posts have been identified urging media outlets to verify false claims, often imitating genuine media content. A new twist in this campaign, however, is the use of AI to impersonate universities and academics, lending a veneer of authority to the fabricated narratives. The tactic aims to exploit the trust placed in academic institutions and leverage their perceived credibility to spread disinformation among Bluesky’s user base. The use of deepfakes marks a significant escalation in the sophistication of these campaigns, making it increasingly difficult to distinguish genuine content from fabricated material.
The @antibot4navalny collective, in collaboration with AFP, has pinpointed approximately 50 "Matryoshka" posts on Bluesky. While some republish content already circulating on X, others appear to originate on Bluesky, indicating a deliberate effort to test the platform’s vulnerability and response mechanisms. Experts suggest this strategy allows the campaign organizers to gauge the reach and longevity of their disinformation before it is detected and removed. The use of Bluesky as a testing ground highlights the ongoing challenge faced by emerging social media platforms in effectively moderating content and preventing the spread of disinformation.
The campaign’s use of deepfakes to impersonate academics demonstrates a concerning advancement in disinformation tactics. One example involves a manipulated video purporting to show a professor from Aix-Marseille University discussing the negative impact of sanctions against Russia on the French economy. Analysis revealed the video to be a deepfake, with the original video containing no mention of Russia or sanctions. This tactic, coupled with the impersonation of universities, adds a layer of legitimacy to the disinformation, making it more persuasive and potentially more damaging. The "industrialization" of deepfake production suggests a growing capability to generate such content at scale, posing a significant challenge for content moderation efforts.
Another example involves a fabricated video seemingly filmed at the University of Sunderland in England, in which students and teachers purportedly express positive views on Russia. This video, too, was debunked as a manipulation of original footage that made no reference to Russia. The recurring use of academic settings and fabricated testimonials underscores the campaign’s calculated effort to exploit the perceived authority of universities, and highlights the need for robust detection and verification mechanisms, as well as public awareness campaigns that educate users about manipulated content.
Bluesky has removed a significant portion of the identified disinformation posts, signaling a commitment to addressing the issue. The platform encourages users to report problematic content and says it processed more than 358,000 reports in 2023. While these efforts are commendable, experts argue that a more proactive stance is needed to counter sophisticated disinformation campaigns. The rapid evolution of disinformation tactics, particularly AI-generated deepfakes, demands ongoing vigilance and the development of more advanced detection and prevention methods. Collaboration between social media platforms, researchers, and fact-checking organizations will also be crucial in mitigating the impact of future campaigns.