US Disrupts Kremlin-Backed AI-Powered Disinformation Campaign
WASHINGTON – The US Department of Justice announced on Tuesday the disruption of a sophisticated Russian propaganda operation leveraging artificial intelligence to spread disinformation across online platforms, primarily within the United States. This operation, orchestrated from within Russia and financially backed by the Kremlin, aimed to sow discord among Americans and manipulate public opinion on key geopolitical issues, including the ongoing war in Ukraine. This marks the first instance of the US dismantling a Russian bot farm enhanced by generative AI, highlighting the evolving nature of foreign influence operations in the digital age.
The scheme, initiated in 2022, involved the creation of a network of fictitious social media profiles masquerading as genuine American citizens. These fabricated personas were then used to disseminate pro-Kremlin narratives and disinformation, amplifying Russia’s perspective on the war in Ukraine while attempting to undermine US support for the embattled nation. The operation was reportedly overseen by a senior editor at RT, a Russian state-funded media outlet registered with the Justice Department as a foreign agent, and a Russian Federal Security Service (FSB) officer leading a private intelligence organization.
The operation highlights growing concerns about the exploitation of AI technology for malicious purposes, including interference in democratic processes. It comes amid heightened anxieties about the impact of AI on the upcoming 2024 US elections, echoing concerns raised during the 2016 presidential campaign, when Russian interference through social media manipulation became a significant point of contention. The use of AI in this latest campaign represents a marked escalation in Russia’s disinformation tactics, enabling propaganda to be generated and disseminated at a scale and speed previously unseen.
The Justice Department revealed examples of the disinformation spread by the bot farm, including a video attributed to a fictitious Minneapolis resident in which Russian President Vladimir Putin falsely claimed that parts of Ukraine, Poland, and Lithuania were historical gifts from Russia. In another instance, a fake US constituent responded to a federal candidate’s social media posts about the war with a video of Putin justifying Russia’s invasion. These examples showcase the carefully crafted nature of the disinformation, designed to exploit existing political divides and manipulate public perception.
As part of the disruption effort, the Justice Department seized two domain names and searched 968 accounts on X (formerly Twitter), the primary platform targeted by the bot farm. A joint cybersecurity advisory issued by US, Dutch, and Canadian authorities revealed that the software powering the operation, known as Meliorator, had been used to spread disinformation across multiple countries, including Poland, Germany, the Netherlands, Spain, Ukraine, and Israel. While initially focused on X, the software’s capabilities could be extended to other social media platforms, raising concerns about wider dissemination of disinformation.
This incident underscores the ongoing challenge of combating foreign interference in the digital age. AI-powered tools allow adversaries to automate and amplify disinformation campaigns, making them more difficult to detect and counter. The disruption of this operation is a crucial step in addressing this growing threat, but it also highlights the need for continued vigilance and collaboration between governments and tech companies to counter the evolving tactics of foreign actors seeking to undermine democratic institutions and manipulate public opinion.