AI-Generated Voices Emerge as Tools of Disinformation in Russian Influence Campaign
The rapid advancement of generative artificial intelligence (AI) has opened a Pandora's box of potential misuses, from academic dishonesty to artistic plagiarism. Now a more alarming application has emerged: state-sponsored disinformation. A recent report by the threat intelligence firm Recorded Future details a Russian influence operation, dubbed "Operation Undercut," that leveraged AI-generated voiceovers to spread misleading narratives aimed at eroding European support for Ukraine. The campaign highlights the growing threat of AI-powered disinformation and the urgent need for countermeasures.
Recorded Future's investigation uncovered a network of fake news videos targeting European audiences with fabricated stories that portrayed Ukrainian politicians as corrupt and questioned the efficacy of military aid to Ukraine. The videos, disseminated across multiple social media platforms, featured AI-generated voiceovers in several European languages, including English, French, German, and Polish. The fluent, accent-free delivery lent the fabricated narratives a veneer of authenticity, making them more credible to target audiences.
The report strongly suggests the involvement of commercial AI voice generation products, in particular technology developed by ElevenLabs, a prominent AI startup. Researchers ran the voiceovers through ElevenLabs' own AI Speech Classifier, a tool designed to detect audio generated with the company's software, and found a match. The finding implicates ElevenLabs' technology in the spread of manipulative content and raises serious questions about the company's responsibility for mitigating misuse of its products. ElevenLabs has yet to respond to requests for comment.
Ironically, the campaign's orchestrators exposed their reliance on AI themselves by releasing some videos with human voiceovers that carried noticeable Russian accents. The stark contrast with the polished, accent-free AI-generated audio reinforced Recorded Future's conclusion that AI was used. The ability to rapidly generate voiceovers in multiple languages, a capability ElevenLabs prominently advertises, proved instrumental in pushing the disinformation to a wider European audience.
Recorded Future attributes Operation Undercut to the Social Design Agency, a Russian entity sanctioned by the U.S. government for operating a network of websites impersonating legitimate European news organizations. This agency, acting on behalf of the Russian government, strategically amplified the misleading content through a network of bogus social media accounts, creating an echo chamber of disinformation. Despite these sophisticated tactics, the campaign’s overall impact on European public opinion appears to have been minimal, according to Recorded Future.
This is not the first time ElevenLabs' technology has been linked to potentially malicious activity. Earlier this year, its AI was reportedly used to create a robocall impersonating President Joe Biden that urged voters to abstain from a primary election. That episode prompted ElevenLabs to implement new safety features, such as automatically blocking the voices of prominent political figures. Operation Undercut, however, demonstrates that more robust measures are needed to prevent misuse of the technology.
ElevenLabs officially prohibits the "unauthorized, harmful, or deceptive impersonation" of individuals and says it enforces these policies with a combination of automated and human moderation. The repeated instances of misuse nonetheless raise doubts about how effective those measures are. As the company continues its rapid growth, attracting significant investment and a reported multi-billion-dollar valuation, the pressure to close these gaps intensifies. The responsibility lies with ElevenLabs and other AI developers to build safeguards that keep their powerful technologies from being weaponized for disinformation and manipulation. The future of generative AI hinges on striking a balance between innovation and responsible development.
The increasing sophistication of AI-generated content makes disinformation harder to detect and combat. The ease with which realistic fake videos can be created and spread underscores the urgent need for advanced detection tools and public awareness campaigns, and governments, social media platforms, and technology companies will have to collaborate to develop effective countermeasures. Operation Undercut is a stark warning of what unchecked AI misuse can do to democratic processes and international stability. The fight against AI-powered disinformation demands a concerted effort from all stakeholders to protect the integrity of information in the digital age.