AI’s Muted Impact on the 2024 US Election: Old Tricks Prevailed
The 2024 US presidential election was widely anticipated to be the first major electoral contest significantly shaped by readily accessible artificial intelligence (AI) tools. Concerns abounded regarding the potential for AI-generated deepfakes to spread misinformation, manipulate voters, and disrupt democratic processes. Initial fears were fueled by incidents like the AI-generated robocall mimicking President Biden’s voice, which prompted the Federal Communications Commission to declare such calls illegal. States scrambled to enact legislation regulating AI use in campaigns, to require disclaimers on synthetic media, and to provide resources helping voters identify AI-generated content. Experts painted a grim picture of potential damage, both domestically and internationally, with AI facilitating the creation and dissemination of sophisticated disinformation campaigns.
Despite these anxieties, the predicted deluge of AI-driven election interference never truly materialized. While misinformation certainly played a role in the election, the tactics employed were largely familiar: text-based social media claims, manipulated videos, and out-of-context images. The feared wave of sophisticated AI-generated deepfakes was conspicuously absent. Experts concluded that the 2024 election was not, in fact, "the AI election." Existing misinformation narratives, such as false claims about voter fraud and election integrity, were amplified through traditional means, demonstrating that AI was not necessary to deceive voters.
Several factors contributed to AI’s muted role. Proactive measures taken by technology platforms, policymakers, and researchers played a crucial part. Social media companies like Meta and TikTok implemented policies requiring disclosure of AI use in political ads and began automatically labeling AI-generated content. OpenAI, the creator of ChatGPT and DALL-E, banned the use of its tools for political campaigns. These safeguards limited the potential for misuse and increased public awareness of AI-generated content. Furthermore, concerted efforts by election officials and government agencies to educate the public about AI manipulation may have inoculated voters against the potential impact of deepfakes.
Another significant factor was the efficacy of traditional disinformation tactics. Prominent figures with large followings could effectively spread false narratives without resorting to AI-generated media. Donald Trump, for example, repeatedly made unfounded claims about illegal immigrants voting, a narrative that gained traction despite being debunked by fact-checkers. This demonstrated that established methods of disseminating misinformation remained potent and readily available. The prevalence of "cheap fakes," authentic content deceptively edited without AI, further underscored the continued effectiveness of simpler manipulation techniques.
Additionally, some politicians strategically distanced themselves from AI, even using it as a scapegoat. Trump, for instance, falsely accused opponents of using AI to create damaging content, deflecting attention from legitimate criticisms. The continued effectiveness of traditional misinformation methods, coupled with such attempts to discredit AI, likely reduced both the perceived necessity and the perceived payoff of deploying AI-generated content for political manipulation.
While the impact of AI was less dramatic than anticipated, it wasn’t entirely absent. AI-generated content primarily reinforced existing narratives rather than introducing entirely new disinformation campaigns. For example, following false claims about Haitian immigrants eating pets, AI-generated images and memes related to animal abuse proliferated online, amplifying existing prejudice without creating a novel falsehood. Experts noted that AI-generated political deepfakes largely served satirical, reputational, or entertainment purposes rather than driving widespread misinformation campaigns. When employed in political attacks, deepfakes often echoed established political rhetoric, exaggerating existing criticisms of candidates.
Despite the relatively limited role of AI in the 2024 election, experts caution against complacency. The technology is constantly evolving, and future elections may see more sophisticated and pervasive use of AI-generated misinformation. The efforts of social media companies to detect and label AI-generated content, while important, are not foolproof. Ongoing research and development of deepfake detection technologies remain crucial. The 2024 election served as a valuable learning experience, highlighting the need for continued vigilance and proactive measures to mitigate the potential impact of AI on democratic processes in future elections.