In 2024, nearly 4 billion people in over 40 countries were expected to vote in elections, raising alarms about the potential impact of artificial intelligence (AI) on democratic processes, particularly in the realm of online disinformation. The early months of the year saw a surge of commentary speculating on the disruptive influence of AI technologies, especially deepfake videos: realistic, computer-generated images and sounds that could mislead voters. This foreboding atmosphere is not unique; it reflects the broader “hype cycle” that accompanies new technologies as they enter public consciousness. Initial predictions tend towards alarmism or excessive optimism, but history shows that the true consequences of such technologies unfold more gradually, demanding focused and nuanced engagement with their actual implications.
In the lead-up to elections in the UK, a narrative emerged that downplayed the risks of AI-enabled disinformation. No deepfake crisis materialized, fostering a misperception that electoral integrity had survived the campaign unscathed. Yet serious instances of AI-driven disinformation did surface. Notably, during the campaign’s final weekend, Australian investigative journalists exposed a coordinated foreign operation using Facebook to spread divisive and often racist content targeting UK voters, including illegal, unmarked paid advertisements featuring fake, AI-generated imagery intended to incite fear around immigration. These revelations, arriving just before polling day and close on the heels of government announcements of investigations, underscored the pressing need for awareness and vigilance regarding online electoral threats.
Reports from Germany described similar tactics, with social media platforms such as X (formerly Twitter) exploited to disseminate racist and anti-immigrant narratives during the UK elections. An environmental campaign group identified automated accounts spreading disinformation linking climate change and migration, accruing millions of views. Notably, though, the much-feared deepfake videos were relatively rare within these operations; AI-generated text, still images, and audio proved easier to deploy and more impactful, contributing to a systematic effort to manipulate public perception.
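How such automated accounts are identified is rarely disclosed, but researchers often start from simple behavioral heuristics: accounts that post at implausible rates and recycle near-identical text. The sketch below is purely illustrative; the `Account` fields, thresholds, and example data are assumptions, not the campaign group’s actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    posts_per_day: float  # average over the observation window
    texts: list[str]      # sampled post texts

def duplicate_ratio(texts: list[str]) -> float:
    """Fraction of posts that repeat an earlier post verbatim
    (after normalizing case and whitespace)."""
    seen: set[str] = set()
    dupes = 0
    for t in texts:
        key = " ".join(t.lower().split())
        if key in seen:
            dupes += 1
        seen.add(key)
    return dupes / len(texts) if texts else 0.0

def looks_automated(acct: Account,
                    rate_threshold: float = 50.0,
                    dup_threshold: float = 0.5) -> bool:
    """Flag accounts that post unusually often AND mostly recycle
    identical text. Thresholds are illustrative guesses, not validated values."""
    return (acct.posts_per_day > rate_threshold
            and duplicate_ratio(acct.texts) > dup_threshold)

# A coordinated account reposting one slogan many times a day vs. a typical user.
bot = Account("acct_a", 180.0, ["Migration is destroying the climate!"] * 40)
human = Account("acct_b", 3.0, ["Lovely weather today", "Anyone watching the match?"])
print(looks_automated(bot), looks_automated(human))  # True False
```

Real coordinated-behavior detection adds network signals such as shared account-creation dates and amplification graphs, but even crude heuristics like these can surface the most blatant operations.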
Political campaigns on both sides of the Atlantic are increasingly encountering sophisticated AI manipulation. Synthetic audio mimicking political figures such as Joe Biden and Sadiq Khan disrupted local elections. Yet much of this AI-generated content lacked realism and often bore a distinctly digital aesthetic. This lack of polish can be attributed, in part, to leading generative AI platforms which, under public pressure, sought to mitigate the risks of misrepresentation by tightening how their systems respond to election-related prompts. Recent initiatives, such as the AI Elections Accord signed by major tech companies, signal proactive, if imperfect, steps to combat the malign influence of digital misinformation.
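At their simplest, prompt-level guardrails of the kind described here are refusal filters applied before a request ever reaches the model. The toy filter below is purely illustrative: real platform safety systems rely on trained classifiers and policy models, and the blocked names are merely examples drawn from this article.

```python
# Toy pre-generation guardrail: refuse image prompts naming political figures.
# Purely illustrative; actual platform safety stacks are far more sophisticated.
BLOCKED_FIGURES = {"joe biden", "sadiq khan"}

def screen_prompt(prompt: str) -> str:
    lowered = prompt.lower()
    if any(name in lowered for name in BLOCKED_FIGURES):
        return "REFUSED: prompts depicting political figures are not permitted."
    return "OK: prompt forwarded to the image model."

print(screen_prompt("Photorealistic image of Sadiq Khan at a protest"))  # refused
print(screen_prompt("Photorealistic image of a red London bus"))         # allowed
```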
Emerging research and community responses signaled progress towards a better-informed digital landscape amid these challenges. Governments, including the UK’s, issued guidance on the use of generative AI during elections, and a Joint Election Security and Preparedness unit was formed in early 2024 to enhance oversight. Media endeavors, including a Channel 4 documentary, raised public awareness of deepfake technologies. This proactive approach reflects growing sophistication in tackling disinformation threats and suggests the electorate may be less vulnerable to manipulation than in years past, thanks to heightened awareness and ongoing regulatory efforts.
AI technology is also being harnessed to combat misinformation. While online microtargeting remains underused in election campaigns, techniques such as chatbot scripts have emerged to sharpen canvassing strategies, and, when deployed openly, they can support transparency and accountability among political actors. AI tools are also assisting human fact-checkers, enabling more timely responses to disinformation claims. Universities and research institutions have a pivotal role to play in this ecosystem: they must move past sensationalist narratives about AI to enable effective regulatory responses. This work is critical as a series of consequential elections unfolds through 2024, where the interplay of technology and democracy continues to evolve.
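One common building block in AI-assisted fact-checking is claim matching: retrieving previously fact-checked claims that resemble a newly viral post, so human checkers can reuse existing work. A minimal sketch follows, assuming the open-source sentence-transformers library; the claims, example post, and similarity threshold are all invented for illustration.

```python
# Claim matching: find the closest previously fact-checked claim to a new post.
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

checked_claims = [
    "Immigration figures for 2023 were triple the official estimate.",
    "Postal votes are counted before election day.",
    "The candidate was photographed at a banned rally.",
]
claim_embeddings = model.encode(checked_claims, convert_to_tensor=True)

def match_claim(post: str, threshold: float = 0.6) -> str | None:
    """Return the most similar fact-checked claim, or None if no claim
    clears the (illustrative) cosine-similarity threshold."""
    post_embedding = model.encode(post, convert_to_tensor=True)
    scores = util.cos_sim(post_embedding, claim_embeddings)[0]
    best = int(scores.argmax())
    return checked_claims[best] if float(scores[best]) >= threshold else None

print(match_claim("Leaked data: 2023 immigration was three times higher than reported"))
```

Retrieval of this kind does not replace human judgment; it simply shortens the gap between a claim going viral and a checker locating the relevant prior verdict.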