As the 2024 election approaches, Americans are voicing heightened concern about how artificial intelligence (AI) might affect the electoral process, particularly its potential to manipulate information and spread misinformation. A recent Pew Research Center poll found that a majority of citizens fear AI could be used to spread false narratives during the campaign. Against this backdrop of anxiety, research into how AI is actually being used suggests that, while the technology can indeed be misused, its current role in the election largely mirrors established electoral practices.
This evolving landscape began to take shape with the spread of generative AI technologies such as ChatGPT, which some voters now consult for information on political candidates and election logistics. Rather than turning to traditional sources like Google, these voters ask AI platforms for quick answers to critical electoral questions. The results are uneven: chatbots, including ChatGPT, have at times given misleading or incomplete answers about voting procedures in key states. These risks underscore the importance of cross-checking AI-generated answers against official sources, since many users rely on these tools without fully understanding their limitations.
Deepfake technology, which allows for the creation of fabricated images, audio, and video, has emerged as another concern in election-related discourse. One prominent example was an AI-generated robocall impersonating President Joe Biden ahead of the New Hampshire primary, which prompted regulatory responses, including new FCC rules on AI-generated robocalls. Yet deepfakes are not used solely for deception; they are also being deployed creatively in political advertising. A deepfake ad from a Louisiana mayoral campaign illustrates how the technology can shape narratives in both honest and deceptive ways. Even so, deepfakes raise significant ethical questions about truth in political messaging and the potential for voter deception.
Amid mounting worries, some experts point to a different way AI could be weaponized: overwhelming election administrators with AI-generated paperwork. Historical patterns suggest that organized groups could use AI to draft mass public records requests or bulk challenges to voter registrations, tying up administrative staff and potentially compromising voter access. As of now, however, there is no conclusive evidence that such tactics are being deployed.
Another longstanding concern, foreign interference in American elections, has resurfaced with renewed urgency as AI tools mature. Since the revelations of Russian meddling in the 2016 election, the risk of external actors using AI to undermine democratic processes has remained significant. Recent Department of Justice actions against Russian-affiliated social media accounts highlight the ongoing threat of AI-powered disinformation campaigns, and reports suggest that countries such as China may be using AI-driven misinformation to discredit U.S. political figures. This historical precedent reinforces the reality that foreign entities can influence U.S. elections, a risk the advent of AI technologies only amplifies.
In response to the challenges posed by AI, various regulatory measures are under consideration, reflecting a recognized need for oversight of the technology's political applications. While federal regulation faces obstacles, states are proactively introducing bans or restrictions on deepfakes in political contexts, and some digital platforms are attempting self-regulation to mitigate potential harm. Notably, generative AI tools such as Google's Gemini have begun limiting their responses to electoral queries, reflecting an awareness of their influence. Growing public concern about AI may itself deter campaigns from resorting to deceptive practices, serving as an organic safeguard of electoral integrity. Nonetheless, the phenomenon labeled "AI panic" also risks eroding public trust in the electoral process, presenting a double-edged sword for a democracy navigating a rapidly shifting technological landscape.