For years, bright minds in tech and politics have been sounding the alarm. They warned us that powerful artificial intelligence tools would soon become so cheap and accessible that anyone could churn out fake images, videos, and audio. The fear was that these fakes would be so convincing they could easily trick voters and even tip the scales in an election. Back then, these synthetic creations were often clunky, not quite believable, and expensive to produce. It felt like a distant threat, especially when old-fashioned misinformation was already spreading like wildfire across social media with barely any effort or cost. AI-powered deepfakes always seemed to be just a year or two away, a problem for “future us” to deal with.
Well, “future us” is now. That “year or two away” has arrived with a jolt. Today’s sophisticated generative AI tools can whip up eerily accurate voice clones, incredibly realistic images, and videos in mere seconds, and at a fraction of the cost. Imagine these highly persuasive fakes being injected into the bloodstream of social media, where powerful algorithms can then catapult them far and wide, targeting highly specific groups of people. This isn’t just about bending the truth; it’s about weaponizing falsehoods to manipulate public opinion on an unprecedented scale. Suddenly, the dirty tricks of political campaigns aren’t just getting dirtier; they’re morphing into something far more insidious, threatening to fundamentally warp how we perceive reality and make crucial decisions as citizens.
The implications for the upcoming 2024 elections are staggering. Generative AI isn’t just a tool for quickly drafting campaign emails, texts, or videos. It’s a potential engine for large-scale deception: misleading voters, impersonating candidates with uncanny accuracy, and ultimately undermining the very foundation of our electoral process. All of this could unfold at a speed and scale we’ve never witnessed before. A.J. Nash, a cybersecurity expert at ZeroFox, put it starkly: “We’re not prepared for this.” He pointed specifically to the rapid advances in AI’s audio and video capabilities, warning that once these can be deployed broadly across social platforms, the impact will be enormous.
AI experts paint a chilling picture of what this could look like. Think about automated robocalls, featuring a candidate’s cloned voice, instructing people to vote on the wrong day. Or audio recordings suddenly surfacing, supposedly catching a candidate confessing to a crime or spouting hateful views they never actually uttered. Then there’s video footage showing a public figure giving a speech or interview they never gave. And don’t forget fake local news reports, entirely legitimate in appearance, falsely announcing that a candidate has dropped out of the race. Oren Etzioni, founding CEO of the Allen Institute for AI, offered a pointed example: “What if Elon Musk personally calls you and tells you to vote for a certain candidate? A lot of people would listen. But it’s not him.” It highlights how dangerously blurred the line between real and synthetic has become.
This isn’t just theoretical; it’s already happening. Former President Donald Trump, a 2024 candidate, has already shared AI-generated content with his social media followers. A recent manipulated video of CNN host Anderson Cooper, which distorted Cooper’s reaction to a town hall with Trump, was created using an AI voice-cloning tool and shared by Trump on Truth Social. We also saw a glimpse of this digitally manipulated future in a dystopian campaign ad released last month by the Republican National Committee. Following President Biden’s re-election announcement, the ad began with a slightly warped image of Biden and the text: “What if the weakest president we’ve ever had was re-elected?” It then cascaded through a series of AI-generated images: Taiwan under attack, boarded-up storefronts in the U.S. implying economic collapse, and even military vehicles patrolling streets amidst scenes of panic. The RNC acknowledged its use of AI, but as cybersecurity expert Petko Stoyanov noted, others, especially nefarious political campaigns and foreign adversaries, won’t be so transparent. He predicted that groups aiming to meddle with U.S. democracy will leverage AI to chip away at trust, making it increasingly difficult to discern truth from fabrication.
The threat extends internationally, too. What happens, Stoyanov asks, if a foreign entity, whether a cybercriminal group or a hostile nation-state, uses AI to impersonate someone? What are the consequences, and do we have any way to fight back? He foresees a significant surge in misinformation from international sources. We’ve already seen AI-generated political disinformation go viral ahead of 2024, from a doctored video of Biden appearing to attack transgender people to AI-generated images of children supposedly learning satanism in libraries. Even images appearing to show Trump’s mugshot, which never happened, fooled some social media users.

Lawmakers are beginning to respond. Representative Yvette Clarke has introduced legislation that would require AI-generated campaign ads to be labeled and synthetic images to carry a watermark. Her greatest fear is that generative AI could incite violence and turn Americans against each other before the 2024 election. As she told The Associated Press, “People are busy with their lives and they don’t have the time to check every piece of information. AI being weaponized, in a political season, it could be extremely disruptive.” While some see AI as a helpful “copilot” for campaign tasks like drafting fundraising emails, the overwhelming consensus is that its potential for deception demands immediate attention and thoughtful regulation. We need guardrails, and fast, to protect the very fabric of our democratic process from this powerful, double-edged sword.

