A new and unsettling tactic has arrived in Massachusetts politics. Republican gubernatorial candidate Brian Shortsleeve posted a fictitious radio advertisement on Instagram featuring an AI-generated voice that closely mimics incumbent Governor Maura Healey. His caption framed the ad as "what one of her radio ads might sound like – if she was honest." The result was a digital doppelgänger speaking words Healey never uttered, but which Shortsleeve wished she would. Released on the same day Healey launched her re-election bid, the fabricated ad painted a bleak picture of the state under her leadership: the AI-generated Healey "proudly" boasted of high electricity rates, of Massachusetts ranking as the second most expensive state for retirement, of a loss of more than 5,000 employers, and of 12,000 private-sector jobs flowing to Republican-led states. It went further, claiming Massachusetts was "50th out of 50 in job growth" and among the bottom five states for "one-way U-Haul customer exits," a cynical nod to residents leaving the state. Shortsleeve's campaign readily admitted to using AI, pushing political advertising into largely unregulated technological territory and sparking a debate that extends well beyond Massachusetts.
The reaction to Shortsleeve's AI-generated ad was swift and sharp. Rather than address the fabricated ad directly, the Healey campaign referred inquiries to the Massachusetts Democratic Party, signaling that it viewed Shortsleeve's move as a distraction from more substantive debate. Steve Kerrigan, the state party's chair, dismissed the candidate as "SlowZone Shortsleeve" and accused him of fabricating "alternative realities" to manufacture the appearance of a viable campaign, arguing that Shortsleeve was misleading voters and would ultimately be a "rubber stamp on President Trump's harmful agenda." Shortsleeve's spokesperson, Holly Robichaud, stood firm, defending the ad as a "parody" while insisting on the truth of its underlying message: despite its artificial voice, she said, the ad accurately described Healey's "failed record of killing jobs and making Massachusetts the most expensive state in the nation." The exchange captures the early stage of AI's entry into politics, with one side dismissing the technology as a deceptive tactic and the other framing it as a novel, if audacious, way to convey inconvenient truths. The core dispute is less about whether the ad was real than about whether such technologically advanced forms of communication should be allowed to shape public perception and political narratives.
The Massachusetts incident is not isolated; it is a prominent example of a nationwide trend in which generative AI is rapidly entering the electoral process, from political advertisements that feel unsettlingly real to AI-drafted speeches that mimic human eloquence. The National Conference of State Legislatures (NCSL) has been tracking this phenomenon and identifies deepfake technology, the very tool Shortsleeve employed, as central to it. Deepfakes use machine-learning models to manipulate audio or video, producing convincing but entirely false depictions of people doing or saying things they never did: swapping faces, lip-syncing fabricated words, and staging scenarios that are difficult to distinguish from reality. Because accessible tools now make such fakes easy to create, lawmakers face growing pressure to pass legislation curbing their misuse. The technology poses a direct challenge to the integrity of democratic elections, leaving voters with the daunting task of separating truth from highly sophisticated artificial falsehoods.
Much of the country has already responded. Twenty-six states have enacted laws specifically regulating the use of deepfakes in political contexts. The NCSL groups these legislative efforts into two main approaches: outright prohibitions and mandatory disclosures. Minnesota and Texas, for instance, ban the publication of political deepfakes within a specified number of days before an election, aiming to prevent last-minute attacks that voters cannot verify in time. The path to regulation is legally fraught, however, as California's experience demonstrates. One of its pioneering deepfake laws was struck down in August 2025 on First Amendment grounds: the court found that its provisions, which prohibited any speech "reasonably likely" to harm a candidate's electoral prospects and imposed burdensome disclaimer requirements even on satire and parody deepfakes, infringed on free speech. The California setback highlights the delicate balance lawmakers must strike between protecting electoral integrity and upholding constitutional rights, a tension likely to define the future of AI regulation in politics.
Back in Massachusetts, the legal framework governing campaign finance and political advertising lags behind the technology. According to the Office of Campaign and Political Finance, the state's current laws say nothing about AI or deepfakes and touch only lightly on the requirement to disclose ad expenditures, and the election laws overseen by the Elections Division in the secretary of state's office make no specific mention of video, audio, or web advertisements. This regulatory void leaves AI-generated content effectively unregulated, raising serious concerns about misinformation and manipulation. Recognizing the gap, State Senator Michael Moore filed a bill last year (S 44) specifically aimed at "protecting against election misinformation." In October, the Senate members of the Joint Committee on Advanced Information Technology, the Internet and Cybersecurity gave a redraft of Moore's bill (S 2631) unanimous 6-0 approval, with Republican Senator Peter Durant joining five Democrats, a bipartisan signal that lawmakers recognize the problem and intend to address it. The bill is now before the Senate Ways and Means Committee, where it holds the promise of bringing Massachusetts election law into the 21st century.
The proposed legislation defines a "materially deceptive election-related communication" as media containing "verifiably false information" about core election details: dates, times, places, voting procedures, deadlines, election certifications, or endorsements of candidates or ballot initiatives by political entities. This precise language targets deliberate attempts to mislead voters about the mechanics of the democratic process. Crucially, the bill carves out "materially deceptive election-related communications that constitute satire or parody," preserving room for comedic and satirical political expression, a lesson perhaps drawn from the challenges California's earlier legislation faced. The bill also allows anyone whose voice or likeness is digitally manipulated in a deceptive election communication to seek "injunctive or other equitable relief prohibiting the distribution of such communication," giving individuals a way to fight the unauthorized and misleading use of their identity. As AI-generated political videos and photos continue to surface on social media, the decision before Massachusetts lawmakers is not simply whether to update existing statutes but how to regulate a rapidly evolving technology in a way that safeguards the integrity of the electoral system and keeps truth the bedrock of their democracy.

