In a world increasingly shaped by digital narratives, the line between truth and fiction has blurred, creating fertile ground for manipulation. The recent Iran war became a stark reminder of this precarious landscape, with the Trump administration sounding the alarm about the Iranian regime’s alleged use of generative artificial intelligence to peddle misinformation. President Trump described Iran as a nation built on disinformation, one now amplifying its deceptive tactics with the power of AI. It’s a “terrible situation,” he declared at a March event, depicting an invisible war being waged in people’s minds. Defense Secretary Pete Hegseth echoed these concerns, pointing to fake footage of the USS Abraham Lincoln aircraft carrier engulfed in flames. “These AI-generated images are meant to make it look like something’s happening when the exact opposite is,” Hegseth explained, highlighting a tactic designed to deceive not only the world but also Iran’s own populace. This wasn’t entirely new territory. Back in 2018, the U.S. State Department launched the Iran Disinformation Project, an initiative designed to expose Iran’s false narratives. But the program, which ironically ended up trolling journalists and academics, was shelved in 2019, and other offices dedicated to combating foreign influence have since been dismantled. In the effort to fight an unseen enemy, the United States has in effect disarmed itself, creating a vulnerability adversaries can exploit.
While the discussion around Iran often centers on disinformation, experts suggest the regime’s influence operations are far more intricate and expansive. It’s not just about isolated incidents of fabricated content, though those certainly exist, like Iran-backed accounts spreading fake news. The bigger picture involves a sophisticated network where the wave of misinformation on social media can’t always be directly traced back to Tehran. Emerson Brooking, from the Atlantic Council’s Digital Forensic Research Lab, explains that Iran’s state propaganda is often disguised, woven through fake news websites, inauthentic social media accounts, and proxy media outlets. These entities subtly push the regime’s agenda under the guise of independent reporting, making it difficult for the average person to discern the truth. Brooking emphasizes that while the content is biased and covertly placed, “it is rarely wholly invented.” This highlights a crucial distinction: Iran isn’t always creating entirely new fabrications; instead, it’s adept at twisting existing narratives and presenting them through a manipulated lens. He describes Iran as a country that has honed “clandestine propaganda” into a core national security tool, a problem that, while serious, differs from outright “disinformation” as most people understand it. Furthermore, Mahsa Alimardani of Witness, a human rights organization, points out that Iran’s control over information extends beyond propaganda to include severe censorship and internet shutdowns, effectively silencing dissenting voices and controlling the flow of information within its borders.
Iran’s deep-rooted expertise in influence operations dates back to 2010, marking a sophisticated and sustained effort to shape global perceptions. These tactics encompass disinformation campaigns, overt propaganda, and subtle influence operations, often overlapping and blurring the lines between them. A clear example of disinformation, for instance, would be an official government account intentionally posting an AI-generated image to mislead people into believing a fictional event truly transpired. Influence operations, in their broader sense, aim to manipulate public opinion through a tapestry of inauthentic accounts and carefully curated, and often inaccurate, information. The Atlantic Council goes so far as to label Iran a pioneer in the development of digital influence capabilities. After the 2009 pro-democracy Green Movement, often dubbed the “Twitter Revolution,” Iran recognized the power of digital platforms and began meticulously building its influence infrastructure. By 2011, it had cultivated a formidable network, recruiting thousands of individuals trained in blogging, content creation, and multimedia design. These individuals, along with an army of bots and strategically created Facebook and Twitter accounts, were deployed to disseminate Iran’s message, all while concealing their state-sponsored origins.
Darren Linvill, a professor at Clemson University and co-director of the Media Forensics Hub, describes their modus operandi: “These campaigns create social media accounts by hand, integrate those accounts into specific online communities, and then leverage the influence they gain over time to push an Iranian agenda and divide the populations of their geopolitical rivals.” The Media Forensics Hub recently uncovered a network of approximately 60 accounts on various platforms, linked to the Islamic Revolutionary Guard Corps, which had built followings under fabricated identities – think Latina women from Texas or individuals from the British Isles – to spread pro-regime messages and weigh in on divisive political issues like immigration and Scottish independence. Beyond these covert tactics, Iran also leverages an extensive state media apparatus that reports in Farsi, Arabic, and English, consistently pushing a pro-Iranian narrative that is anti-American, anti-Saudi, and anti-Israeli, often interweaving outright disinformation into its broadcasts.
The recent war has indeed served as a grim canvas for Iran’s AI-powered disinformation, yet this portrays only a fragment of the full picture. The Iranian embassy in Austria, for example, posted a harrowing image of a bloody children’s backpack, attempting to link it to a tragic strike on a girls’ school in Minab. While initial investigations suggested the U.S. was responsible for the bombing, which reportedly claimed over 170 lives, predominantly children, the backpack image was later revealed to be AI-generated. This chilling example highlights the regime’s strategy of depicting an “oppressed yet militarily victorious” nation, a narrative central to its wartime message. Mahsa Alimardani poignantly observes the devastating irony: “the regime illustrated real deaths with fabricated imagery, and the identification of those fakes now provides ammunition for people denying the actual bombing occurred.” This tactic has precedent; during the 2025 Iran-Israel conflict, Iranian state media disseminated an AI-generated image of a downed F-35 jet to bolster public confidence in its defense capabilities. Tech giants like Meta are actively combating these operations, having recently dismantled an Iran-linked network of hundreds of Instagram and Facebook accounts that impersonated various personas, from political scientists to cartoonists. However, the sophisticated nature of these operations means new ones are constantly brewing. Alimardani has identified three distinct forms of misleading information: real events with a government-approved spin, state-generated AI disinformation, and accounts spreading regime narratives using AI, whose motivations, she notes, are often shrouded in ambiguity. The danger, as Max Lesser of the Foundation for Defense of Democracies points out, is that the proliferation of AI-generated content has weaponized skepticism, allowing people to dismiss authentic evidence as fake. The simple claim that something “looks AI-generated” has become a powerful, low-effort tool to discredit genuine documentation, undermining our ability to discern the truth.
Beyond the immediate conflict, Iran’s disinformation campaigns strategically target not only U.S. elections and other global issues but also its own population and the Iranian diaspora. The Islamic Republic artfully crafts its messaging within an ideological framework of anti-imperialism, solidarity with Palestine, and resistance to Western dominance. This resonates deeply with audiences in the Global South and with certain far-left groups in the West, amplifying Iran’s influence on a broader scale. Following the Hamas attacks on October 7, 2023, Microsoft observed a significant surge in Iran’s influence operations, demonstrating their agility in capitalizing on global events. A particularly chilling example involved an Iranian cyber group hijacking streaming services in the UAE, Canada, and the UK in December 2023, broadcasting an AI-generated anchor presenting fabricated news of Palestinian casualties. Closer to home for many, the U.S. has often seen these operations surface around elections. In 2024, for instance, an Iranian group, dubbed “Storm-2035,” operated four websites disguised as American news outlets, some even using AI to repackage legitimate news from other sources. The Islamic Revolutionary Guard Corps even attempted to hack the Trump campaign in 2024, seeking to leak stolen materials to journalists. However, Alimardani’s 2021 study, analyzing millions of tweets linked to Iranian influence operations between 2008 and 2020, revealed a surprising truth: the primary target was the Arab world, not the United States, and over 86% of these Arabic tweets failed to generate significant engagement. This highlights a critical, often overlooked aspect of Iran’s strategy: its most consistent target for information operations is its own population, particularly during periods of domestic unrest. 
During protests in 2026, for example, the regime actively discredited protestors and spread false claims of foreign instigation, often through internet shutdowns that ensured only government-approved narratives reached the public. Iran’s government spokesperson acknowledged the tactic explicitly on March 10, stating that internet access would be provided to “those who can carry our voice further.” Brooking adds that Iranians living outside the country are also targeted, with threats and surveillance becoming part of the same apparatus that runs social media campaigns, creating a pervasive climate of fear and control.
The irony of the current situation is stark: even as the White House increasingly attributes setbacks in the war effort to Iranian disinformation, the U.S. government’s own capacity to monitor foreign influence operations has been significantly curtailed. During Trump’s second term, critical offices like the Federal Bureau of Investigation’s Foreign Malign Influence Task Force and the State Department’s Global Engagement Center were shut down. This, as Brooking aptly puts it, means “we have effectively made ourselves blind to this threat.” It’s a deeply concerning paradox: the warnings about foreign manipulation grow louder, while the tools and expertise to counter it are systematically dismantled. Imagine a doctor warning of an epidemic while simultaneously closing down the research lab and sending home the scientists. The consequences of this deliberate weakening are profound. It leaves the public vulnerable, making it harder to discern truth from falsehood, especially in an age where AI can generate believable but entirely fabricated content with chilling ease. The ability to identify influence operations, to understand their scope and their targets, is paramount in safeguarding democratic processes and informed public discourse. Without robust governmental bodies and experts dedicated to this task, the battle against disinformation becomes an uphill one, potentially leaving societies susceptible to manipulation and division. In a world where information is power, intentionally blinding ourselves to the subtle and sophisticated tactics of those who wish to sow discord is a dangerous gamble, risking not only our understanding of global events but also the very fabric of our communities.

