It’s a strange new world we’re living in, where even political leaders have to prove they’re alive and well just to counter bizarre online rumors. Take Israeli Prime Minister Benjamin Netanyahu. He recently filmed himself ordering a coffee in Jerusalem, a seemingly mundane act with a significant purpose: debunking viral AI-generated fakes claiming he’d been assassinated. The rumors, amplified by Iranian state media and pro-regime social media accounts, even depicted him with six fingers, a common giveaway of AI manipulation. The episode illustrates how the line between reality and fiction is becoming increasingly blurred, especially when sophisticated technology is thrown into the mix.
This isn’t just about Netanyahu; it’s a symptom of a larger, more unsettling trend. While Netanyahu was fending off digital death hoaxes, another peculiar scene played out online: an Iranian man was seen hugging a cardboard cutout of Mojtaba Khamenei, Iran’s newly appointed supreme leader. Khamenei, who stepped into his father’s shoes, has been conspicuously absent from public view since his appointment. The juxtaposition shows how both absence and presence can be manipulated in the digital age. Shahriar Kaisar, an expert at RMIT University, sums it up: “battles are now fought not only on the ground, but on social media and media as well.” He points out that in the ongoing conflicts, particularly in the Middle East, AI fakes are being weaponized as a form of psychological warfare, deliberately designed to erode trust. The challenge, he explains, is that “the distinction between truth and lie is very blurred. It’s very difficult to understand, you know, what to trust anymore.” This digital fog of war can make it impossible to distinguish genuine tragedy from fabricated narratives, and, chillingly, can even be used to dismiss real atrocities as fakes.
Social media, especially since the recent escalations in the Middle East, has become a minefield of misinformation and disinformation, with different sides relentlessly pushing their own stories, often through outright fabrications. NewsGuard, a US-based research organization, recently reported on the Iranian regime’s systematic disinformation campaigns, particularly its efforts to “exaggerate or entirely fabricate tales of Iran’s military prowess.” These include deepfake videos showing supposed Iranian attacks on US bases, residential buildings in Tel Aviv, and even commercial buildings in Dubai, all completely manufactured. Other manipulative videos depict US and Israeli soldiers supposedly breaking down in tears, expressing their longing for home. Dara Conduit, a political science lecturer at the University of Melbourne, highlights the impact of Iran’s nationwide internet blackout, a tactic that gives the regime an almost unchallenged hold on the internal narrative. This allows it to paint Iran as the victim of an “Israeli and US conspiracy,” framing its actions as a strong and necessary fight back, even when the events are entirely fictional.
While AI-generated videos grab headlines, it’s crucial to remember that disinformation isn’t limited to cutting-edge technology. Authoritarian regimes have always been masters of propaganda, using state media to shape public perception for decades. Conduit notes that Iran, in particular, has been actively engaged in social media disinformation for at least a decade. In 2019, Twitter (now X) purged 4,800 accounts linked to Iranian regime-related misinformation. More recently, in February of this year, the Institute for Strategic Dialogue in the UK reported on the regime’s “wide propaganda campaign” on social media in response to nationwide protests. Beyond AI, we’re seeing a resurgence of “old-school” disinformation tactics. Social media influencers, including some Australians, have unwittingly or deliberately amplified misleading videos, such as one claiming to show attacks on CIA headquarters in Dubai, which was actually footage of an unrelated fire from 2015. Even more chilling are staged videos, like those shared by Iranian journalist Masih Alinejad, exposing actors portraying distressed citizens on state television, grieving over fictional attacks and bombings. While independent verification is crucial, these examples underscore how easily emotions can be manipulated for political ends.
Ultimately, disinformation is a potent weapon in psychological warfare, and its primary goal isn’t always to convince you of a specific lie. As Conduit explains, “just one of the most powerful ways that disinformation can have an impact is by creating distrust.” By flooding the information sphere with conflicting, often contradictory narratives, the aim is to overwhelm and confuse people, pushing them either to disengage entirely or to simply stop believing anything they hear. This pervasive skepticism can be profoundly damaging to democratic processes and social cohesion: when trust in information sources crumbles, it becomes extremely difficult to hold informed public debate, keep power accountable, or collectively address real-world challenges.
So, how do we navigate this treacherous information landscape? Shahriar Kaisar calls for a “collective effort” involving media, legislators, and the public to combat disinformation in its many forms. He acknowledges the difficulty of detecting sophisticated fakes, but points to positive developments: social media platforms are starting to integrate fact-checking and AI detection tools. He also highlights the need for robust legal frameworks, noting that while Australia has deepfake laws covering pornographic images, similar protections against other types of deepfakes are still lacking. Research is also underway to “fight fire with fire,” developing AI tools to identify AI-generated fakes. For individuals, Kaisar offers a simple yet powerful mantra: “think before you share.” He advocates the “ABC rule”: check the “Actor” (are their movements and posture natural), the “Background” (do the objects in the scene make sense), and the “Context” (verify the source). This struggle, he concludes, is “an ongoing war between the good and the evil,” a battle for the very nature of truth in our digital age.

