Propaganda used to feel like something from history books, tucked away in grainy black-and-white footage. But the world has shrunk, and with billions of people now connected through social media, militaries, governments, and even individual groups have an unprecedented ability to shape, distort, or completely derail the truth. It’s a digital battleground where facts can be twisted beyond recognition and outright lies spread like wildfire. We’re seeing this play out in real time in the ongoing Israel-Iran conflict, as a relentless information war rages on our screens. Both the Israel Defense Forces (IDF) and Iran’s Islamic Revolutionary Guard Corps (IRGC) beam their versions of events directly into our pockets through apps like Telegram and X. This constant flood of often contradictory updates is further muddied by well-meaning but sometimes misinformed citizen journalists, cleverly crafted fake reports, and increasingly sophisticated artificial intelligence.
This digital deluge leaves us feeling overwhelmed and, ironically, often less informed. David Andrews, a Senior Policy Advisor at the Australian National University’s National Security College, paints a stark picture: “You get flooded with information and you think you’re being informed, but often you’re just getting pummeled with unverified data.” He hits on a crucial point: most of us simply aren’t equipped to sift through this mountain of information, and the effort isn’t just mentally exhausting; it’s genuinely harmful. It chips away at our ability to think critically, which, as Andrews notes, is precisely what these deceptive efforts aim to do. At its heart, this is a perpetual struggle among powerful actors, chiefly Iran, the US, and Israel, each vying to be seen as the “good guys” fighting an “evil” opposition. And all of them understand that social media isn’t just a communication tool; it’s a weapon for swaying public perception.
When you scroll through your feeds, you’re not just seeing genuine news; you’re often exposed to a torrent of fake or unverified images, old footage presented as new, and completely fabricated reports claiming to come from the heart of the conflict. Consider a recent example: a disturbingly realistic AI-generated video showing Dubai’s iconic Burj Khalifa engulfed in flames circulated widely on Instagram, as did a completely false report claiming that Israeli Prime Minister Benjamin Netanyahu had been killed. The sheer volume and sophistication of AI-generated images and videos mean that unless you’re an expert, it’s incredibly difficult to tell what’s real. More concerning still, there’s often no urgency from either side to correct these falsehoods. Instead, like a digital infection, the fabricated stories simply spread, gaining traction and further muddying the waters.
David Andrews emphasizes just how unprecedented this unchecked flow of disinformation really is. “I think the disinformation campaigns are definitely ramping up now in a way that we haven’t seen before, which is a function of the information environment that we’re in,” he explains. He believes this new landscape presents a golden opportunity that “our adversaries have taken advantage of… quite effectively.” The rise of artificial intelligence, in particular, casts a long shadow over this escalating conflict. As AI models become more advanced, their capabilities for deception grow exponentially. The Institute for War & Peace Reporting highlights this terrifying dual nature of AI, describing it as a “force multiplier, enhancing the speed, precision, and scale of military operations, while simultaneously enabling sophisticated, automated disinformation campaigns.” Andrews warns that as AI-based systems become more commonplace, “those risks only accelerate to an enormous degree.”
It’s not just external actors using these tactics; militaries themselves have quietly spun narratives, fabricating victorious outcomes or exaggerating the impact of attacks. This kind of internal “fake news” can be incredibly potent within a regime, aimed not at fooling the outside world but at managing its own population. Andrews explains, “It could be creating a generalized sense of uncertainty and distrust, a generalized sense that one side is doing much better than they are in reality.” It can also be used to bolster morale and control the narrative domestically, with messages like, “‘Well, look, there’s these reports of how wonderfully our forces are doing and how the enemy has been defeated.'” The unfortunate reality is that “the horse has bolted” when it comes to AI and fake news; there’s no putting it back.
So, what can the average person do in this overwhelming information ecosystem? Andrews offers practical advice for separating fact from fiction, and it starts with taking personal responsibility for what you consume. “You have to read widely and try and find things that sort of complement the quick and reactive with the slower and more considerate,” he suggests. Instant updates have their place, but relying solely on minute-by-minute social media feeds leaves us ill-equipped to understand what’s actually happening. The key, he advises, is to “look at the flow of social media for a little bit, but then you’ve got to sit back and then read reports that happen over multiple days to try and balance that out and apply that critical lens.” In essence, don’t just consume information: engage with it, seek out diverse sources, and allow time for critical reflection before forming conclusions. Our mental well-being and our ability to understand the world depend on it.