It’s a scary time to be online, especially when global conflicts erupt. Imagine browsing your social media feed, trying to understand what’s happening in a place like Iran, and instead of reliable news you’re bombarded with fake videos of missile strikes, fabricated drone attacks, and manipulated images. This isn’t science fiction; it’s the unsettling reality that Technology Secretary Liz Kendall says she is “deeply concerned” about. She’s not alone: politicians from all sides are witnessing an alarming surge in distorted and manipulated content, particularly around the recent tensions involving Iran. It’s a Wild West out there, where the truth gets lost in a storm of fake visuals and emotionally charged narratives designed to spread like wildfire.
The problem, as digital media expert Timothy Graham of Queensland University of Technology highlights, is the sheer breadth and sophistication of this misinformation. These are no longer just poorly Photoshopped images. We’re talking about strikingly realistic AI-generated videos of fake missile launches and explosions that are almost impossible to distinguish from real footage. Then there’s the more “low-tech” but equally damaging manipulation: old footage from other conflicts repurposed as current events, doctored screenshots of official statements, and even synthetic satellite imagery used to support false territorial claims. What unites these tactics is a potent mix of emotional impact and shareability. The fakes are tailor-made for our scrolling habits, built to hit hard and spread fast, leaving little time for critical thinking.
And who do critics like Timothy Graham see as the main facilitator of this chaos? Elon Musk’s X, formerly Twitter. Graham argues that the platform is fundamentally structured to reward emotionally charged, viral content, even when it’s false. Fake missile strike videos racking up millions of views in mere hours aren’t an accident; they’re a symptom of the system working exactly as intended. X does have a Community Notes system to flag misinformation, but it’s often too slow, taking 15 to 24 hours to act, long after the misinformation has peaked and done its damage. To make matters worse, X’s revenue-sharing program inadvertently incentivizes this behavior, creating a “structural monetary reward” for high-engagement content regardless of its accuracy. X has said it will ban users from profiting from AI war videos that aren’t properly labeled, but that feels like a small step in a much bigger battle.
Mark Frankel, head of public affairs at Full Fact, echoes these concerns, noting that while manipulated content is common in conflicts, its scale around the Iran situation has been “enormous.” He warns that we’re living in “the age of AI,” and with it comes an inevitable “massive uptick in synthetic content online.” This means users are increasingly likely to wade through a sea of false and manipulated information before finding anything reliable. A major factor in this surge is how easily AI tools can now be accessed and used, a significant difference from the early days of the Ukraine war. Beyond profit, some of this fake content is also driven by foreign “bot farms” seeking political advantage: organized efforts to sow discord and push narratives that benefit one side over another, potentially even inspired by state actors looking to manipulate public opinion.
This alarming landscape leads to the crucial question of accountability. Chi Onwurah, chair of the Commons’ Science, Innovation and Technology Committee, rightly points out that social media companies can’t be left to “mark their own homework.” These platforms operate behind a veil of secrecy, sharing only self-selected data with governments. Without independent access to comprehensive data, we’re flying blind: we can’t know the true extent of misleading content, whether it stems from foreign interference, or whether moderation systems are actually effective. Without that basic information, effectively regulating these powerful platforms becomes an impossible task.
Ultimately, the call to action is clear. Onwurah advocates new legislation to regulate AI platforms, imposing duties on them to conduct risk assessments and report on “legal but harmful” content. This isn’t just about catching the obvious fakes; it’s about creating a fundamentally safer online environment in which platforms are held accountable for the impact of the information they host and disseminate. The struggle against online misinformation is a complex one, involving technological change, human behavior, and the immense power of social media companies. But as Liz Kendall and other experts make clear, ignoring it is no longer an option. The integrity of information, especially in times of crisis, depends on a concerted effort to push back against the tide of distortion and to embrace transparency and responsibility.

