It feels like we’re living in a world where things that aren’t real can look incredibly, shockingly real. We’re talking about AI-generated videos, the kind that are popping up on platforms like Elon Musk’s X (formerly Twitter). Imagine seeing a video that shows American soldiers captured by Iran, or a once-bustling Israeli city reduced to rubble, or even U.S. embassies engulfed in flames – and it all looks so convincing that your gut reaction is pure alarm. This isn’t just about a few doctored photos anymore; it’s a flood of lifelike deepfakes, hitting us at a time when the Middle East is already in such turmoil. It’s a stark reminder of how difficult it’s becoming to separate fact from fiction, especially with the sheer volume of these AI-created images and videos. Researchers are saying this is unlike anything they’ve seen in previous conflicts, and it’s leaving many people scrolling through their feeds utterly confused about what’s actually happening in the world.
The problem has become so significant that X, under Elon Musk, felt pressured to act. They recently announced a new policy: if you’re a creator in their revenue-sharing program and you post AI-generated war videos without making it clear they’re artificial, you’ll be suspended from getting paid for 90 days. Repeat offenders, according to X’s head of product Nikita Bier, will lose their monetization privileges permanently. On the surface, this sounds like a positive step. X has been heavily criticized in the past for becoming something of a wild west for misinformation, especially since Musk took over. So, for a platform notorious for its hands-off approach, this policy shift felt significant. Even a senior State Department official, Sarah Rogers, praised it, seeing it as a good complement to X’s existing Community Notes system, which allows users to fact-check posts collaboratively. The idea is that by making it harder to profit from fake content, there’s less incentive to create and spread it.
However, anyone who works to counter disinformation is looking at this with a healthy dose of skepticism. Joe Bodnar from the Institute for Strategic Dialogue, for instance, points out that despite the new policy, his feeds are still swimming in AI-generated content about the war. He told Agence France-Presse (AFP) that the creators of these misleading videos and images don't seem to have been put off at all. He even highlighted a premium, “blue check” X account – the kind that’s eligible for monetization – that shared an AI clip of an Iranian “nuclear-capable” strike on Israel. What’s particularly jarring is that this fake video racked up more views than Nikita Bier’s official announcement about the crackdown itself. It really makes you wonder whether the policy is having any real impact on the ground, or whether it’s simply a drop in the ocean.
Part of the issue seems to stem from X’s own business model, which, ironically, might be fueling the fake content machine. Premium accounts, those with the coveted blue checkmarks that can be purchased, are eligible for payouts based on engagement. This creates a powerful financial incentive to post content that goes viral, whether it’s true or not. And AI-generated fakes, especially sensational ones about ongoing conflicts, are practically designed to go viral. AFP’s global network of fact-checkers is constantly battling a torrent of these AI fakes related to the Middle East war, many of them originating from these very premium, monetized accounts on X. They’ve seen videos depicting tearful American soldiers in bombed-out embassies, U.S. troops on their knees surrounded by Iranian flags, and even an entire U.S. Navy fleet supposedly destroyed. The sheer volume of this fabricated content, often mixed with real imagery, is overwhelming, growing much faster than professional fact-checkers can debunk it. To make matters worse, X’s own AI chatbot, Grok, has even been observed incorrectly telling users that some of these AI war visuals were real, inadvertently adding to the confusion instead of clarifying it.
The problem runs deeper than just the monetization incentive for individual users. There are also concerns about what X itself might be profiting from. A recent report from the Tech Transparency Project alleged that X was generating revenue from dozens of premium accounts belonging to Iranian government officials and state-controlled news outlets, which were actively pushing propaganda. This could potentially violate U.S. sanctions. While X did reportedly remove blue checkmarks from some of these accounts after the report surfaced, it highlights a larger issue of platforms struggling to control who benefits from their services, especially when geopolitical conflicts are involved. And even if X’s demonetization policy were perfectly enforced, a huge number of users who post AI content aren’t even part of the revenue-sharing program. These users would still be subject to fact-checks through Community Notes, but even that system has its flaws. A study last year found that over 90% of Community Notes are never actually published, suggesting significant limitations in its ability to effectively counter misinformation.
So, where does that leave us? Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, views X’s policy as a “reasonable countermeasure” against viral disinformation about the war. In theory, he says, it reduces the financial motivation for spreading false content. However, like many others, he emphasizes that “the devil will be in the implementing detail.” It’s incredibly difficult to guarantee that such a policy will be both highly precise – flagging only content that really is AI-generated – and highly effective – catching most of what is. Metadata identifying content as AI-generated can easily be stripped, making detection harder, and as we’ve seen, the vast majority of Community Notes are never published. Ultimately, while X’s effort is a step in the right direction, it feels like we’re constantly playing catch-up in a rapidly evolving landscape of digital deception. The challenge of distinguishing truth from cleverly crafted lies, especially in the heat of conflict, is becoming one of the most defining and unsettling issues of our time.
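To see why that metadata point matters: provenance labels such as EXIF tags or C2PA manifests live alongside an image’s pixels, not inside them. Here is a minimal sketch – assuming the Pillow imaging library, with a hypothetical “Software” EXIF tag standing in for a real AI-provenance marker – of how copying just the pixel data silently discards everything a detector might check:

```python
from PIL import Image

# Create an image carrying a (hypothetical) provenance marker in EXIF.
img = Image.new("RGB", (64, 64), "gray")
exif = Image.Exif()
exif[0x0131] = "ExampleAIGenerator 1.0"  # 0x0131 = standard EXIF "Software" tag
img.save("tagged.jpg", exif=exif.tobytes())

# The label survives a normal round-trip through the file...
with Image.open("tagged.jpg") as f:
    print(dict(f.getexif()))  # the Software tag is present
    # ...but copying only the pixel data leaves the metadata behind.
    clean = Image.new(f.mode, f.size)
    clean.putdata(list(f.getdata()))

clean.save("stripped.jpg")

with Image.open("stripped.jpg") as f:
    print(dict(f.getexif()))  # empty: the provenance marker is gone
```

Stronger provenance schemes like C2PA are cryptographically signed, but the signature still travels with the file in the same way – a screenshot or re-encode yields a clean, unlabeled image, which is why platform detection can’t lean on metadata alone.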