It feels like a new front has opened in the ongoing conflict in the Middle East, fought not with bombs and bullets but with pixels and algorithms. We’re seeing a concerning surge of AI-generated videos on platforms like Elon Musk’s X, depicting realistic but entirely false scenes: American soldiers captured by Iran, Israeli cities in ruins, US embassies in flames. These aren’t the clumsy Photoshop jobs of old; they are sophisticated “deepfakes” lifelike enough that ordinary viewers, and even seasoned experts, struggle to tell what is real and what is manipulated. This wave of AI-created visuals is unlike anything witnessed in previous conflicts, and it raises serious questions about the nature of truth in a digital age and the speed at which disinformation can now spread, confusing and sometimes misleading millions. It is a chilling reminder that the information battleground is now as critical as any physical one.
In an effort to stem this tide, X recently announced a noteworthy policy shift for a platform that has faced considerable criticism for becoming a magnet for disinformation, especially since Elon Musk’s acquisition. Under the new rule, creators who post AI-generated war videos without disclosing them as artificially made will be suspended from X’s revenue-sharing program for 90 days; subsequent violations lead to a permanent ban. The move, hailed by some as a necessary step to protect “authentic information” during conflicts, has even drawn praise from officials like Sarah Rogers of the State Department, who sees it as a valuable complement to X’s crowd-sourced Community Notes. The logic is that by targeting creators’ wallets, X can reduce the reach and monetization of inaccurate content. It is an acknowledgment that the financial incentives for sensational content have played a significant role in fueling this disinformation fire.
However, the efficacy of X’s new policy remains a serious point of contention among disinformation researchers, who are largely skeptical of its immediate impact. Many, like Joe Bodnar of the Institute for Strategic Dialogue, report that their feeds are “still flooded with AI-generated content about the war,” suggesting that creators have not been meaningfully deterred. Bodnar highlighted a particularly troubling example: a monetized “blue check” X account posted an AI clip showing an Iranian “nuclear-capable” strike on Israel, a post that garnered more views than Nikita Bier’s official announcement of the crackdown. This raises hard questions about the platform’s ability to enforce its own rules and about the inherent conflict between a system that rewards engagement and a policy meant to curb misinformation. For many creators, the incentive to produce shocking, viral content, even if fake, still outweighs the disincentives.
The problem is compounded by the sheer volume and sophistication of the AI fakes in circulation. AFP’s global network of fact-checkers is grappling with a relentless stream of these fabricated visuals, often originating from X’s premium accounts with purchasable blue checkmarks: AI videos of tearful American soldiers in bombed embassies, captured US troops kneeling beside Iranian flags, even destroyed US Navy fleets. These synthetic images are mixed with authentic footage from the Middle East, making the task of distinguishing reality from fabrication difficult and time-consuming, and the fakes are appearing faster than professional fact-checkers can debunk them. To make matters worse, X’s own AI chatbot, Grok, has reportedly misinformed users seeking fact-checks, wrongly asserting that some AI visuals from the war were real and thereby amplifying the very confusion users were asking it to resolve.
A deeper concern highlighted by researchers is X’s business model itself. By allowing premium accounts to earn payouts based on engagement, X has inadvertently “turbocharged” the financial incentive to peddle false or sensational content, creating a paradox in which the very mechanism designed to reward popular content also rewards harmful disinformation. One striking example involved a premium account that posted an AI video of Dubai’s Burj Khalifa skyscraper engulfed in flames. Even after X’s head of product, Nikita Bier, requested that it be labeled as AI, the post remained online and garnered over two million views without the required disclosure. The episode demonstrates a clear disconnect between policy and practice, and raises questions about the platform’s willingness, or ability, to enforce its rules against high-engagement content, even when it is demonstrably misleading.
Ultimately, while X’s new policy is seen by some, like Alexios Mantzarlis of Cornell Tech, as a “reasonable countermeasure” in principle, since it aims to reduce the financial incentive for spreading disinformation, the “devil will be in the implementation details.” Mantzarlis points out that the metadata identifying AI content can easily be stripped, and that Community Notes, while theoretically helpful, are “relatively rare.” A study by the Digital Democracy Institute of the Americas last year found that over 90 percent of X’s Community Notes are never published, underscoring the limits of crowd-sourced verification. Even with a strong policy on paper, the practical challenges of enforcement, the ease with which AI content can be created and disguised, and those limits mean it is “unlikely that X will be able to guarantee both high precision and high recall for this policy.” In short, while the platform is taking steps, the scale and sophistication of AI-powered disinformation may make this an uphill battle for the foreseeable future.
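Mantzarlis’s point about metadata is easy to demonstrate. Many AI generators tag their output with provenance fields such as EXIF tags or C2PA Content Credentials, yet an ordinary re-save through a common imaging library silently discards them. Here is a minimal sketch (the filenames are hypothetical):

```python
# Minimal sketch: how easily provenance metadata disappears.
# "ai_generated.jpg" is a hypothetical file carrying EXIF tags
# (e.g., a "Software: <AI generator>" field) added at creation time.
from PIL import Image

img = Image.open("ai_generated.jpg")
print(dict(img.getexif()))  # provenance tags, if any, show up here

# Pillow does not carry EXIF over on save unless explicitly told to,
# so a plain re-save yields visually identical content, no provenance.
img.save("stripped.jpg", quality=95)

print(dict(Image.open("stripped.jpg").getexif()))  # typically: {}
```

Screenshots and the re-encoding most social platforms apply on upload have the same effect, which is why disclosure rules that lean on embedded provenance are so hard to enforce at scale.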

