Web Stat
AI Fake News

AI fakes of Middle East war flood X feeds despite new policy

By News Room | March 15, 2026 (updated May 5, 2026) | 5 min read

We’re living in a world where things that aren’t real can look shockingly real. AI-generated videos are flooding platforms like Elon Musk’s X (formerly Twitter): a clip of American soldiers captured by Iran, a once-bustling Israeli city reduced to rubble, U.S. embassies engulfed in flames, all convincing enough that your gut reaction is pure alarm. This isn’t a matter of a few doctored photos anymore; it’s a wave of lifelike deepfakes arriving at a moment when the Middle East is already in turmoil. It’s a stark reminder of how hard it has become to separate fact from fiction, given the sheer volume of AI-created images and videos. Researchers say the scale is unlike anything they’ve seen in previous conflicts, and it’s leaving many people scrolling through their feeds genuinely confused about what’s actually happening in the world.

The problem has become significant enough that X, under Elon Musk, felt pressured to act. The platform recently announced a new policy: creators in its revenue-sharing program who post AI-generated war videos without disclosing that they’re artificial will be suspended from payouts for 90 days. Repeat offenders, according to X’s head of product Nikita Bier, will lose their monetization privileges permanently. On the surface, this sounds like a positive step. X has been heavily criticized for becoming something of a wild west for misinformation, especially since Musk took over, so for a platform notorious for its hands-off approach, the shift felt significant. Even a senior State Department official, Sarah Rogers, praised it as a good complement to X’s existing Community Notes system, which lets users fact-check posts collaboratively. The logic is simple: make it harder to profit from fake content, and there is less incentive to create and spread it.

However, people who work to counter disinformation are viewing this with a healthy dose of skepticism. Joe Bodnar of the Institute for Strategic Dialogue, for instance, points out that despite the new policy, his feeds are still swimming in AI-generated content about the war. He told Agence France-Presse (AFP) that the creators of these misleading videos and images don’t seem to have been deterred at all. He highlighted one premium, “blue check” X account, the kind eligible for monetization, that shared an AI clip of an Iranian “nuclear-capable” strike on Israel. What’s particularly jarring is that this fake video racked up more views than Nikita Bier’s official announcement about cracking down on AI content. It makes you wonder whether the policy is having any real impact on the ground, or whether it’s a drop in the ocean against the overwhelming tide of deception.

Part of the issue seems to stem from X’s own business model, which, ironically, might be fueling the fake content machine. Premium accounts, those with the coveted blue checkmarks that can be purchased, are eligible for payouts based on engagement. This creates a powerful financial incentive to post content that goes viral, whether it’s true or not. And AI-generated fakes, especially sensational ones about ongoing conflicts, are practically designed to go viral. AFP’s global network of fact-checkers is constantly battling a torrent of these AI fakes related to the Middle East war, many of them originating from these very premium, monetized accounts on X. They’ve seen videos depicting tearful American soldiers in bombed-out embassies, U.S. troops on their knees surrounded by Iranian flags, and even an entire U.S. Navy fleet supposedly destroyed. The sheer volume of this fabricated content, often mixed with real imagery, is overwhelming, growing much faster than professional fact-checkers can debunk it. To make matters worse, X’s own AI chatbot, Grok, has even been observed incorrectly telling users that some of these AI war visuals were real, inadvertently adding to the confusion instead of clarifying it.

The problem runs deeper than just the monetization incentive for individual users. There are also concerns about what X itself might be profiting from. A recent report from the Tech Transparency Project alleged that X was generating revenue from dozens of premium accounts belonging to Iranian government officials and state-controlled news outlets, which were actively pushing propaganda. This could potentially violate U.S. sanctions. While X did reportedly remove blue checkmarks from some of these accounts after the report surfaced, it highlights a larger issue of platforms struggling to control who benefits from their services, especially when geopolitical conflicts are involved. And even if X’s demonetization policy were perfectly enforced, a huge number of users who post AI content aren’t even part of the revenue-sharing program. These users would still be subject to fact-checks through Community Notes, but even that system has its flaws. A study last year found that over 90% of Community Notes are never actually published, suggesting significant limitations in its ability to effectively counter misinformation.

So, where does that leave us? Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, views X’s policy as a “reasonable countermeasure” against viral disinformation about the war: in theory, it reduces the financial motivation for spreading false content. But like many others, he stresses that “the devil will be in the implementing detail.” It is very hard to guarantee that such a policy will be both highly precise (accurately identifying AI content) and highly effective (catching most of it). Metadata that labels AI content can easily be removed, making detection harder, and, as we’ve seen, Community Notes often go unpublished. X’s effort is a step in the right direction, but it feels like we’re constantly playing catch-up in a rapidly evolving landscape of digital deception. Distinguishing truth from cleverly crafted lies, especially in the heat of conflict, is becoming one of the most defining and unsettling issues of our time.
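Mantzarlis’s point about metadata is concrete: provenance labels (whether C2PA-style manifests or simple text tags) usually travel as ancillary metadata inside the image file, and re-encoding or filtering the file silently discards them. As a minimal illustration, not any platform’s actual pipeline, here is a pure-Python sketch that builds a tiny 1x1 PNG carrying a hypothetical "AI-generated" provenance tag in a tEXt chunk, then strips all text metadata chunks while leaving the image itself intact:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: 4-byte length, type, data, CRC-32."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def strip_text_chunks(png: bytes) -> bytes:
    """Drop tEXt/iTXt/zTXt chunks, where textual provenance labels live."""
    out, pos = [PNG_SIG], len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length  # length + type + data + CRC
        if ctype not in (b"tEXt", b"iTXt", b"zTXt"):
            out.append(png[pos:end])  # keep pixel data and structure
        pos = end
    return b"".join(out)

# A 1x1 grayscale PNG with a hypothetical provenance label attached.
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
label = chunk(b"tEXt", b"Source\x00AI-generated")
png = PNG_SIG + ihdr + label + idat + chunk(b"IEND", b"")

cleaned = strip_text_chunks(png)
assert b"AI-generated" in png
assert b"AI-generated" not in cleaned  # label gone, image still valid
```

A dozen lines of code, and the label is gone, which is why experts argue that detection cannot rely on embedded metadata alone.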

Copyright © 2026 Web Stat. All Rights Reserved.