
Memeification and digital slop: AI and the fog of war

By News Room · March 30, 2026 · 9 min read

While OpenAI was pulling the plug on its video-making app, Sora, something wild was brewing. Iran, seemingly out of nowhere, jumped headfirst into the world of synthetic propaganda, and it is winning the global meme war, big time. It might sound like a weird coincidence, but it sheds a powerful light on the sneaky, asymmetric ways modern media now operates. What we’re seeing is a masterclass in how to weaponize culture and online communities, all thanks to powerful tools handed out by American tech companies – tools that essentially turn everyone into the creator of their own “reality distortion field,” like personal drones for the news feed.

Imagine this: Donald Trump was frantically trying to calm down the oil markets, and at the same time, a flood of meticulously crafted videos started hitting X (formerly Twitter). These weren’t just random clips; they were carefully designed to resonate with American, regional, and international audiences, spewed out by embassy accounts, Russia Today, and even disgruntled “Maga” influencers. And here’s the kicker – people online are saying these videos are good. Some tap into the very specific, often eccentric, online language of the American right. Others cleverly remix beloved Hollywood characters and imagery, in a way that Disney had hoped to control with its now-defunct partnership with OpenAI. Then there are the more overtly religious ones, portraying figures like Donald Trump and Benjamin Netanyahu as worshippers of Baal, a demon god featured in both the Quran and the Hebrew Bible. The Lego Movie, surprisingly, is a rich source for these creators, as are trendy TikTok formats and those idealized AI-generated figures that were a staple of Trump-era meme makers. And importantly, these aren’t just fake war videos; they openly celebrate their artificiality. Some are sentimental, some triumphant, and many overflow with the kind of gleeful, mischievous humor you’d find among teenagers on Discord.

For a long time, experts have warned that these advanced AI tools would chip away at the credibility of visual evidence, compounding and accelerating the damage done by older, cruder forms of fakery – Photoshop jobs, or doctored gaming clips passed off as combat footage. Well, we’re definitely there, and have been for a while. Russia, for instance, has been a grandmaster of this game, both in Ukraine and in its ongoing influence campaigns around the globe. But others are learning fast. Remember last year, when India and Pakistan had a short aerial scuffle? Social media was absolutely swamped with misinformation, drowning out traditional news coverage. And more recently, Israel’s devastating actions in Gaza were accompanied by a relentless and overwhelming storm of visually compelling misinformation, propaganda, and outright official lies. This isn’t just a fleeting trend; it’s a new norm. On March 28th, Israel conducted a targeted strike in southern Lebanon, killing three journalists. It claimed, without any proof, that one of them, Ali Shoaib, was a member of Hezbollah. To reinforce this narrative, it later distributed a photograph of him in military fatigues, only to sheepishly admit to Fox News that the uniform had been photoshopped in because no such picture existed.

Meanwhile, back in the US, during the Trump administration’s domestic battles against immigrants and political opponents, we saw an alarming shift in what we expect from official communication. The idea that it should be based on facts seemed to vanish. A prime example? The White House shared altered footage of a prominent Minneapolis activist, Nekima Levy Armstrong, being arrested in January. In the version posted by the official White House account, a handcuffed Levy Armstrong was shown sobbing, her skin noticeably darkened. In reality, she had faced arrest calmly and with dignity. When reporters pressed them on this obvious manipulation, Deputy White House Communications Director Kaelan Dorr bluntly stated, “Enforcement of the law will continue. The memes will continue.” This response perfectly encapsulates the administration’s willingness to erase the line between a meme and factual reality, using AI to push their preferred narrative as the absolute truth.

The ironic twist for the White House and its allies is that their own decisions – in tech policy, in official communications, and in their approach to press freedom – have inadvertently leveled the playing field in this information war. Tehran’s media strategists, it seems, grasp this new landscape far better than the American establishment, despite the latter’s deep immersion in online culture. Iranian propagandists understand that the value of visual information online has, frankly, been completely trashed. They’ve dealt with this reality for a long time, and have undoubtedly used similar tactics themselves in regional fights for narrative control. Their key insight is this: while it’s still cheap and easy to create and spread fake content, the returns on “coordinated inauthentic behavior” – the researchers’ term for organized disinformation campaigns – are shrinking. They still do it, but it’s no longer where the real action is.

Think about it: Sam Altman, Elon Musk, and Mark Zuckerberg, along with figures like Peter Thiel, Alex Karp, JD Vance, and Donald Trump, have, in a very real sense, collectively created this moment. At their urging, the US has essentially given away its dominant position in valuable, sophisticated information and cultural assets. In exchange, we now live in a media economy overflowing with cheap, abundant content. For the past decade, tech leaders and conservative politicians have consistently worked to undermine the credibility of American journalism and restrict its freedoms. They’ve villainized journalists as “enemies of the people,” seized control of major broadcasting and cultural industries through shady deals, and launched an all-out assault on both press freedom and journalistic standards – two things that once made US news outlets the envy of the world. And to make matters worse, the financial collapse of traditional media companies, fueled by the advertising duopoly of Google and Meta, only deepened the wounds. Jeff Bezos’s Washington Post, for instance, shut down its Middle East bureaus just days before the war began.

Meanwhile, the relentless stream of lies from agency podiums and even the Oval Office has made figures like Karoline Leavitt almost indistinguishable from “Baghdad Bob,” Iraq’s information minister in 2003, whose surreal, truth-dodging press conferences during the US-led invasion became a global mockery. And the controversial “DOGEing” of both the nominally independent Voice of America and the State Department’s Global Engagement Center has left the administration with practically no capacity for broadcast or digital counter-propaganda. When no one can reliably tell the actual truth, we’re left with the AI equivalent of 19th-century editorial cartoons, but produced on an industrial scale and distributed globally. America finds itself at a distinct disadvantage in this information war, especially when it’s facing moral, political, and legal challenges. If anything, Iran, with its blend of social repression and an incredibly rich literary culture, vibrant film scene, and robust advertising market, brings serious capabilities to this fight.

Of course, the erosion of information power was already well underway during the first Trump administration and continued into Joe Biden’s term, in ways that are deeply intertwined with a broader democratic decline. The “trust and safety” frameworks adopted by big platform companies were initially designed – often implicitly – to maintain information authority and ensure it worked in ways that generally supported democracy. After the horrific failures surrounding the Rohingya genocide (which human rights groups and UN investigators blamed Facebook for facilitating) and the widespread fears about manipulation in the 2016 US election, the commercial and political health of Twitter, Facebook, and YouTube was clearly at risk. Tech companies, governments, researchers, and human rights experts began to develop rules and norms for content moderation based on existing standards, tools to detect coordinated inauthentic behavior, and frameworks for crisis response.

The dedicated community of practitioners and institutions that emerged to combat this “flesh-eating virus” attacking the body politic were essentially working with band-aids in a battlefield hospital even before COVID-19, a coordinated attack from the right, and the second Trump victory hit them. Yet, they did manage to impose some critical limits. That entire project, sadly, now lies in ruins. The Stanford Internet Observatory has been shut down. Trust and Safety teams at Meta and X have been disbanded. The national security arm of this effort, once centered around the State Department, is gone, and private funding for countering misinformation has largely dried up. So, where are the “hyperscalers,” the AI titans whose powerful tools are being so effectively used in all of this?

The few remaining trust and safety individuals working at OpenAI dutifully release reports every few months. They detail how they foiled attempts to use ChatGPT for a Chinese influence campaign targeting Sanae Takaichi, the Japanese prime minister, and expose a Russian content mill feeding African newspapers. Sasha Baker, OpenAI’s Head of National Security policy, shared a “Pro-tip for governments” on LinkedIn after a February report: “Please don’t use our products to spread lies online.” Interestingly, in Sam Altman’s vision of “democratic AI,” governments apparently don’t include the United States. OpenAI hasn’t mentioned a single US ally – let alone the US administration itself – in these reports. OpenAI has hired numerous former Clinton, Obama, and Biden officials, and in their work, a strange, diluted version of the old national security approach to information integrity persists, alongside their efforts to sell products to the Pentagon. The company’s leaders seem to treat these issues as either a minor complement to their broader messaging about Western AI or a trivial side note to the much larger questions of AI risk, which are handled at a much higher, almost ethereal, organizational level, similar to how Anthropic operates.

Perhaps the biggest lesson here is that you can’t truly shut down Sora, or put the genie of AI-generated video back in its bottle. If you choose to wage an illegal “war of choice” after abandoning the hard-won advantage of a robust, democratic information environment, high-tech weaponry simply won’t make up for that deficit. On the contrary, you’ll only increase the risk of both tactical failures and strategic geopolitical defeat. When that happens, and in some ways, it already has, those who initiated this war, and their enablers in Silicon Valley, will have only themselves to blame.
