The ongoing conflict involving Iran, Israel, and the US has become a startling new frontier for information warfare, one in which artificial intelligence (AI) is fundamentally reshaping how we perceive truth. Social media is awash with AI-generated images, deepfake videos, and repurposed video game footage, making it extremely difficult to distinguish genuine events from fabricated narratives. This deluge of synthetic media does more than flood the information space with misinformation; it introduces a sinister new tactic: the weaponization of seemingly technical analyses to discredit authentic evidence. The development was foreseeable: experts and civil society organizations have long warned about the dangers of releasing powerful generative AI tools without proper safeguards, and this conflict appears to be a chilling realization of those predictions. We are witnessing an information environment under extreme pressure, in which AI’s ability to create realistic outputs has advanced dramatically in a short time and the tools have become accessible to a far broader range of actors with diverse agendas.
Against this chaotic backdrop, the struggle for truth is particularly fierce in Iran, where decades of state media control and censorship have already eroded public trust in official sources. The Iranian state readily highlights civilian casualties caused by foreign strikes, yet no comparable infrastructure exists to document the thousands of protesters killed by its own security forces. The result is a dangerous paradox in which authentic documentation of real harm can be simultaneously weaponized for propaganda and dismissed as fake. The near-total internet shutdown in Iran has further isolated its citizens, severing their access to real-time information and preventing them from contributing to the evidentiary record of their own suffering. The sheer volume of AI-generated content in this conflict is unprecedented, overwhelming even professional news organizations and making verification extraordinarily difficult. Authentic evidence is not merely harder to find; it is actively buried under a mountain of digital noise, with consequences that reach far beyond the digital realm.
One of the most insidious emerging tactics is the fabrication of “technical-looking” analyses to undermine genuine evidence. In a widely publicized incident, “heatmap” visualizations were used to discredit authentic photos taken by photojournalist Erfan Kouchari depicting a strike in Niloofar Square, Tehran. These images, distributed by reputable wire services and published by major international news outlets, were genuine photojournalism. Nevertheless, a social media user posted what they claimed were “heatmap overlays” and AI analyses from Gemini and ChatGPT, asserting that the photos were “very likely all AI-generated.” The scientific-looking visualizations spread quickly, lending an air of authority to the false claim. On closer inspection, experts noted that the “heatmaps” were a sham: they resembled no standard forensic output and were likely fabricated themselves. The legend on one read “Low / High / Map,” a nonsensical label to anyone familiar with actual forensic tools. Kouchari himself had to share “original” and “edited” versions of his photos to counter the baseless accusations, underscoring the frustration and resignation felt by those whose work is falsely targeted.
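To see why such visuals carry no evidentiary weight, it helps to appreciate how little effort a convincing “heatmap” requires. The Python sketch below, with hypothetical file names, overlays nothing but blurred random noise on an arbitrary photo and attaches an authoritative-sounding label; no analysis of any kind takes place.

```python
# A fabricated "forensic heatmap": blurred random noise, colormapped and
# overlaid on a photo. Nothing in this script analyzes the image at all.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from scipy.ndimage import gaussian_filter

photo = np.asarray(Image.open("any_photo.jpg"))  # hypothetical file name
noise = gaussian_filter(np.random.rand(*photo.shape[:2]), sigma=20)

fig, ax = plt.subplots()
ax.imshow(photo)
ax.imshow(noise, cmap="jet", alpha=0.45)    # noise rendered as "hot spots"
ax.set_title("'AI-generation likelihood'")  # an invented, official-sounding label
ax.axis("off")
fig.savefig("fake_heatmap.png", dpi=150)
```

A dozen lines of boilerplate yield something visually indistinguishable from the “analyses” that circulated, which is precisely why a heatmap, on its own, proves nothing.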
This manipulation tactic is terrifyingly effective because it leverages the illusion of technical authority. Most people, even experienced investigators, can be misled by visuals that appear scientific, especially when presented alongside references to well-known AI tools. The fact that independent corroboration already existed, with a second photographer documenting the same scene, was completely overshadowed. The “heatmaps” did not need to be factually convincing; they only needed to confirm pre-existing suspicions. Another chilling example involved a photograph from The New York Times depicting crowds in Tehran, released after an announcement concerning the new Supreme Leader. A social media account calling itself an “Empirical Research and Forecasting Institute” shared what it presented as forensic analyses, including an “Error Level Analysis” (ELA), to declare the image “manufactured” and “fabricated.” The post garnered hundreds of thousands of views, spreading the false conclusion across Iranian diaspora communities.
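For contrast, genuine Error Level Analysis is a simple, well-defined procedure. Below is a minimal sketch, assuming Python with the Pillow library and hypothetical file names: the image is re-saved as JPEG at a known quality, and the per-pixel difference is amplified. Regions edited after the photo’s last save often recompress differently and stand out, while uniform residuals are unremarkable.

```python
# A minimal sketch of genuine Error Level Analysis (ELA) using Pillow.
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, scale=15):
    """Re-save as JPEG at a known quality and amplify the difference."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)   # controlled recompression
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)  # amplify residuals

# error_level_analysis("photo.jpg").save("ela.png")  # hypothetical file names
```

Even genuine ELA output demands expert interpretation, and, crucially, it is only meaningful when run on the original file.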
Further compounding the deception, the same account published a “normal map” render and presented it as definitive proof of fabrication. A normal map, however, is a texture used in 3D rendering to describe surface orientation; it has no forensic relevance to a flat photograph. This is what journalist Craig Silverman aptly calls “forensic cosplay”: technical-looking visuals designed to create an illusion of hidden analysis while actually manufacturing authority for a predetermined false claim. The fundamental flaw was that the analysis was not even run on the original image but on a screenshot of an Instagram post, platform interface included. Screenshots introduce fresh compression artifacts that have no bearing on the authenticity of the underlying image, rendering the entire exercise meaningless. Although The New York Times issued a public response explaining the misrepresentation, the false conclusion had already taken root in the communities where the screenshots had spread most widely.
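The screenshot problem can be demonstrated directly. The sketch below, again assuming Pillow and a hypothetical file name, simulates a screenshot or platform re-encode with a second lossy save and measures the average recompression residual. Because the second save alters the residual across the entire frame, any “anomaly” found in such a copy reflects the copy’s compression history, not the original photograph.

```python
# Why ELA on a screenshot is meaningless: a second lossy save changes the
# recompression residual everywhere, swamping any signal from the original.
import io
import numpy as np
from PIL import Image

def mean_residual(img, quality=90):
    """Average per-pixel difference after one controlled JPEG re-save."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    resaved = np.asarray(Image.open(buf), dtype=np.int16)
    return np.abs(np.asarray(img.convert("RGB"), dtype=np.int16) - resaved).mean()

original = Image.open("photo.jpg")  # hypothetical file name

# Simulate a screenshot or platform re-encode with a second lossy save.
buf = io.BytesIO()
original.convert("RGB").save(buf, "JPEG", quality=70)
screenshot_like = Image.open(buf)

print("original file:  ", mean_residual(original))
print("re-encoded copy:", mean_residual(screenshot_like))
```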
This crisis represents a dangerous feedback loop: synthetic media erodes trust in real evidence, and fabricated forensic analysis then undermines confidence in verification itself. The very tools designed to detect manipulation are being repurposed as instruments of manipulation, sowing doubt and confusion about real events and real human suffering. Nor is this solely the work of malicious actors; it is a direct consequence of deploying powerful generative AI technologies without adequate safeguards. Solutions such as content credentials, which embed provenance information with images, already exist, but their adoption remains limited. The cases described here are not isolated incidents; they preview a future in which corrections struggle to keep pace with false claims, and in which authentication tools, even where available, are not integrated into the spaces where disputes actually unfold. Ultimately, when trust in evidence collapses, the greatest casualty is not just truth online but accountability for real-world harm.
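Content credentials address exactly this gap. At their core is a simple idea: a trusted party signs the image bytes together with provenance metadata, so any later tampering becomes detectable. The toy sketch below, using Python’s cryptography package, illustrates only that principle; real standards such as C2PA embed a far richer signed manifest inside the file itself, and all names here are hypothetical.

```python
# A toy provenance credential: sign a hash of the image bytes plus metadata.
# Real content credentials (e.g. C2PA) are far richer; this shows the idea only.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signer = Ed25519PrivateKey.generate()  # stands in for a newsroom's signing key

def issue_credential(image_bytes, metadata):
    payload = json.dumps(
        {"sha256": hashlib.sha256(image_bytes).hexdigest(), **metadata},
        sort_keys=True,
    ).encode()
    return payload, signer.sign(payload)

def verify_credential(image_bytes, payload, signature, public_key):
    public_key.verify(signature, payload)  # raises InvalidSignature if forged
    claimed = json.loads(payload)["sha256"]
    return claimed == hashlib.sha256(image_bytes).hexdigest()

image = b"..."  # placeholder for real image bytes
payload, sig = issue_credential(image, {"photographer": "example"})
print(verify_credential(image, payload, sig, signer.public_key()))         # True
print(verify_credential(image + b"!", payload, sig, signer.public_key()))  # False
```

The hard problem, as these cases show, is not the cryptography but adoption: credentials only help if platforms surface them in the places where authenticity disputes actually play out.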

