Investigating Disinformation in the Age of AI – Global Investigative Journalism Network

By News Room | March 29, 2026 | 8 Mins Read

Let’s dive into the fascinating and often frustrating world of disinformation as seen through the eyes of investigative journalists. Imagine a group of smart, dedicated people from all over the globe, gathered before a big conference and focused on one huge problem: how technology and AI are shaping what we believe, especially when it comes to truth and lies. This gathering, a pre-conference event for GIJC25, brought together about a hundred journalists, editors, technologists, and researchers from nearly 50 countries. Their main goal? To identify the most urgent challenges and opportunities for journalism that exposes how technology influences our world, particularly in the shadowy realm of disinformation.

For more than ten years, “disinformation” hasn’t just been a hot topic for journalists to investigate; it’s been like a tidal wave crashing over the entire news industry, changing everything. At first, the response was pretty straightforward: “See a lie, debunk it, move on.” It was like playing Whac-A-Mole – you’d see a false story, smack it down with facts, and then wait for the next one. This approach made sense in the beginning, when the internet was younger and the spread of misinformation was less overwhelming. But now? It’s completely unsustainable. Think about it: the sheer volume of misleading stuff online, constantly being amplified by clever algorithms designed to keep us hooked, means that simply fact-checking individual falsehoods is no longer enough. It’s like trying to empty an overflowing bathtub with a teaspoon when the tap is running full blast. The entire landscape of information has been fundamentally reshaped by this deluge of misleading and manipulative content, forcing journalists to rethink their strategies completely.

The pre-conference session squarely faced this new reality, pulling insights from powerful investigations in both Europe and Asia. The speakers painted a vivid picture of disinformation not as a series of isolated lies, but as a sprawling, complex ecosystem. It’s an ecosystem powered by the very platforms we use daily, driven by the lure of economic profit, and actively designed and deployed by both governments and other powerful groups. The implications of this are huge, directly impacting the health of our democracies and fundamental civil rights. A key takeaway from all the discussions was the sheer, mind-boggling volume of disinformation circulating today. Journalists highlighted how this overwhelming scale has dramatically altered their daily work. Newsrooms constantly have to make tough, rapid-fire decisions, knowing they can’t possibly react to every single false claim. And even worse, if they react to the wrong things, they risk inadvertently giving those falsehoods even more oxygen, making them spread further.

Jyoti Dwivedi, a journalist from India Today, shared a powerful example from the India-Pakistan conflict in May 2025. She described how disinformation erupted almost instantly, a chaotic mix of propaganda, old videos presented as new evidence, and emotionally charged stories. Imagine seeing old footage of explosions from years ago, completely unrelated to the current conflict, suddenly being shared as proof of fresh attacks – sometimes even claimed simultaneously by both sides as evidence of the other’s aggression. In one particularly chilling example, people were asking chatbots to verify the same video and getting completely different answers, showing how automated systems, designed to be helpful, could actually reinforce people’s existing biases during times of crisis. Faced with this “insane” amount of misleading content, Dwivedi’s newsroom realized they couldn’t possibly debunk everything. So, they changed tactics. Instead of going after every single false post, they started tracking trends – looking at the bigger picture of how disinformation was evolving. They also explicitly warned their audiences not to trust chatbots for verification during conflicts, acknowledging the limitations of AI. And instead of lengthy articles, they started publishing short, easily shareable “fact-check postcards,” designed to cut through the noise quickly. The lesson was incredibly clear: in fast-moving “information wars,” the real investigative value isn’t in disproving every single lie, but in understanding and exposing the larger patterns, the tactics being used, and the systemic weaknesses that allow disinformation to flourish. The image she shared, showing xAI’s Grok chatbot falsely suggesting an old fire video from Bangladesh was an Indian missile attack on Pakistan, perfectly illustrated this challenge.

This brings us to the uncomfortable truth: disinformation doesn’t just spread on its own – it’s actively amplified. It’s pumped up by the very algorithmic systems that social media platforms use, systems that are optimized for engagement, designed to provoke outrage, and engineered for virality. Craig Silverman, co-founder of Indicator, put it succinctly: manipulated content often “wins” because it’s crafted to trigger strong emotional responses. And when algorithms see strong emotions, they take it as a sign to push that content even further into people’s feeds. The speakers at the conference stressed a critical point: journalists need to stop seeing platforms merely as neutral distribution channels and start scrutinizing them as incredibly powerful brokers of information. They challenged journalists to ask tough questions: How do recommendation systems favor certain narratives over others? How do payment structures and monetization schemes actually reward outrage and the spread of disinformation? And how do failures in content moderation disproportionately harm vulnerable communities? These questions are especially urgent in parts of the world where these platforms have effectively become the main public square, with little to no governmental oversight. Disturbingly, the discussion also highlighted how media coverage that only focuses on sensational or shocking content after violence has occurred can, ironically, end up giving even more airtime to the very narratives that extremists want to spread, inadvertently helping them achieve their goals.
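To make the amplification dynamic concrete, here is a deliberately simplified toy sketch. It is not any real platform's ranking system; the posts, signals, and weights are all invented for illustration. It only shows the general mechanism the speakers described: when a feed ranker weights the signals that predict further spread (shares, angry reactions) more heavily than quieter signals, outrage-provoking content rises to the top even with fewer likes.

```python
# Toy illustration (NOT any real platform's algorithm) of how an
# engagement-optimized ranker can favor outrage-provoking content.
# All post data and weights below are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    angry_reactions: int  # proxy for a strong emotional response

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares and angry reactions count for more
    # than likes because they predict further spread ("virality").
    return 1.0 * post.likes + 3.0 * post.shares + 5.0 * post.angry_reactions

posts = [
    Post("Calm, accurate explainer", likes=120, shares=10, angry_reactions=2),
    Post("Misleading outrage bait", likes=40, shares=60, angry_reactions=90),
]

# Rank the feed by engagement score, highest first.
feed = sorted(posts, key=engagement_score, reverse=True)
for p in feed:
    print(f"{p.title}: {engagement_score(p):.0f}")
# The misleading post outranks the accurate one despite far fewer likes.
```

The point of the sketch is the incentive structure, not the specific numbers: any objective that rewards signals correlated with emotional arousal will, as Silverman noted, systematically push manipulated content further into people's feeds.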

Disinformation campaigns are also deeply intertwined with online hate and harassment, acting as a dangerous gateway. Luis Assardo, a journalist and researcher who’s written extensively on trolling, explained how hate speech serves as a precursor to radicalization. It often uses memes, coded language, and “algo-speak” (ways of phrasing things to slip past moderation systems) to draw people in. We’ve seen this pattern repeat itself many times: from gaming communities embroiled in harassment like Gamergate, to coordinated attacks against journalists like Maria Ressa, to the long-term impact of the hateful Christchurch manifesto. When media outlets focus solely on the shocking details of violence after it happens, they often end up amplifying the very hateful narratives that the perpetrators want disseminated. Assardo urged journalists to fundamentally shift their approach. Instead of simply reproducing the hate speech itself, he suggested journalists expose the underlying behaviors and tactics of those spreading it. He also pushed for a move away from focusing on individual perpetrators toward uncovering the wider networks and ecosystems that enable them. And crucially, he advocated for a switch from sensational coverage to contextual, harm-minimizing reporting – journalism that informs without inadvertently empowering the very forces it seeks to expose.

But who is really behind all this? Anuška Delić from the Slovenian investigative journalism site Oštro pointed out that to truly understand disinformation, we need to “follow the money.” While much of the reporting has rightly focused on the narratives and online networks, far less attention has been paid to the economic incentives, the funding sources, and the complex subcontracting chains that actually keep these massive disinformation operations afloat. Investigations like “Story Killers,” a massive collaboration coordinated by Forbidden Stories, and El CLIP’s exposé “Digital Mercenaries,” have peeled back the layers. They’ve shown how professional “mercenaries of disinformation” operate as private contractors, hired to design and run influence campaigns that span across borders. By focusing on these hidden actors, journalists can literally follow the money, identify who is commissioning and profiting from disinformation, and reveal how these campaigns become outsourced, industrialized, and ultimately weaponized – sometimes even against the very accountability journalism that seeks to expose them.

Finally, the discussion brought a crucial perspective: the dynamics of disinformation aren’t the same everywhere, especially not in the Global South. As Dwivedi and others highlighted, structural disadvantages like limited funding for newsrooms, a lack of accountability from powerful tech platforms, linguistic blind spots (where content in non-dominant languages often goes unmoderated), and fragile democratic institutions can make the problem far worse. In these regions, conflicts or global crises often “import” disinformation, where it gets reshaped and recontextualized along existing local social, religious, or political lines. Scammers also jump on these moments, using manipulated content to solicit donations or spread unnecessary panic. Recognizing these systemic issues, the gathering identified several key priorities for investigative journalists: investigating influence campaigns as full-blown systems rather than isolated incidents; prioritizing public-interest harm when deciding which falsehoods to debunk; meticulously following the money trails behind disinformation infrastructures; rigorously examining the incentives embedded within platform design and algorithms; and, above all, protecting audiences by minimizing the inadvertent amplification of harmful narratives. This collective effort, led in part by experienced investigative journalists like Sandrine Rigaud who has dedicated her career to such global collaborations, underscores the urgent need for a more sophisticated, systemic approach to tackling the relentless tide of disinformation in our interconnected world.

Copyright © 2026 Web Stat. All Rights Reserved.