Web Stat

What our second measurement says about misinformation on major platforms in Europe

By News Room · March 19, 2026 · 7 Mins Read

Alright, let’s dive into this report from Science Feedback and its partners, which breaks down how misinformation is warping our online world. Think of it like this: they’re playing detective, checking whether the big social media platforms, the ones we all spend so much time on, are actually doing enough to stop the spread of fake news.

The Big Picture: We’re Drowning in More Misinformation Than Ever

Imagine walking through a bustling marketplace, and every fourth person you meet is trying to sell you something completely false or misleading. That’s pretty much the reality Science Feedback is painting for TikTok, where a quarter of everything you see is questionable. And it’s getting worse! Last time they checked, it was around 20%, now it’s 25%. YouTube isn’t far behind, jumping from about 8.5% to a worrying 12%. What’s truly shocking is that on TikTok, X (formerly Twitter), and YouTube, you’re now more likely to bump into problematic content than credible, trustworthy stuff. LinkedIn, bless its heart, remains an oasis in this misinformation desert, with only 1% of its content raising red flags. This isn’t just a blip; it’s a consistent problem that has either stayed bad or gotten even worse.

Now, let’s talk about the specific types of misinformation that are running rampant. Health misinformation continues to be the biggest culprit, making up almost half of all the misleading posts. It seems like everyone’s got some magical cure or outrageous health claim to share. Following that, discussions around the Russia-Ukraine war and national politics are also heavily polluted with false narratives. It’s like these crucial topics, where accurate information is vital, are being targeted the most by those spreading falsehoods.

The “Misinformation Premium”: Why Lies Get More Likes

This part of the report is frankly infuriating. It seems that accounts spreading misinformation aren’t just getting by; they’re thriving, often getting far more engagement than accounts that share verifiable, credible information. This phenomenon, which they call the “misinformation premium,” is like a popularity contest where the loudest, most scandalous lie wins. On YouTube, an account spewing nonsense gets about eleven times more interactions per post than a reliable one of similar size. X (Twitter) saw this premium skyrocket from four times to a whopping ten times! Facebook and Instagram also show this disturbing trend, though to a slightly lesser extent.

Why does this happen? The report suggests it’s not just a fluke; it’s a “structural feature” of how these platforms work. Their algorithms, designed to keep us scrolling and engaging, seem to inadvertently boost sensational and often false content. It’s a vicious cycle: misinformation generates more buzz, the algorithms pick up on that buzz, and then they show it to more people, who then engage with it, and so on. Again, LinkedIn stands out as the only platform where this unfair advantage doesn’t seem to exist. This persistent “premium” over two separate measurements indicates it’s not a temporary glitch, but a deeply embedded issue in the very design of these platforms, making it incredibly hard for truth to catch up.
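The report doesn’t publish its exact formula, but the “premium” it describes is essentially a ratio of per-post engagement between low- and high-credibility accounts of similar audience size. A minimal sketch of how such a figure could be computed, using entirely hypothetical numbers (the account data and the `median_rate` helper are illustrative, not taken from the report):

```python
# Sketch of a "misinformation premium" calculation: the ratio of median
# interactions-per-post between a low-credibility cohort and a
# high-credibility cohort of comparable size. All figures are made up.
from statistics import median

# Hypothetical per-account data: (follower_count, interactions_per_post)
low_credibility = [(10_000, 4_400), (12_000, 5_100), (9_500, 3_900)]
high_credibility = [(10_500, 420), (11_000, 460), (9_800, 380)]

def median_rate(accounts):
    """Median interactions per post across a cohort of accounts."""
    return median(rate for _, rate in accounts)

premium = median_rate(low_credibility) / median_rate(high_credibility)
print(f"misinformation premium: {premium:.1f}x")  # here, roughly 10x
```

With these toy numbers the low-credibility cohort’s median (4,400 interactions per post) is about ten times the high-credibility cohort’s (420), which is the kind of ratio the report cites for X and YouTube. Using the median rather than the mean keeps one viral outlier post from dominating the comparison.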

Monetization: Platforms Are Accidentally (or Not-So-Accidentally) Funding Misinformation

Here’s where it gets really murky. The report points out that despite promises to demonetize accounts that spread misinformation, platforms are often failing to do so effectively. It’s like finding out that the fire department, while trying to put out a fire, is also unknowingly selling matches to arsonists. On YouTube, a staggering 81% of eligible low-credibility channels appear to be monetized. Think about that: most of the channels pushing out false narratives are actually earning money from it, much like reputable channels. While Facebook shows a wider gap (22% of eligible low-credibility accounts are monetized compared to 51% for high-credibility ones), the fact that over one in five accounts spreading lies are making money is pretty damning.

The biggest hurdle in getting a full picture of this problem is what the report calls “platform opacity”: basically, these platforms aren’t transparent enough with their data. Science Feedback tried to get more information, but its requests were met with silence. This lack of transparency means we can only infer how much money is flowing to these problematic accounts, but even with limited data the conclusion is bleak: platforms are financially supporting the very content they claim to be fighting. It makes you wonder about the sincerity of their demonetization policies, and whether they’re truly committed to cutting off the financial pipeline for misinformation.

The Rise of AI-Generated Lies: An Unlabeled, New Frontier of Deception

This is a brand-new, terrifying element in the misinformation landscape: AI-generated content. Imagine an entire article, image, or video created by artificial intelligence, specifically designed to look real but spread false information, and often without any label to tell you it’s AI-generated. The report found that one in four misinformation posts on TikTok (24%) and nearly one in five on YouTube (19%) contain AI-generated elements. These aren’t minor issues either; this AI-generated misinformation racked up around 34 million views, with TikTok alone accounting for 69% of these.

What’s even scarier is that most of this AI-generated content isn’t labeled. Only 16.5% of it carries any visible sign that it was created by AI. On Facebook, it was a dismal 1.8%, and YouTube was even worse at 0.9%. This means users are largely unaware they are consuming synthetic content, making it incredibly difficult to discern truth from fiction. Health misinformation, once again, dominates this AI-generated category, with a disturbing trend of realistic-looking, AI-generated doctors or impersonations of real doctors spreading false health claims. This is a rapidly growing, poorly managed threat that could make the fight against misinformation exponentially harder, as distinguishing human-generated from AI-generated content becomes nearly impossible for the average user.

Audience Growth: Lies Often Grow Faster

When it comes to building an audience, it seems low-credibility accounts aren’t always at a disadvantage. For most platforms, there wasn’t a significant difference in how fast high- and low-credibility accounts gained followers. However, X (Twitter) once again proves to be an outlier, and not in a good way. On X, accounts that spread misinformation are growing their audiences at about 3.5 times the rate of credible accounts. This means that, despite all the talk about promoting reputable sources, on at least one major platform, the purveyors of falsehoods are expanding their reach much faster.

This audience growth, combined with the misinformation premium (more interactions per post), creates a powerful engine for spreading deceptive content. It’s not just that misinformation gets more engagement; it’s that the accounts spreading it are also acquiring new followers at an alarming rate on platforms like X. This suggests that the algorithmic push for engagement on some platforms is also contributing to the expansion of these problematic accounts, making it harder for users to filter through the noise and find reliable information.

Two Waves, One Clear (and Concerning) Message

The real strength of this report lies in its consistency. This isn’t a one-off finding; it’s the second time these measurements have been taken, and the results are largely the same, or in some cases, even worse. This consistent pattern across two independent measurements means that what Science Feedback and its partners are observing isn’t just random noise or a temporary blip. These are “structural features” of how these platforms operate, ingrained deep within their systems and algorithms.

This consistency validates their methodology, showing that these indicators are robust enough for long-term monitoring. And why does this matter so much now? Because these measurements are designed to feed directly into new regulations, specifically the Digital Services Act (DSA) framework, which came into effect in July 2025. These “Structural Indicators” can now serve as formal benchmarks to audit platforms and ensure they’re complying with the rules. The science is clear, the data is consistent, and the tools are ready. The final piece of the puzzle, as the report eloquently puts it, is the “political will” to actually use these findings to hold platforms accountable and push for meaningful change. It’s a call to action, urging policymakers and platform operators to finally address these deeply embedded issues before our online spaces become completely overwhelmed by an unmanageable tide of misinformation.
