Digital platforms are funding disinformation and their own opacity prevents the phenomenon from being fully studied · Maldita.es

By News Room · March 19, 2026 · 7 Mins Read

It’s a crazy world out there, and sometimes it feels like we’re drowning in a sea of information: some good, some… not so good. You might think people spread lies or misleading stuff just to mess with us or push an agenda, and sure, that happens. But here’s the kicker: for a growing number of people, spreading disinformation is a full-time job, a way to make a living. They’ve figured out that wild, emotional, or just plain shocking content grabs our attention hook, line, and sinker.

And the algorithms that run our favorite social media apps (YouTube, TikTok, Facebook, Instagram, and X, formerly Twitter) love exactly this kind of content. They see it, they amplify it, and suddenly these creators are racking up views and, more importantly, cash, because platforms often pay creators a slice of the revenue from ads placed in their videos. The catch? Most platforms say they have rules against monetizing disinformation. But organizations like Fundación Maldita.es have been digging around, and what they’ve found is disheartening: at least TikTok and YouTube aren’t exactly playing by their own rules. In effect, the platforms are funding the very falsehoods that distort our understanding of the world.

It makes you wonder, doesn’t it? How far would this misleading content get without a financial reward behind it? What’s the real connection between these payouts and the way the platforms’ algorithms push such content into our faces? And are these creators, in their quest for money, helping the platforms themselves rake in more cash? These are tough questions to answer because, frustratingly, only the tech giants have the full picture, and they’re not keen on sharing who they’re paying and for what.
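To make that incentive loop concrete, here is a minimal, hypothetical sketch. This is not any platform’s actual code: the `Post` fields, scoring weights, and payout rates are all invented for illustration. It shows how engagement-weighted ranking plus ad revenue sharing rewards whatever provokes the strongest reaction, accurate or not:

```python
from dataclasses import dataclass

@dataclass
class Post:
    creator: str
    views: int
    shares: int
    comments: int
    watch_time_s: float  # total seconds watched across all viewers

def engagement_score(p: Post) -> float:
    # Hypothetical weights: shares and comments (strong reactions) count
    # far more than passive views. Nothing here checks whether the
    # content is accurate.
    return 1.0 * p.views + 25.0 * p.shares + 10.0 * p.comments + 0.1 * p.watch_time_s

def creator_payout(p: Post, revenue_per_1k_views: float = 2.0,
                   creator_cut: float = 0.55) -> float:
    # Hypothetical revenue share: the platform pays the creator a cut of
    # the ad revenue attributed to their video, so more amplification
    # means more cash.
    return (p.views / 1000) * revenue_per_1k_views * creator_cut

feed = [
    Post("sober_explainer", views=5_000, shares=40, comments=60, watch_time_s=90_000),
    Post("shock_hoaxer", views=5_000, shares=900, comments=1_200, watch_time_s=150_000),
]

# Ranking by engagement pushes the sensational post to the top, which
# drives its views (and its payout) even higher on the next cycle.
for p in sorted(feed, key=engagement_score, reverse=True):
    print(f"{p.creator}: score={engagement_score(p):,.0f} payout=${creator_payout(p):.2f}")
```

The point isn’t the specific numbers. It’s that accuracy never appears in either function, so content optimized for outrage wins both the ranking and the payout, and each cycle feeds the next.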

Take climate change, for instance. It’s a topic where facts and science are crucial, yet it’s also a hotbed for disinformation. Fundación Maldita.es did a deep dive into YouTube and found something alarming: 20 channels, with a combined 21 million subscribers, were consistently peddling debunked climate myths. Despite YouTube’s own rules against it, every one of these channels was running ads on its videos, meaning YouTube was, knowingly or not, sharing its ad profits with them. It’s like pouring gasoline on a fire while claiming you’re trying to put it out.

TikTok is no better. Maldita.es investigated accounts using AI to create fake protest videos in order to gain followers and unlock monetization. Many of the same accounts also shared misleading videos about climate events: one profile with over 38,000 followers churned out more than 40 synthetic videos after a snowfall in Russia in January 2026, and other accounts posted similar fabrications about floods in Gaza.

And then there’s X, where more than 60 accounts spreading hoaxes about the floods in Valencia (hoaxes Maldita.es had already debunked) proudly displayed the blue checkmark of X Premium, the subscription required to monetize content on that platform. The same blue checkmark adorned accounts in a campaign that, after the same disaster, spammed replies to other tweets with disinformative and even violent messages.

While X takes a hands-off approach to this issue, many other platforms do have policies. Meta says it can demonetize accounts that repeatedly post content flagged as false by fact-checkers. TikTok says it doesn’t allow monetization of content that violates its community guidelines, including climate disinformation. YouTube outright prohibits monetizing messages that contradict the scientific consensus on climate change. Yet time and again, the evidence shows these rules are either poorly enforced or don’t cover the full scope of the problem. It’s a bit like posting a “no swimming” sign at the beach while handing out inner tubes.

These monetization programs, while seemingly a way to support creators, actually pose a significant risk, especially when it comes to sensitive topics. Think about it: if the most shocking, controversial, or attention-grabbing content earns the most money, then these platforms are inadvertently creating an incentive structure where disinformation can thrive. When it comes to climate-related issues, the consequences can be dire. If people are making decisions based on false information, it can endanger public safety and prevent them from accessing reliable data. It chips away at our fundamental right to truthful information. The European Union, with its Digital Services Act (DSA), is trying to tackle this head-on. This law basically tells big online platforms, “Hey, if your design or how you operate helps spread harmful content, you have a responsibility to stop it.” So, if these creator revenue programs are, in effect, rewarding or encouraging the spread of climate disinformation, then platforms must assess that risk and put measures in place to limit its impact. It’s about recognizing that with great power (and profit) comes great responsibility.

But here’s another major hurdle: getting clear data on who’s being paid and for what is like pulling teeth. It’s incredibly difficult to fully investigate this phenomenon because the platforms just don’t share that crucial information. When Fundación Maldita.es does its research, they have to infer monetization based on a channel’s characteristics or publicly available information about how the platforms’ payment systems work. It’s a bit like trying to solve a puzzle with half the pieces missing. Only Meta platforms offer a tiny glimpse, providing minimal details about which accounts are in their program and how many users are registered. Snapchat, to its credit, at least tells us the total amount of money it distributes to creators. Organizations like “What To Fix” have even created comparisons showing just how opaque these platforms are about their monetization practices. This lack of transparency is a huge problem. It makes it nearly impossible for independent groups like Maldita.es to truly understand these financial incentives and how they’re influencing the spread of disinformation on a massive scale. It’s essentially asking us to fight a fire blindfolded.
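Since the platforms won’t say who they pay, researchers end up inferring monetization from signals anyone can observe. A rough, hypothetical sketch of that kind of heuristic follows; the field names and thresholds are invented, and real program eligibility rules vary by platform and are more involved than this:

```python
from dataclasses import dataclass

@dataclass
class ChannelSignals:
    # Publicly observable signals a researcher can collect without any
    # cooperation from the platform.
    subscribers: int
    ads_seen_on_videos: bool       # did ads appear when sampling videos?
    has_premium_badge: bool        # e.g. a paid badge required to monetize
    platform_min_subscribers: int  # published program threshold, if any

def likely_monetized(c: ChannelSignals) -> bool:
    """Heuristic inference, not ground truth: a channel that meets the
    published program threshold and shows ads (or carries a paid badge
    required for monetization) is *probably* in a revenue program."""
    meets_threshold = c.subscribers >= c.platform_min_subscribers
    return (meets_threshold and c.ads_seen_on_videos) or c.has_premium_badge

channel = ChannelSignals(subscribers=38_000, ads_seen_on_videos=True,
                         has_premium_badge=False, platform_min_subscribers=1_000)
print(likely_monetized(channel))  # True -> flag for manual review
```

The gap between “probably monetized” and “confirmed paid” is exactly the half of the puzzle the platforms keep to themselves.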

So, what’s to be done? Making money from online content isn’t new, but today’s digital giants play a massive role in amplifying, encouraging, and even funding disinformation, which makes the problem exponentially worse.

First, the platforms need to own up to their legal responsibilities. It’s not enough to have vague rules about what content they will and won’t pay for. They need clear, enforceable guidelines and, more importantly, the capacity and the will to actually enforce them. If they say they won’t monetize disinformation, then they shouldn’t. Period. Many platforms already require creators to disclose paid partnerships so users know when they’re seeing an ad; the same principle should apply to payments coming from the platforms themselves. Users should have easy access to this information within the interfaces, and accredited researchers should be granted the data needed to properly assess the risks.

Beyond the platforms, the authorities enforcing the Digital Services Act need to step up. They must investigate thoroughly when breaches are suspected and make sure platforms actually correct course: confirm that adequate measures are in place to prevent algorithms and monetization programs from being exploited to spread disinformation, ensure researchers can access monetization data without arbitrary denials, and hold platforms to their own internal rules, because those rules are a promise to users. It’s about demanding accountability and making sure our digital spaces aren’t just echo chambers for financially motivated falsehoods.
