Let’s talk frankly about something that’s become a huge part of our daily lives: social media. We all scroll through it, share on it, and often trust it for information. But what if the very platforms we rely on are drowning us in a sea of untruths? A recent eye-opening report from Science Feedback, a science-focused fact-checking organization, casts a serious spotlight on this issue, and the results are, frankly, alarming. The researchers took a deep dive into six of the biggest social media players – Facebook, Instagram, LinkedIn, TikTok, X (formerly Twitter), and YouTube – across four European countries: France, Poland, Slovakia, and Spain. Think of it as a thorough health check-up for these digital giants, and one platform in particular turned out to be struggling the most when it comes to keeping it real.
And the platform causing the biggest headache? TikTok. The study found that one out of every four posts sampled on TikTok contained misleading information – significantly more than any other platform examined. It’s like discovering that a quarter of the food in your fridge is expired: a serious problem. But TikTok isn’t alone. Even established players like Facebook, YouTube, and X, which we might once have seen as more reliable, were found to be hosting more false content than previous studies had indicated. Disinformation, it seems, isn’t a fleeting trend; it’s a growing problem digging its heels in across the online world. The researchers cast a wide net, examining false narratives across topics that affect our lives daily: the war in Ukraine, health, climate change, migration, and national politics. When they tallied it all up, one area stood out as particularly vulnerable to misinformation: health-related content. That is especially worrying, given how crucial accurate health information is for everyone.
What’s truly chilling about this report is its core message: disinformation isn’t an accidental glitch in the system, or a few bad apples slipping through the cracks. The report starkly warns that misleading content is on the rise, describing it not as “incidental” but as a “persistent, structural feature” of how these platforms are designed and operated. That’s a powerful statement, suggesting that the very architecture of these digital spaces may be contributing – inadvertently or otherwise – to the spread of untruths. The report wasn’t put together in a vacuum; it was a collaborative effort with fact-checking organizations including Newtral, Demagog SK, Pravda, and Check First – the unsung heroes working diligently to verify information. All are signatories to the EU’s Disinformation Code, which was integrated into the EU’s main online rulebook, the Digital Services Act (DSA), in February 2025. It’s a sign of serious commitment to tackling the issue at a legislative level, and a testament to growing global concern about the integrity of information online.
Adding another layer of complexity, the report highlights a rapidly growing threat: AI-generated fakes. We’re no longer talking about cleverly doctored images; we’re talking about sophisticated AI churning out convincing but completely false content. On video platforms, this is becoming a real monster. According to the study, roughly a quarter of the disinformation identified on TikTok (24%) was AI-generated, and YouTube wasn’t far behind at about a fifth (19%). That’s a game-changer, because AI can produce persuasive deepfakes and fake narratives with incredible speed and scale. And while many platforms claim to have policies for labeling AI-generated content, the vast majority of the synthetic videos the researchers encountered carried no such labels. Emmanuel Vincent, who founded Science Feedback and led the study, didn’t hold back in his criticism. He called out the platforms for failing to label AI-generated content while simultaneously allowing the accounts that spread it to keep making money – like a restaurant that boasts about fresh ingredients but knowingly serves food made with unlabelled synthetic additives, all while collecting payment.
This brings us to a critical policy gap. Currently, the EU’s Digital Services Act (DSA), which imposes stricter rules on the largest platforms, doesn’t actually require them to label AI-generated content. This oversight is becoming increasingly problematic, especially as we head into national elections where AI-fueled deception could wreak havoc. While the EU’s Disinformation Code was officially woven into the DSA in February 2025, the commitments within it remain voluntary. And here’s another snag: not all platforms are even signed up to it. In a move that raised more than a few eyebrows, Elon Musk, for example, pulled X out of the code in 2023. Emmanuel Vincent expressed deep concern about this, pointing out that many US-based tech giants have been actively “rolling back fact-checking programs, cutting research partnerships, and reducing transparency” – all while the problem of online disinformation has only “worsened.” It’s a worrying trend, indicating a potential weakening of defenses against the spread of false information precisely when those defenses are needed most.
The response from some of these platforms has, frankly, been boilerplate. A TikTok spokesperson, for instance, told Euractiv that the company removes “harmful misinformation” that violates its community guidelines, adding that over 98% of such posts are removed before they are even reported. That sounds good on paper, but the volume of disinformation documented in the Science Feedback report suggests a significant disconnect between policy and practice: one in four sampled posts containing misleading content paints a much bleaker picture than the company’s statement. And the issue continues to escalate. In a recent development, Poland’s digital minister, Dariusz Standerski, formally asked the European Commission to investigate TikTok specifically over concerns about AI disinformation. This isn’t just about abstract numbers; it’s about real people, real elections, and the fabric of our societies being shaped by what we consume online. It underscores the urgent need for stronger regulatory frameworks, greater transparency from platforms, and a collective commitment to a more truthful and trustworthy digital environment for everyone.