In our hyper-connected world, where a single click can instantly spread a thought or an idea to millions, we’ve found ourselves grappling with a serious challenge: misinformation. It’s not just harmless gossip; sometimes, it’s about our health, and it can be downright dangerous. Think about it: a seemingly credible image, perhaps even an AI-generated deepfake of a famous doctor, pops up on your feed, promoting some “miracle cure” that’s completely bogus. This isn’t science fiction anymore. With the rise of generative AI, the sheer volume and speed of these fake health claims are exploding. Organizations like the World Health Organization are genuinely worried, warning us that this digital tide of untruths is eroding our trust in reliable information, especially when it comes to things like vaccines, which are critical for public health. It makes you wonder: are the technologies we’re developing to fight this misinformation – like AI and clever algorithms – actually keeping up with the relentless pace of deception?
For a while there, it seemed promising. In the late 2010s and early 2020s, big social media companies like Facebook and Twitter were actually trying. They put algorithms to work scanning for potentially false news stories, and they partnered with real people – third-party fact-checkers – to verify posts. Dr. Cameron Martel, a marketing professor at Johns Hopkins, led a fascinating study in 2023. He showed over 14,000 Americans headlines – some true, some false – and asked them what they believed and whether they’d share them. Half of them saw warning labels on the false headlines, and the results were pretty compelling: those labels slashed belief in false information by nearly 28% and cut sharing by about 25%. Even people who didn’t trust fact-checkers much still shared 16% less misinformation. In other words, even a simple nudge made a measurable difference.

But then, something shifted. In a move that surprised many, Meta (Facebook’s parent company) announced in January 2025 that it was ditching professional fact-checkers. Instead, it is leaning into “community notes,” where everyday users weigh in on a post’s accuracy. If enough people from different political backgrounds upvote a note, it gets prominently displayed. Dr. Martel believes this could work, but only if the process is transparent and fair. His research even suggests that if these “juries” of ordinary people are big enough, deliberate with one another, and represent a diverse range of viewpoints, they can be just as trustworthy as individual experts, if not more so.
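To make that cross-partisan threshold concrete, here’s a minimal Python sketch of the jury idea. Everything in it is hypothetical – the function name, the thresholds, and the flat list of (user, leaning, found_helpful) ratings are invented for illustration, and X’s production Community Notes system actually ranks notes with a matrix-factorization model over rating patterns rather than simple vote counting:

```python
# A simplified illustration of the cross-partisan consensus rule described
# above. All thresholds and the (user, leaning, found_helpful) tuple format
# are invented for this sketch.

def should_display_note(ratings, min_raters=5, min_groups=2, min_helpful_rate=0.7):
    """Surface a note only if a large, politically diverse jury found it helpful."""
    if len(ratings) < min_raters:
        return False  # jury too small to be trustworthy
    helpful = [r for r in ratings if r[2]]
    if len(helpful) / len(ratings) < min_helpful_rate:
        return False  # not enough overall agreement that the note is helpful
    # Require helpful votes from several distinct political leanings, so one
    # ideological bloc can't push a note to prominence on its own.
    groups = {leaning for _, leaning, _ in helpful}
    return len(groups) >= min_groups

ratings = [
    ("u1", "left", True), ("u2", "right", True), ("u3", "center", True),
    ("u4", "left", True), ("u5", "right", False), ("u6", "right", True),
]
print(should_display_note(ratings))  # True: broad, cross-partisan support
```

The design choice mirrors Dr. Martel’s point: raw popularity isn’t enough. A note surfaces only when support crosses political lines, which is what makes a jury of ordinary users hard for any single faction to game.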
Now, let’s talk about AI. How do people feel about AI doing the fact-checking? We don’t have a lot of answers yet, but some early research offers a mixed bag. For instance, sophisticated AI models like Perplexity and Grok often agree with human-generated community notes about misleading posts. However, in a significant percentage of cases – 21% to 28% – these AI bots labeled posts as true, even when community notes had identified them as misleading. What’s even more concerning is that when Grok launched on X (formerly Twitter) in early 2025, there was a noticeable drop in community note submissions. This hints that people might be seeing AI as a replacement for human-led fact-checking, rather than a helpful assistant. Dr. Martel points out that AI is really good at spotting “well-debunked conspiracy theories or often-repeated myths.” But it struggles, and often fails spectacularly, when it comes to breaking news. Al Jazeera, for example, reported that Grok had a tough time recognizing AI-generated images in conflict zones and even incorrectly claimed a trans pilot caused a helicopter crash during a breaking news event. As Dr. Martel explains, “Large language models don’t have any existing corpus of information about what’s happening currently.” Yet, we’re seeing anecdotal evidence that people are still trying to use these AI tools to understand unfolding events, which is “troubling.”
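For readers curious what that 21% to 28% figure actually measures, here’s a toy version of the comparison. The dictionary layout and labels are made up for illustration; the real studies run this kind of check over large samples of annotated posts:

```python
# Toy version of the agreement check described above: of the posts that
# community notes flagged as misleading, what fraction did the AI call true?

def ai_vs_notes_disagreement(posts):
    """Share of note-flagged posts that the AI labeled true anyway."""
    flagged = [p for p in posts if p["note_label"] == "misleading"]
    if not flagged:
        return 0.0
    contradicted = [p for p in flagged if p["ai_label"] == "true"]
    return len(contradicted) / len(flagged)

posts = [
    {"id": 1, "note_label": "misleading", "ai_label": "misleading"},
    {"id": 2, "note_label": "misleading", "ai_label": "true"},  # AI misses it
    {"id": 3, "note_label": "misleading", "ai_label": "misleading"},
    {"id": 4, "note_label": "misleading", "ai_label": "true"},  # AI misses it
]
print(f"{ai_vs_notes_disagreement(posts):.0%}")  # 50% in this toy sample;
# the research above puts the real figure at roughly 21% to 28%.
```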
Ultimately, Dr. Martel sees a lot of potential if we combine these different approaches: community notes, AI fact-checking, and professional fact-checkers. Imagine AI flagging breaking news that it can’t quite verify and instantly sending it to human experts, or social media users rating the accuracy of AI-checked information. AI and algorithms could even learn from real-time feedback from human fact-checkers. This multi-layered approach has “great promise.” But there’s a big caveat: these systems need to be open about how they work, constantly checked for accuracy, assessed for effectiveness, and continuously improved. And that’s where the hope dwindles. “Right now, it seems like there is no corporate will to invest heavily in these types of content moderation practices,” Dr. Martel admits. “So while I’m theoretically hopeful about these technologies, in practice, I’m less hopeful.” It’s a stark reminder that even the best technology needs a human commitment behind it.
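To make that layered design concrete, here’s a sketch of what such a triage pipeline might look like. It’s purely illustrative – the class names, the 0.8 confidence threshold, and the ai_check stub are all assumptions, and no platform is known to run exactly this system:

```python
# Sketch of the multi-layered moderation idea described above: AI triages
# first, routes breaking news and low-confidence calls to human experts,
# and logs human verdicts as feedback the model could later be tuned on.
# All names and thresholds here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Verdict:
    label: str         # "true", "misleading", or "unverified"
    confidence: float  # model's self-reported confidence, 0-1
    source: str        # "ai" or "human"

@dataclass
class ModerationPipeline:
    human_queue: list = field(default_factory=list)
    feedback_log: list = field(default_factory=list)  # training signal for the AI

    def triage(self, post_text: str, is_breaking_news: bool) -> Verdict:
        verdict = self.ai_check(post_text)
        # Breaking news and low-confidence calls escalate to human experts:
        # LLMs lack a reliable corpus about unfolding events.
        if is_breaking_news or verdict.confidence < 0.8:
            self.human_queue.append(post_text)
            return Verdict("unverified", verdict.confidence, "ai")
        return verdict

    def record_human_verdict(self, post_text: str, label: str):
        # Human fact-checker verdicts become feedback for retraining.
        self.feedback_log.append((post_text, label))

    def ai_check(self, post_text: str) -> Verdict:
        # Stub: a real system would call a fact-checking model here.
        return Verdict("true", 0.55, "ai")

pipeline = ModerationPipeline()
v = pipeline.triage("Miracle cure endorsed by famous doctor!", is_breaking_news=False)
print(v.label, len(pipeline.human_queue))  # "unverified" 1 -> escalated to humans
```

The escalation rule is the part worth noticing: because large language models have no reliable knowledge of unfolding events, breaking news bypasses the AI’s verdict entirely and goes straight to the human queue – exactly the division of labor Dr. Martel describes.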
Beyond direct fact-checking, there’s another powerful strategy: “content-neutral” interventions. These aren’t about labeling specific posts as true or false, but about encouraging critical thinking in general. Dr. Hause Lin, a researcher at MIT and Cornell, and a data scientist at the World Bank, explains that since we can’t possibly predict all the “weird content” people will create, it’s better to equip people to spot propaganda tactics and think critically. His 2023 research showed remarkable results. He and his colleagues used Facebook and Twitter ads that simply prompted users to consider the accuracy of information before sharing it. On Facebook, with 33 million users, these prompts led to a 2.6% reduction in misinformation sharing among those who had previously spread it. On Twitter, with over 157,000 users, the reduction was up to 6.3%. While these percentages might seem small, a 6% reduction among millions of users is a massive impact, achieved with relatively “low cost.”
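The back-of-envelope arithmetic behind that claim is worth spelling out. In this sketch, only the user counts and the relative reductions come from the studies above; the baseline of one misinformation share per user is an invented placeholder:

```python
# Back-of-envelope impact arithmetic. User counts and relative reductions
# are from the studies cited above; the baseline sharing rate is a
# placeholder assumption.

def shares_prevented(users, baseline_shares_per_user, relative_reduction):
    """Absolute number of misinformation shares avoided."""
    return users * baseline_shares_per_user * relative_reduction

fb = shares_prevented(33_000_000, baseline_shares_per_user=1.0, relative_reduction=0.026)
tw = shares_prevented(157_000, baseline_shares_per_user=1.0, relative_reduction=0.063)
print(f"~{fb:,.0f} fewer shares on Facebook, ~{tw:,.0f} on Twitter")
# -> ~858,000 fewer shares on Facebook, ~9,891 on Twitter
```

Even under that conservative one-share-per-user baseline, the Facebook campaign alone works out to roughly 858,000 fewer misinformation shares – which is why a “small” percentage at platform scale is anything but.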
Dr. Lin’s goal was to shift people from an emotional reaction to a more thoughtful one. “When people are scrolling, they are often not thinking reflectively but intuitively,” he says. They see something that makes them angry or excited and immediately want to share it. “If you slow them down just a little bit, and say, ‘Do you want to think more about whether this is true?’ that actually reduces misinformation.” However, even these clever, scalable solutions face a harsh reality: they might not align with a company’s bottom line. Dr. Lin’s own research, for example, showed that celebrity messages aimed at countering hate speech in Nigeria, while effective in reducing the sharing of hateful content, also led to people spending less time on Twitter overall. This highlights an uncomfortable truth: effective interventions sometimes reduce engagement, which can be seen as bad for business. While there’s a growing body of evidence supporting these multi-pronged efforts – from fact-checking to critical thinking prompts – the big question remains: are social media companies willing to truly invest in these initiatives for the greater good of society, even if it means sacrificing some profit? The answer, for now, is still up in the air.

