It feels like we’re constantly swimming against a relentless current of fake news, outright lies, and sneaky half-truths. It’s truly overwhelming. Just think about it: over half of us now get our daily dose of news from social media, yet fewer than a third of those creating that content bother to fact-check anything. What’s worse, a staggering 80% of influencers have been flagged by consumer protection agencies for misleading advertising. And now, with the rise of AI, the problem has multiplied: creating convincing fakes, from deepfake videos to fabricated articles, is easier and cheaper than ever before. We’ve entered a strange new world where, even when we understand the tricks, it’s incredibly hard to tell what’s real from what’s made up. It’s a disorienting reality, and honestly, most of us hate living in it. We like to believe we’re intelligent, discerning individuals who would never intentionally spread false information. But what if, without even realizing it, we’re part of the problem? What if, unknowingly, you’re a misinformation superspreader? The author suggests it’s far more likely than we might care to admit, and his own experience serves as a stark warning.
The author, a professional writer and speaker, candidly shares his own stumble into the world of misinformation. He admits that he usually prides himself on his ability to sniff out nonsense, and most days he’s probably right. But he recently discovered a mortifying truth: a case study he’d been confidently sharing for years turned out to be completely fabricated. Oops! He had unknowingly become a conduit for a widespread, yet entirely untrue, anecdote: the famous “Great Horse Manure Crisis.” You’ve probably heard it. The story goes that in the late 1800s, London’s overflowing horse population produced millions of pounds of manure daily, threatening to bury the city under ten feet of it. The Times of London supposedly predicted this grim future, and an international urban planning conference in New York in 1898 was allegedly abandoned because no solution could be found for this “unsolvable” crisis. Then, miraculously, the motorcar arrived, and the problem simply vanished. The tale is presented as a clever parable about how difficult it is to predict the future, the rapid pace of technological change, and humanity’s adaptability to disruption. It’s a compelling, thought-provoking narrative, which is precisely why it’s so widely shared.
The catch? It’s all a lie. Despite what seemed like credible sources, including a supposed Times of London article, the entire tale is fiction. There was no manure crisis of that magnitude, and the Times of London never published such a prediction. That urban planning conference in New York was never abandoned, because it never happened. As the author ruefully admits, he had been sharing this and similar stories for years, completely unaware he was spreading misinformation. His experience is a powerful reminder: if it can happen to a seasoned professional, it can happen to anyone. The realization sparked his mission to help others avoid the same pitfall, and he offers five practical tips for identifying misinformation and avoiding becoming an inadvertent superspreader. It’s a crucial call to action in an increasingly complex information landscape, and a reminder that personal vigilance is key to stemming the tide of falsehoods.
His first piece of advice is refreshingly simple, yet shockingly often ignored: check before you share. How many of us have hastily reposted an article or video just because the headline was attention-grabbing, without ever reading or watching the full content? Most of us are guilty of this. Research shows that about 75% of content shared on social media is reposted without the sharer actually reading it first. This habit is a direct route to becoming a misinformation superspreader. We can’t control everything that pops up in our feeds, but we do choose what we amplify. A good rule of thumb, he suggests, is to share only things you’d be happy to be held personally responsible for, or things you’d be proud to have authored yourself. At the very least, take a few minutes to read the content before you put your name and credibility on it by sharing it with your network. This simple act of pausing to consume the full story can drastically reduce the spread of inaccuracies.
Next, he urges us to distrust our emotions. It’s a cold, hard truth that lies often travel farther and faster than facts. This isn’t accidental; fake news is expertly crafted to trigger strong emotional responses – to make us feel upset, confused, excited, or furious. When we’re in a heightened emotional state, we’re far more prone to impulsive actions, like clicking that “share” button. This emotional manipulation is the very essence of clickbait. If you’re reading or watching something online and find your blood boiling or feel overcome by a powerful sense of outrage, that doesn’t automatically mean the content is fake; some real stories do have a profound emotional impact. A strong emotional reaction should, however, serve as a red flag, a cue to pause and dig deeper rather than immediately hitting “like” and “share.” Our emotions are easily exploited, and when they take over, critical thinking takes a back seat.
Finally, the author wisely advises us to demand expertise, to look for incentives, and then, crucially, to correct the record. While no source is infallible, some are clearly less reliable than others. For example, he points out that TikTok has seen over six billion views of ADHD-related content, yet less than half of that information is clinically accurate. That’s billions of views of ADHD misinformation, most of it shared by ordinary people. This shouldn’t be surprising, since most TikTok creators discussing ADHD have no medical qualifications. We tend to believe them anyway. Qualifications don’t guarantee infallibility, but a complete lack of them certainly isn’t a sign of reliability. When dealing with complex subjects like health, technology, or climate science, be wary of unqualified individuals presenting opinions as facts.

This leads directly to the question of incentives: why is this person sharing this information? There’s an entire industry that thrives on engagement. Every click, every share, every view translates into revenue. The incentive isn’t accuracy or truth; it’s virality, and we tend to get what we incentivize. So when evaluating content, ask yourself: is the source motivated by a desire to convey truth, or by something else entirely, like clicks, views, or ideological propagation?

And if, after all this scrutiny, you discover someone has shared something untrue, don’t just stay silent. Become a force for truth by gently correcting the record. A simple “Hey, interesting article, but it turns out this isn’t true – thought you’d want to know!” can go a long way. No single person has all the answers, but by taking individual responsibility for what we consume and share, together we can make a real difference in the ongoing battle for objective truth.

