It feels like an unsettling rerun. Just as we were beginning to breathe easier after the monumental challenge of the COVID-19 pandemic, a new, much smaller health concern has emerged: a hantavirus outbreak on a cruise ship. And with it has come a familiar chorus of online voices resurrecting the same anxieties and baseless claims that made navigating COVID-19 so much harder. On platforms like X and TikTok, some users are already declaring the outbreak a conspiracy designed to sway elections, or suggesting that hantavirus is a mysterious side effect of the COVID vaccine. Others warn of impending lockdowns and forced vaccinations, even though no such measures have been officially discussed and no hantavirus vaccine is readily available. These posts, fueled by fear and outright falsehoods, have garnered millions of views, pulling countless people back into the vortex of doubt and mistrust we thought we had escaped.
Yotam Ophir, who studies misinformation and conspiracy theories at the University at Buffalo, puts it succinctly: “The conspiracy theories from Covid-19 never really died. They lay dormant for a few years.” Rather than fading away, these deeply entrenched narratives were biding their time, ready to resurface the moment a new health scare provided an opening. Public health experts are quick to reassure us that hantavirus, which rarely spreads from person to person, is nowhere near as threatening as COVID-19, a virus that claimed more than 7 million lives worldwide. Their concern is not the hantavirus itself but the alarming speed with which familiar conspiracy theories have taken hold again. Even if this outbreak is swiftly contained, they warn, it is a sign that when the next major health crisis arrives, public officials will face an uphill battle to win the cooperation needed to protect us all. As Dr. Ophir grimly notes, “The next time when we need to face a big challenge as a society, we’re just not in a good place to cope with it.” The lessons COVID-19 should have taught us about combating misinformation appear to have been forgotten by too many, leaving us vulnerable to the same old tactics.
A significant part of the problem, Dr. Ophir explains, is that the misinformation and distrust generated during the COVID pandemic were never truly addressed. They festered, quietly eroding public trust. A 2024 survey found that a quarter of respondents still wrongly believe COVID vaccines caused thousands of deaths, years after the shots were widely administered and proven safe and effective. In a 2023 survey, more than a third of Americans still believed the virus behind COVID-19 was deliberately released, a theory unsupported by any credible evidence. This lingering doubt has tangible consequences. Some of the people who spread COVID misinformation and deliberately undermined public health institutions now hold positions of influence within those very institutions. Robert F. Kennedy Jr., whose past statements about the coronavirus targeting specific ethnic groups drew widespread criticism, now serves as Health Secretary, a turn of events that further deepens the distrust.
Compounding the problem is an enduring legacy of the pandemic: a well-established infrastructure of online influencers who built their platforms, intentionally or not, around health misinformation, and who now form a ready-made network for spreading new conspiracy theories. They have consistently deployed the same tactics in response to other public health concerns, from measles outbreaks to bird flu scares. John Gregory, who leads the health misinformation team at NewsGuard, an organization that tracks false online narratives, describes it as “the same playbook,” likening it to a “conspiracy theory Mad Libs” in which the specific health threat is swapped out while the underlying conspiracy stays chillingly familiar. It is a formula designed to exploit fear and uncertainty, and it is proving depressingly effective.
The accounts gaining the most traction with this new wave of misinformation are often the same ones that were prominent during the pandemic, showing a disturbing consistency across health crises. Take Dr. Mary Talley Bowden, a Texas physician known for advocating ivermectin as a COVID treatment. Last week she posted on X that ivermectin “should work” against hantavirus, despite a lack of strong evidence of its effectiveness against either virus. The post drew 3.5 million views in a single day, according to NewsGuard. Asked for comment, Dr. Bowden simply directed inquiries to her upcoming book. Representative Marjorie Taylor Greene, who was previously banned from Twitter for violating its COVID misinformation rules, reposted Dr. Bowden’s comments, adding millions more views to the already viral claims. These examples illustrate the reach of established misinformation channels and their continued influence on public perception of health threats.
Social media platforms bear a significant share of the responsibility, acting as amplifiers for disinformation. Their algorithms and revenue-sharing policies often reward sensationalized content regardless of its accuracy, letting false narratives spread like wildfire. Rapid advances in artificial intelligence have intensified the challenge: it is now frighteningly easy to produce fake photographs and short videos that are almost indistinguishable from genuine material. Alethea, a digital risk analysis company, identified one TikTok video featuring an AI-generated map of hantavirus cases dotted with dozens of ominous red clusters across the globe; in reality, fewer than a dozen cases have been confirmed. Another AI-generated image, circulating on X on May 6, purported to show an ashen man being led off a boat that was not even the MV Hondius, the ship actually involved in the outbreak. The caption falsely claimed more Americans were onboard than there were and that they had already disembarked. That image alone garnered 2.5 million views. As Manny Ahmed, founder and CEO of Open Origins, a London-based company that detects fabricated images, puts it, during COVID-19 you still needed a tiny “inkling of truth” to build believable disinformation. “Now,” he warns, “you can just generate entire new scenes. And that is just a capability that misinformation actors didn’t have before.” This new frontier of AI-powered deception makes the fight against health misinformation even more daunting than it was during the pandemic, threatening our collective ability to respond to future crises.

