Web Stat
Artificial intelligence falls for fake disease, spreads medical misinformation

By News Room | April 14, 2026 | 5 Mins Read

You know those days when your eyes feel scratchy, tired, and maybe a little pink? You might assume it’s just from staring at your computer too long, or perhaps allergies acting up. But imagine if you Google those symptoms and an AI chatbot confidently tells you, “Ah, you might have bixonimania!” You’d probably raise an eyebrow, right? Well, that’s exactly the kind of situation a clever researcher named Almira Osmanovic Thunström, from the University of Gothenburg, set out to create. She didn’t invent a real illness; she invented “bixonimania” to see just how readily cutting-edge artificial intelligence platforms would fall for a fake medical condition and then spread that misinformation as fact. It’s a fascinating and, frankly, somewhat alarming look at how easily even seemingly intelligent systems can be tricked and, in turn, misinform us.

Almira’s experiment began subtly. In March 2024, she started planting seeds on Medium, a popular blogging platform. She wrote a couple of posts describing these eye symptoms – soreness, fatigue, pinkness – as a developing condition linked to too much exposure to blue light. This sounds plausible enough, given our screen-heavy lives, and that’s precisely what made it insidious. Then, in April and May, she upped the ante by publishing two seemingly academic papers on SciProfiles, an academic network. To really sell the illusion, she used a pseudonym, “Lazljiv Izgubljenovic,” and attached an AI-generated photo of the supposed author. What’s truly wild is that Almira wasn’t exactly hiding her tracks. As Nature magazine, which originally broke this story, pointed out, she embedded numerous clues that bixonimania wasn’t real, starting with its very name. She deliberately chose “mania” – a psychiatric term – to signal to any medical professional that this wasn’t a legitimate eye condition. It was her way of winking at the system, saying, “Look closely, this is a joke!”

And the Easter eggs didn’t stop there. Throughout her fake papers, Almira included references to universities that were clearly made up, like “Asteria Horizon University,” “The Starfleet Academy” (a fun nod to Star Trek), and “the University of Fellowship of the Ring” (a Lord of the Rings reference). She even went so far as to include direct, undeniable statements within the papers themselves, explicitly discrediting the research. Imagine reading a scientific paper that says, straight up, “this entire paper is made up” and “fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group.” You’d think, surely, no respectable AI or even human reviewer would miss that, right? But here’s where the experiment takes a worrying turn. Despite these glaring red flags, these highly sophisticated AI platforms, designed to process and understand vast amounts of information, swallowed the bait completely.

The consequences were swift and surprisingly widespread. By April 2024, just a month after Almira started her blog posts, the fake condition was being propagated by major AI platforms as if it were legitimate. Microsoft Bing’s Copilot, for instance, described “bixonimania” as a “rare condition.” Google’s Gemini, when asked about itchy eyes, actually recommended users see an ophthalmologist with concerns about this invented ailment. Perplexity AI, another AI search engine, confidently reported statistics, claiming that one in 90,000 individuals had contracted “the disease.” And perhaps most concerningly, OpenAI’s ChatGPT began diagnosing users with it. It was like a game of digital telephone, but instead of human whispers, it was powerful algorithms amplifying a fabricated illness, turning fiction into perceived fact in the blink of an eye. These platforms, which many people trust for reliable information, were now actively spreading medical misinformation.

The ultimate irony, and a truly unsettling development, came when “bixonimania” managed to infiltrate a scientific journal. Cureus, a publication under the reputable Springer Nature umbrella, actually published a paper that cited Almira’s fake research as legitimate sources. This wasn’t just an AI misstep; this was a breakdown in the human-driven peer-review process that is supposed to safeguard scientific integrity. As Alex Ruani, a doctoral researcher in health misinformation at University College London, aptly put it, this marked a “failure of the scientific process.” It wasn’t until Nature magazine, the same publication that broke the story of Almira’s experiment, alerted Cureus editors about the fraudulent citations that the paper was finally retracted, nearly two years later, in March 2026. This incident really highlights how easily misinformation, once it gains a foothold, can cascade through seemingly authoritative channels.

When Nature and Breitbart reached out to the AI companies involved, the responses were either silence or an assertion that their AI technology had “improved significantly” since Almira’s experiment. While improvement is always welcome, this incident serves as a stark reminder of the fragile line between innovation and irresponsibility in the age of AI. Ruani’s chilling observation encapsulates the gravity of the situation: “If the scientific process itself and the systems that support that process are skilled, and they aren’t capturing and filtering out chunks like these, we’re doomed.” Ruani further characterized the episode as a “masterclass on how mis- and disinformation operates.” This isn’t just about a made-up eye condition; it’s a profound warning about the potential for AI to undermine the foundations of trust and knowledge, especially in critical areas like public health. It forces us to ask: if AI can be so easily fooled by something so obviously fake, how much other, more subtle misinformation might it be absorbing and then feeding back to us as truth? The consequences, particularly in healthcare, could be devastating.
