Web Stat
AI chatbots fall for fake diseases and phony studies

By News Room | April 18, 2026 (Updated: April 18, 2026) | 5 Min Read

The Curious Case of Bixonimania: When AI Falls for a Prank, and We Learn a Hard Lesson

Imagine a world where the information you seek, the advice you trust, and even the diagnoses you receive could be rooted in an elaborate prank dreamed up by a mischievous group of researchers. This isn’t a dystopian novel; it’s practically yesterday’s news, thanks to a fascinating and slightly terrifying experiment conducted by a team of Swedish researchers. Led by the brilliant and bold Almira Osmanovic Thunström at the University of Gothenburg, the team cooked up a completely fictitious medical condition, a ludicrous eye ailment they dubbed “bixonimania.” Their goal? To see if the seemingly infallible brains of artificial intelligence chatbots – those ever-present digital assistants we increasingly rely on for everything from dinner recipes to medical queries – would fall for their elaborate ruse. And fall they did, hook, line, and sinker.

The brilliance of the bixonimania scam lay in its sheer absurdity, a kind of medical cartoon that should have screamed “fake” to anyone with a modicum of scientific literacy. Bixonimania, according to the researchers, was a condition characterized by “pinkish eyelids” brought on by too much screen time or excessive eye-rubbing. The symptoms? Sore and itchy eyes – pretty vague, right? But the researchers didn’t stop there. To give their imaginary disease an air of legitimacy, they fabricated scientific papers, complete with fictional authors. The lead researcher was named Lazljiv Izgubljenovic, a delightfully cheeky nod that translates to “The Lying Loser” in Bosnian. To add another layer of playful deception, his photo was, of course, AI-generated. The acknowledgments section of these fake papers thanked “Professor Sideshow Bob” and even a professor from the Starfleet Academy, with access granted to a lab aboard the USS Enterprise. This wasn’t subtle; this was practically a clown parade, audacious and undeniable in its fictionality.

You might think such obvious jokes would be a dead giveaway, but Osmanovic Thunström clarifies that their experiment wasn’t about a simple “gotcha” on AI. Instead, it was a profound reflection on something far more human. She told The Post that the real target wasn’t AI’s intelligence, but “rather a reflection of how humans have forgotten to be skeptical when presented information.” The very name “bixonimania” was chosen for its ridiculousness, specifically to signal to any actual medical professional that it was a fabrication. “No eye condition would be called mania — that’s a psychiatric term,” she explained, highlighting the fundamental absurdity that should have been a red flag. Yet, despite these glaring clues, the AI chatbots – ChatGPT, Google’s Gemini, Microsoft’s Copilot, and others – happily swallowed the nonsensical bait. They began to dish out serious-sounding medical advice about bixonimania, warning users about pinkish eyelids, blue-light damage, and even urging them to see an ophthalmologist for this entirely imaginary condition. Their confidence in their manufactured “knowledge” was unflappable, reflecting information back to users as if it were established medical fact.

The ramifications of this experiment extended far beyond the digital realm of chatbot conversations. The fabricated disease, bixonimania, began to leak into the wider informational ecosystem. Blog posts explaining bixonimania mysteriously appeared on platforms like Medium. And in a truly astonishing turn of events, the fake papers, complete with their fictional authors and ridiculous acknowledgments, even started to get cited in legitimate, peer-reviewed literature. Articles about a disease that never existed, based on studies that were clearly a joke, popped up on academic sites and social networks like SciProfiles. This wasn’t just AI falling for a prank; it was a demonstration of how quickly misinformation, even when intentionally absurd, can spread and gain a veneer of credibility within our interconnected information landscape. Eventually, the hilarious yet sobering experiment was exposed by Nature magazine, pulling back the curtain on this audacious scientific prank.

The internet, naturally, had a field day. Social media erupted with a mixture of amusement and genuine concern. “OMG. NOT good,” exclaimed one commenter on X, expressing the widespread unease. Another warned, “That is not the only disease they made up,” hinting at the broader problem of AI-generated misinformation. A third commenter playfully referenced “turbocancer,” another fictitious ailment that has previously circulated online, highlighting the recurring nature of such hoaxes. While the online world was busy debating the implications, real-world doctors found themselves on the front lines of a new challenge. Dr. Darren Lebl, a medical professional, aptly noted the growing trend of patients arriving at appointments armed with chatbot-generated “diagnoses,” ready to challenge medical professionals with information that, in reality, might have been invented mere minutes before by an unsuspecting AI. This influx of potentially fabricated “medical knowledge” presents a significant hurdle for healthcare providers trying to deliver accurate and effective care.

Despite the shocking revelations, Almira Osmanovic Thunström still believes that Large Language Models (LLMs) have a place in medicine. The caveat, however, is clear: they must be integrated with extreme caution and critical oversight. Tech companies, for their part, have begun to respond. A Microsoft spokesperson stated that “Copilot is designed to be a safe and helpful tool for advice, feedback, general information, and creative help. It is not a substitute for professional medical consultation … we remain committed to continuous improvement of our AI technologies.” Similarly, an OpenAI spokesperson emphasized the extensive work their team has done, involving “hundreds of clinician advisors to stress-test the models powering ChatGPT, identify risks, and improve how they respond to health questions.” They also suggested that “studies conducted before GPT-5 reflect capabilities that users would not encounter today.” Google, interestingly, did not respond to requests for comment. This episode serves as a powerful reminder that while AI offers incredible potential, it also demands our discernment and an unwavering commitment to critical thinking. The bixonimania experiment wasn’t just a prank; it was a potent warning shot, reminding us to always question, always verify, and never blindly entrust our well-being to algorithms, no matter how sophisticated they may seem.

Copyright © 2026 Web Stat. All Rights Reserved.