
AI chatbots frequently give inaccurate or incomplete health information: Study

By News Room | April 15, 2026 | 6 min read

The Risky Reality of AI Health Advice: A Wake-Up Call

Imagine a world where you could ask any health question, no matter how complex, and receive an instant, comprehensive answer. That is the promise of generative AI chatbots: digital assistants designed to understand and respond to our queries in natural language. From deciphering symptoms to exploring treatment options, these chatbots have rapidly permeated many corners of our lives, from research and marketing to medicine itself. Their accessibility and ease of use have led many people to treat them as a readily available source of information, almost a personal health encyclopedia. A recent and eye-opening study published in BMJ Open, however, casts a stark shadow on this seemingly convenient future. It found that the medical information these chatbots provide is often far from accurate and, in many cases, dangerously incomplete. This is not a matter of a few minor errors: nearly half of the chatbot responses exhibited “problematic” characteristics, often presenting a deceptive balance between scientifically proven facts and unverified, sometimes harmful, claims.

The researchers, a team that included experts from The Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center, defined a “problematic response” with a clear and unsettling criterion: any information that could realistically steer an ordinary person towards an ineffective treatment, or even cause harm, if followed without the oversight of a medical professional. This is not a hypothetical fear; it is a very real concern for patients seeking quick answers online. The rapid, widespread adoption of generative AI chatbots, with people increasingly relying on them as their primary search engines for health-related queries, creates a perilous landscape. Without concerted public education and robust oversight mechanisms, the continued unfettered deployment of these tools risks not only amplifying existing misinformation but also opening entirely new pathways for dangerous, unscientific advice to proliferate. The study serves as a critical warning: the allure of instant answers is strong, but the potential for serious health consequences from unchecked AI advice is a looming threat that must be addressed with urgency and responsibility.

To understand the scope of the issue, the researchers put five of the most widely used, publicly available generative AI chatbots to the test: Google’s Gemini, High-Flyer’s DeepSeek, Meta’s Meta AI, OpenAI’s ChatGPT, and xAI’s Grok. The experiment was no casual exploration; it was a carefully structured interrogation. Each chatbot was presented with 10 questions, a mix of open-ended and closed, systematically spread across five crucial health categories: cancer, vaccines, stem cells, nutrition, and athletic performance. These categories were chosen for their complexity, the prevalence of misinformation surrounding them, and their direct impact on public health. The prompts themselves were designed to mimic real-world scenarios: rather than standard medical questions, they were crafted to resemble the common “information-seeking” health queries a layperson might type into a search bar. They were also deliberately infused with language often found in online misinformation, and even in sophisticated academic discourse, to simulate the diverse and sometimes challenging nature of real-world inquiries.
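To make the shape of that test matrix concrete, here is a minimal sketch in Python. The chatbot and category names come from the article; the prompt texts, the two-per-category split, and the ask() stub are illustrative placeholders, not the study’s actual materials.

```python
from itertools import product

# Names taken from the article; everything else is a placeholder.
CHATBOTS = ["Gemini", "DeepSeek", "Meta AI", "ChatGPT", "Grok"]
CATEGORIES = ["cancer", "vaccines", "stem cells",
              "nutrition", "athletic performance"]

def build_prompts(category: str, n: int = 2) -> list[str]:
    # Stand-ins for the study's mix of open-ended and closed
    # information-seeking questions (10 total across 5 categories).
    return [f"[{category}] sample question {i + 1}" for i in range(n)]

def ask(chatbot: str, prompt: str) -> str:
    # Stub standing in for each vendor's real API call.
    return f"{chatbot} response to: {prompt!r}"

# One response per (chatbot, category, prompt) cell of the matrix.
responses = {
    (bot, cat, prompt): ask(bot, prompt)
    for bot, cat in product(CHATBOTS, CATEGORIES)
    for prompt in build_prompts(cat)
}
print(len(responses))  # 5 chatbots x 10 prompts = 50 responses
```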

The researchers went beyond simply asking basic questions; they aimed to “stress test” the AI models, intentionally pushing them to their limits to uncover potential vulnerabilities. This involved strategically phrasing prompts to “strain” the chatbots towards misinformation or even contraindicated advice – essentially, trying to coax them into providing bad guidance. The objective was not to trick the AI but to understand its resilience and susceptibility to being led astray, especially when presented with leading or subtly biased information, which is unfortunately common in the online health landscape. Following this rigorous questioning, the responses generated by each chatbot underwent a scrupulous evaluation by a panel of experts. The responses were meticulously categorized into three distinct levels: “non-problematic,” “somewhat problematic,” or “highly problematic.” This categorization wasn’t subjective; it was based on an objective, pre-defined set of criteria, ensuring consistency and fairness in the assessment.
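A rough sketch of how such a three-level rubric might be captured in code follows. The three labels are the study’s; the record fields and their names are assumptions made here for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Rating(Enum):
    # The three levels reported in the study.
    NON_PROBLEMATIC = "non-problematic"
    SOMEWHAT_PROBLEMATIC = "somewhat problematic"
    HIGHLY_PROBLEMATIC = "highly problematic"

@dataclass
class ExpertEvaluation:
    # Hypothetical record an expert panel member might fill in.
    chatbot: str
    prompt: str
    rating: Rating
    criterion: str  # which pre-defined criterion the rating rests on
```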

A crucial aspect of this evaluation was scoring the information in each response for both accuracy and completeness. It was not enough for an answer to be partially correct; it needed to be comprehensive and free of misleading omissions. A particular point of concern, and a key focus of the study, was whether the chatbots presented a “false balance” between science- and non-science-based claims: that is, whether the AI gave equal weight or credibility to scientifically validated facts and to unsubstantiated theories or unproven remedies, regardless of the evidence (or lack thereof) supporting each. This false balance is a particularly insidious form of misinformation, because it can make unscientific ideas appear equally legitimate to a lay user, leading them down potentially dangerous paths. The study’s methodology, then, was not just about identifying outright errors; it was about uncovering the subtle yet profoundly impactful ways in which AI chatbots can distort medical reality and inadvertently guide users towards harmful choices.
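As a sketch only, the scoring just described might combine into a decision rule like the one below. The 0-1 scales, the 0.8 thresholds, and the function names are invented for illustration and are not taken from the paper; the point is that a response can pass on raw accuracy yet still be flagged for false balance.

```python
from dataclasses import dataclass

@dataclass
class ResponseScore:
    accuracy: float      # assumed 0-1 agreement with scientific consensus
    completeness: float  # assumed 0-1, penalising misleading omissions
    false_balance: bool  # science and non-science claims given equal weight

def is_problematic(score: ResponseScore) -> bool:
    # A response can be factually accurate yet still problematic if it
    # omits key caveats or legitimises unproven claims beside evidence.
    return (score.accuracy < 0.8
            or score.completeness < 0.8
            or score.false_balance)

# Example: accurate and complete but falsely balanced -> still flagged.
print(is_problematic(ResponseScore(0.9, 0.9, True)))  # True
```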

In essence, this groundbreaking study serves as a critical alarm for anyone who interacts with AI for health information. It highlights a troubling disconnect between the perceived intelligence and reliability of these advanced tools and their actual performance when it comes to the nuances and critical importance of medical advice. The researchers’ findings underscore the urgent need for caution, skepticism, and robust educational initiatives to help the public navigate this new digital health landscape responsibly. Without immediate and collective action from AI developers, policymakers, and consumers alike, the convenience of instant answers from chatbots risks being overshadowed by the very real danger of misleading information, potentially undermining public health and leading individuals to make choices that could jeopardize their well-being. The promise of AI in healthcare is immense, but this study reminds us that its deployment must be accompanied by unwavering vigilance and a steadfast commitment to accuracy, completeness, and above all, patient safety.
