AI Chatbots Give Misinformation Nearly 50% of the Time

By News Room | April 29, 2026 | 7 Mins Read

It’s truly exciting to see how artificial intelligence is beginning to transform so many aspects of our lives, and healthcare, especially oncology, is no exception. We’ve all heard the buzz about AI chatbots, and it’s easy to imagine them as an instant fount of medical wisdom. However, a recent study published in a medical journal has given us a crucial reality check. It turns out that many of the popular, mainstream chatbots most of us have access to – the ones like ChatGPT, Gemini, and Grok – are still quite a long way from being reliable sources for critical medical information. The researchers found that nearly half of the health answers these chatbots generated were “problematic.” This wasn’t just about small errors; we’re talking about misinformation, missing important context, or even confidently stated but incorrect explanations that could potentially be harmful if someone took them at face value. Imagine asking about a scary symptom and getting an answer that, while sounding official, completely misses the mark or even gives you bad advice. This highlights a significant challenge: while AI is incredibly powerful, generic AI tools aren’t yet equipped to handle the nuanced, life-and-death complexities of medical decision-making without specialized design and rigorous oversight. It’s a vivid reminder that when it comes to our health, accuracy and trustworthiness are non-negotiable.

Fortunately, this doesn’t mean all AI tools for health are created equal or that the future of AI in oncology isn’t bright. There are pioneers who are building these tools with a deep understanding of the stakes involved. Take, for instance, SurvivorNet’s “My Health Questions” platform. Unlike those general-purpose chatbots, this tool was specifically engineered to avoid the pitfalls identified in the study. Think of it as a highly trained medical assistant, not a general trivia bot. It’s built on a foundation of established clinical guidelines, rigorously backed by medically reviewed research, and crucially, supported by a network of leading oncologists across the country. This means when you ask a question on “My Health Questions,” you’re not getting a speculative answer from a vast, uncurated database. Instead, you’re receiving clear, trustworthy explanations of treatment options, potential side effects, and how to navigate the complex journey of cancer care. It’s designed to empower patients, caregivers, and even clinicians, giving them the confidence and understanding needed to make informed decisions and truly feel in control of their health journey. Christine Santasiero, a breast cancer survivor, and her sister, Lauren, who was her caregiver, perfectly encapsulated its value, calling it “the perfect second opinion” during Christine’s diagnosis – a testament to its reliability and empathetic design.

The recent study really laid bare the limitations of general generative AI chatbots when it comes to medical information. Researchers put several prominent systems – including Google’s Gemini, OpenAI’s ChatGPT, and xAI’s Grok – to the test, asking them 10 essential questions spanning a range of medical topics, from cancer and vaccines to stem cells, nutrition, and even athletic performance. The results were humbling for the AI world. A staggering 49.6% of the responses were found to be problematic, with a truly worrying 19.6% classified as “highly problematic” and potentially harmful if someone followed the advice at face value. Interestingly, no single chatbot emerged as a clear winner; all of them struggled in various areas, although Grok stood out for generating more highly problematic answers than expected. While they performed somewhat better on topics like vaccines and cancer, their explanations of stem cells, athletic performance, and nutrition were particularly weak. Adding to the challenge for patients, many of the answers were written at an advanced reading level, making them difficult for the average person to comprehend. And in a field where source credibility is paramount, citation quality was subpar, with references that were often incomplete or unverified. The clear takeaway is that, for now, we simply cannot rely on these general-purpose chatbots for medical guidance, especially on complex or evolving health topics where misinformation can be particularly dangerous.

This is precisely where “My Health Questions” by SurvivorNet shines, demonstrating a truly different approach. It wasn’t designed to be a jack-of-all-trades chatbot; it was meticulously built for the very specific realities and profound challenges of cancer care, addressing the needs of both patients and their caregivers. The tool handles both intricate clinical issues and the everyday logistical concerns that arise, with an accuracy, clarity, and level of personalization that generic chatbots can’t match. Imagine being able to create a tailored health profile, inputting details like your age, gender, and location. This isn’t just data collection; it allows the platform to refine and personalize its responses over time, making the information even more relevant to your unique situation. This reflects SurvivorNet’s core mission: to combine cutting-edge technology with the invaluable expertise of medical professionals to transform complex medical information into something accessible and actionable. What truly sets it apart is the human element: it’s rigorously doctor-supported. Leading oncology experts actively review and validate the information, ensuring it’s not only accurate and safe but also easy to understand. The goal isn’t to replace the invaluable role of clinicians; rather, it’s to empower patients, helping them arrive at appointments better prepared, armed with informed questions and a clearer understanding of their treatment journey. This fusion of AI efficiency with dedicated medical oversight is already demonstrating a profound impact on people facing cancer.

The real-world impact of “My Health Questions” is perhaps its most compelling endorsement. Consider the story of Dr. Maurice Franklin, a seasoned public health educator. When a routine PSA screening revealed elevated levels – a classic warning sign of prostate cancer – he found himself in a terrifying limbo between appointments. He turned to “My Health Questions” for clarity, and what he found was a tool that mirrored the safety-first, empathetic approach clinicians strive for. It didn’t just rattle off information; it started with a crucial human touch: “Have you had the chance to talk to your healthcare provider about these symptoms yet?” When Dr. Franklin, still anxious, confirmed he had, the tool acknowledged his fear, grounding its reassurance in evidence-based information. Then there’s Gabby Cooper, a Penn State graduate undergoing treatment for stage 2 Hodgkin lymphoma. She uses “My Health Questions” to steel herself for doctor visits. When complications like colitis raised concerns about her chemotherapy regimen, the tool helped her formulate precise, informed questions to bring directly to her oncologist, such as, “Should we consider any alternative regimens if colitis remains a problem despite supportive care?” Gabby noted how questions like these would be “super helpful” as she navigated her next appointments. These stories aren’t just anecdotes; they demonstrate how responsible AI, guided by medical expertise, can genuinely empower individuals facing cancer, instilling clarity, confidence, and a renewed sense of hope during incredibly challenging times.

Looking ahead, it’s clear that AI’s role in cancer care is only going to expand, moving beyond general chatbots to specialized, medically integrated applications. While services like SurvivorNet’s “My Health Questions” are already making waves with their accuracy and personalization, the broader landscape of AI in oncology is truly exciting. We’re seeing AI being used in radiology to better detect tumors, in pathology to analyze tissue samples more precisely, in crafting individualized treatment plans, and in providing crucial patient support. In 2024, the American Society of Clinical Oncology (ASCO) released six guiding principles for AI in oncology, underscoring the critical importance of transparency, equity, privacy, accountability, and the non-negotiable centrality of human clinical judgment. This isn’t about AI replacing doctors; it’s about AI augmenting their abilities. One of the most significant promises of AI is its potential to reduce disparities in care, particularly benefiting ethnically diverse patients. As Dr. Basak Dogan, Director of Breast Imaging Research at UT Southwestern’s Simmons Cancer Center, highlights, “Traditional models often perform poorly for patients from diverse backgrounds who may not know their full family history.” AI provides an objective assessment based on an individual’s biology, which can truly “democratize access to high-risk screening.” Dr. Beth Mittendorf, Chief of Multidisciplinary Oncology at Dana-Farber Cancer Institute, eloquently summarizes this future: “AI should help us identify risk earlier, tailor prevention more intelligently, and use specialist resources more effectively. The goal is not to replace clinical judgment, but to augment it.” It’s a vision where technology and human expertise converge to create a more equitable, efficient, and ultimately, more hopeful future for cancer patients worldwide.
