It’s a scary thought: doctors across Canada are witnessing firsthand how information dished out by artificial intelligence is putting their patients’ health in jeopardy. A recent survey of 645 Canadian physicians revealed a near-unanimous concern: 97% of these doctors have had to step in and correct false or misleading health advice that patients picked up online, including from AI. This isn’t just a hunch; it’s backed by the Canadian Medical Association’s (CMA) 2026 Health and Media Tracking Survey, which found that people who followed AI-generated health advice were five times more likely to suffer negative consequences. Dr. Bolu Ogunyemi, CMA president-elect and a clinical associate professor of medicine, explains that we’re in a “rapid time of change” regarding how patients get health information, and that some of what they’re seeing amounts to “causes for concern.” He states that relying on AI for health decisions “can be harmful to patients.” The lure of AI is its speed and convenience, and a big reason patients turn to it is dwindling access to primary care. Dr. Ogunyemi notes that only about 27% of patients actually trust the health information AI provides. However, in a twist of practical desperation, many patients, lacking timely access to a family doctor or emergency care, feel that “some information may be better than no information,” leading them to scour the internet despite their reservations.
The Canadian Medical Association isn’t just wringing its hands; it’s actively engaging with government officials in Ottawa through its new Physician Advocacy Network to tackle this growing problem. Dr. Ogunyemi highlights the crucial role the federal government can play in reining in the harm caused by false online health information. He proposes the reintroduction of Bill C-63, also known as the Online Harms Act. This bill, designed to address harmful online material, could give the government the “teeth” needed to hold those spreading false health information accountable. He further points to the Controlled Drugs and Substances Act, which already allows for penalties against those selling ineffective or harmful medications, even when they’re promoted online rather than in person. Beyond legislative action, Dr. Ogunyemi stresses the urgent need to improve access to family physicians across Canada and to ease the immense pressures doctors currently face. He cites data showing that 85% of people consider their doctor their most trusted source of health information; yet, with one in six Canadians lacking a family doctor, it’s understandable why they seek information elsewhere. A comprehensive solution, he argues, involves expanding healthcare capacity through team-based care and making the work environment for family doctors more appealing, encouraging more medical professionals to choose this vital specialty.
Doctors, too, have a part to play in making trustworthy healthcare information more accessible. Dr. Ogunyemi shares details about the CMA’s “Healthcare for Real” program, an initiative on social media platforms like Instagram. Through the program, doctors offer engaging, digestible information about navigating the health system and finding reliable health sources. He emphasizes the importance of adapting their messages and mediums to reach patients effectively, ensuring trustworthy health information is readily available. However, Dr. Ma’n H. Zawati, an associate professor and research director at McGill University, warns that AI has significantly altered the landscape of medical misinformation, making it even more insidious. He explains that AI can generate seemingly “competent, authoritative answers” that are, in fact, incorrect or overly simplistic. For instance, he cautions that a chatbot might confidently dismiss a serious symptom as harmless or suggest an inappropriate medication dosage. Even more concerning, AI is capable of fabricating studies, recommendations, or scientific links that simply don’t exist. The core of the problem, Dr. Zawati identifies, is that “we’ve created a system that sounds like a doctor and does not know the patient.” This lack of context, such as ignoring a patient’s medical history or risk profile, is a critical flaw in AI-generated advice.
The consequences of this new wave of misinformation are profound, seeping into the daily routines of medical professionals. Dr. Zawati observes that doctors are no longer solely focused on diagnosis and treatment; they are increasingly spending valuable time correcting patients’ existing beliefs, many of which stem from online sources. This, he says, is no longer a “fringe issue” but is rapidly becoming “systemic.” He highlights that the danger isn’t just misinformation itself, but “misinformation that sounds credible and personalized,” a problem exacerbated by the existing fragmentation within healthcare systems. In an era where expertise seems undervalued, there’s a “false equivalence between validated data and something that is just spewed up from an influencer.” Adding another layer of complexity, AI responses can be inherently biased, depending on their training data. Dr. Zawati points out that AI datasets often overrepresent lighter skin tones, leading to less accurate diagnoses of melanoma on darker skin. Similarly, because a significant majority of genomic data comes from individuals of European descent, AI models may produce less reliable predictions for other populations. This bias can have life-threatening implications, as in the potential underdiagnosis of cardiovascular disease in women: many datasets focus on the “classic symptoms of chest pain” more common in men, while women often present with different indicators such as fatigue and nausea. Even mental health data can be skewed; AI trained on particular linguistic patterns may misinterpret how distress is expressed in other cultures, leading to misclassification or underestimation of symptoms in many individuals.
To help patients navigate this deluge of misleading information, Dr. Zawati offers practical advice for discerning credible health information from misinformation. First and foremost, he urges people to “check the source,” prioritizing information from “recognized health institutions such as hospitals and licensed professionals,” and to avoid anything anonymous or originating from influencer content. Secondly, he advises a healthy dose of skepticism: in the context of health and medicine, “anything that sounds absolute or too definitive, especially online, is often a red flag.” Good medical advice, he notes, is usually nuanced, a quality rarely found in a chatbot’s straightforward answer. Finally, he encourages individuals to view online tools merely as a “starting point.” Crucially, he stresses that online information should “never replace a conversation with a healthcare professional who knows their situation.” If something read online causes a patient to reconsider a treatment, that’s precisely the moment to discuss it with their doctor, not to act on it independently. The message from Canada’s medical community is clear: while AI offers undeniable convenience, its current output demands caution and critical evaluation, underscoring the irreplaceable value of human medical expertise and personalized care.

