Web Stat

Understanding how users identify health misinformation in short videos: an integrated analysis using PLS-SEM and fsQCA

By News Room · April 9, 2026 · 7 Mins Read

Navigating the Digital Health Maze: How We Spot False Information in Short Videos

In today’s fast-paced digital world, platforms like TikTok and YouTube Shorts have become our go-to for almost everything, including health information. With just a few taps, we can find videos explaining everything from new diets to miracle cures. It’s a double-edged sword: while these short videos make complex health topics easier to grasp and more accessible, they also open the floodgates to a lot of misleading or even outright false information. Imagine a world where anyone can create and share health advice, backed by nothing but a catchy tune or a confident smile. Add to that the rise of AI-generated content, capable of churning out convincing but fake health videos at an alarming rate, and you’ve got a recipe for confusion and potential harm. This situation highlights a critical question: how do ordinary people, like you and me, figure out what’s true and what’s false when scrolling through endless health-related short videos?

The dangers of health misinformation aren’t just theoretical; they’re very real. False health claims can lead us to make poor choices about our well-being, erode our trust in legitimate medical advice, and even become dangerous during public health crises, as we saw with the COVID-19 pandemic. Studies have repeatedly shown that social media, especially short-video platforms, are rife with inaccurate health information, from unproven treatments for serious illnesses to oversimplified medical advice that could do more harm than good. This isn’t just about harmless internet fads; it’s about real people making real decisions based on what they see online. So, understanding how we, as users, evaluate the trustworthiness of these videos is paramount. It’s not just about what catches our eye, but how our brains process these quick bursts of information and decide what to believe.

Researchers have explored many factors that influence our susceptibility to misleading health information. For instance, our personal traits, like how good we are at critical thinking or how much we know about health, play a huge role. Our emotions and desire for easy answers can also make us more vulnerable to compelling but false stories. Beyond our individual minds, the content itself—its storytelling, the perceived credibility of the source, and even the video’s production quality—all weigh into our judgments. And let’s not forget the “social proof”: the number of likes, shares, or comments on a video can subconsciously influence us, sometimes more than actual facts. This suggests that judging credibility isn’t a simple, straightforward process; it’s a mix of careful thought and quick, intuitive reactions, much like how we often make decisions in everyday life.

However, despite these insights, there were still some big questions lingering. Most previous research looked at health misinformation in text-based social media or online communities, not the unique, visually-driven world of short videos. Plus, many studies treated factors in isolation, ignoring the complex interplay between them. In reality, our judgments are shaped by a combination of our personal biases, the video’s characteristics, and the social buzz around it, all at once. And while theories can explain how we process information, they don’t always translate into practical strategies for platforms and policymakers to actually tackle the problem. To fill these gaps, our study dove deep, using a combination of in-depth interviews, surveys, and advanced statistical modeling to uncover not only what factors influence our ability to spot health misinformation but also how these factors work together in different ways for different people.

We wanted to answer two main questions: First, what are the key things that help or hinder users in identifying health misinformation in short videos? And second, how can these different factors combine to create effective strategies for platforms and governments to address the spread of false health content? Our journey began with “listening” to people. We conducted extensive interviews with 47 individuals, ranging from young adults to seniors, asking them about their experiences with health short videos. We didn’t just ask them to recall past incidents; we actually showed them real health videos – some accurate, some misleading – and observed their thought processes as they decided what to trust. This allowed us to gather rich, qualitative data about how people truly think and feel when confronted with health information online. We meticulously analyzed over 150,000 words of interview transcripts, using a technique called grounded theory to identify recurring themes and patterns in their decision-making.
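The grounded-theory step described above boils down to attaching codes to interview excerpts, then grouping and tallying them into higher-level categories. The sketch below illustrates that tallying step with purely hypothetical codes and participant IDs (the study's actual codebook is not reproduced here); it is a minimal illustration of axial coding's bookkeeping, not the study's analysis pipeline.

```python
from collections import Counter

# Hypothetical open codes attached to interview excerpts during analysis;
# participant IDs and code names are invented for illustration.
coded_excerpts = [
    ("P03", ["checks_source", "social_proof"]),
    ("P11", ["social_proof", "emotional_appeal"]),
    ("P21", ["checks_source", "logic_check"]),
    ("P32", ["logic_check", "checks_source"]),
]

# Axial coding groups open codes under broader categories (here, the three
# buckets the study reports: information quality, user characteristics,
# external environment).
categories = {
    "checks_source": "information quality",
    "logic_check": "information quality",
    "emotional_appeal": "user characteristics",
    "social_proof": "external environment",
}

# Tally how often each open code appears, then roll the counts up
# into their parent categories.
code_counts = Counter(code for _, codes in coded_excerpts for code in codes)
category_counts = Counter(categories[code] for code in code_counts.elements())

print(code_counts.most_common())
print(category_counts)
```

At real scale this runs over thousands of coded excerpts, but the rollup logic is the same: frequent codes surface the recurring themes, and the category totals suggest which bucket dominates participants' reasoning.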

What we found was fascinating. The factors influencing our ability to discern health misinformation fell into three main buckets: the quality of the information itself, our personal characteristics as users, and the external environment surrounding the video. Within information quality, we looked at things like whether the information was reliable, if it made logical sense, how it was presented (narrative expression), and its overall structure. For user characteristics, we considered our psychological needs and our cognitive ability or health literacy. And for the external environment, we focused on how social cues like likes, comments, and shares affected our judgment.

Our quantitative analysis, using data from 279 survey participants, confirmed that clarity and logical consistency of content, along with a good video structure, significantly help us spot misinformation. However, some surprises emerged: a highly polished or professional-sounding narrative style actually made it harder for people to discern truth from falsehood, suggesting that a slick presentation can sometimes mask misleading content. Our personal cognitive ability also played a crucial role, while our emotional needs, surprisingly, didn’t directly improve our ability to identify misinformation, even though they might make us more willing to believe something. The external environment, meaning what others are saying and doing with the video, also significantly influenced our judgments.
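PLS-SEM, the method named in the title, estimates paths between latent constructs (like "logical consistency" and "cognitive ability") that are each measured by several survey items. The sketch below is a deliberately simplified stand-in: it builds equal-weight composite scores from hypothetical Likert items and estimates standardized paths by ordinary least squares, whereas real PLS-SEM iteratively estimates the indicator weights. All data here are synthetic; only the sample size (279) comes from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 279  # matches the survey sample size reported above

# Hypothetical 5-point Likert indicators, three items per construct.
logic_items = rng.integers(1, 6, size=(n, 3)).astype(float)  # logical consistency
cog_items = rng.integers(1, 6, size=(n, 3)).astype(float)    # cognitive ability

# Composite scores: mean of each construct's indicators. (PLS-SEM instead
# estimates indicator weights iteratively; equal weights are a simplification.)
logic = logic_items.mean(axis=1)
cog = cog_items.mean(axis=1)

# Synthetic outcome: discernment driven by both composites plus noise.
discern = 0.4 * logic + 0.3 * cog + rng.normal(0, 0.5, n)

def z(v):
    """Standardize a vector to mean 0, standard deviation 1."""
    return (v - v.mean()) / v.std()

# Estimate the structural paths on standardized variables.
X = np.column_stack([z(logic), z(cog)])
beta, *_ = np.linalg.lstsq(X, z(discern), rcond=None)
print("path coefficients (logic, cognitive ability):", beta.round(2))
```

With real questionnaire data the sign and size of each path coefficient is what supports (or fails to support) a hypothesis like "logical consistency helps users spot misinformation"; the study's reported effects, including the negative effect of a polished narrative style, come from this kind of path estimate.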

Crucially, our study went beyond simply identifying individual factors; we explored how they interact and combine in different ways for different people. We used a special method called fuzzy-set qualitative comparative analysis (fsQCA) to uncover these “causal configurations.” We discovered three distinct pathways people take when evaluating health short videos, largely mirroring the Elaboration Likelihood Model (ELM), a well-known theory in psychology. First, there’s the “primarily analytical evaluation” pathway, typical of users with higher cognitive abilities. These individuals tend to deeply analyze a video’s content, focusing on its logic, narrative, and structure. Second, we identified the “peripheral reliance on content cues,” where users, who might have moderate cognitive levels, lean heavily on superficial aspects like a professional-looking presentation or an engaging narrative style, even if the underlying logic is weak. Finally, there’s the “peripheral reliance on cognitive cues,” where users, perhaps with lower cognitive abilities, are more swayed by external factors like social endorsement (lots of likes or positive comments) and the general presentation, using these as shortcuts to assess credibility. These findings underscore that there’s no “one-size-fits-all” approach to misinformation discernment.
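fsQCA works by calibrating raw scores into fuzzy-set memberships (0 to 1), combining conditions with fuzzy AND (the minimum), and then checking how consistently a configuration leads to the outcome. The sketch below implements that core arithmetic with the standard consistency and coverage formulas on invented respondent scores; the anchors (1 = fully out, 3 = crossover, 5 = fully in) and the four respondents are assumptions for illustration, not the study's data.

```python
import math

def calibrate(x, full_out, crossover, full_in):
    """Direct calibration: map a raw score to fuzzy-set membership in [0, 1]
    via a logistic transform, with the full-membership anchors placed at
    log-odds of +3 and -3."""
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_in - crossover)
    else:
        log_odds = -3.0 * (crossover - x) / (crossover - full_out)
    return 1.0 / (1.0 + math.exp(-log_odds))

def consistency(condition, outcome):
    """Share of condition membership overlapping the outcome:
    sum(min(X, Y)) / sum(X)."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

def coverage(condition, outcome):
    """Share of the outcome explained by the condition:
    sum(min(X, Y)) / sum(Y)."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(outcome)

# Hypothetical 5-point survey scores for four respondents.
cognitive = [calibrate(s, 1, 3, 5) for s in [5, 4, 2, 5]]
logic = [calibrate(s, 1, 3, 5) for s in [5, 5, 2, 2]]
discern = [calibrate(s, 1, 3, 5) for s in [5, 4, 2, 1]]

# Configuration "high cognitive ability AND high logical consistency"
# (akin to the analytical-evaluation pathway): fuzzy AND is the minimum.
config = [min(c, l) for c, l in zip(cognitive, logic)]
print("consistency:", round(consistency(config, discern), 2))
print("coverage:", round(coverage(config, discern), 2))
```

In a full analysis this is repeated over every configuration of conditions, and only configurations above a consistency threshold (commonly around 0.8) are read as pathways; the three pathways described above are configurations of exactly this kind.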

These insights are incredibly valuable for developing practical strategies to combat health misinformation. Our research suggests a multi-layered approach, involving individuals, platforms, and policymakers. At the individual level, it’s about empowering different types of users: providing advanced analytical tools for those who already think critically, offering personalized education and literacy training for those who could benefit from more guidance, and creating safer, more curated information environments for those most susceptible to peripheral cues. For platforms, the responsibility lies in leveraging technology. This means developing advanced AI to automatically detect misleading content, prioritizing high-quality health information in recommendation algorithms, and fostering responsible comment sections where rational discussions and expert insights are highlighted over sensationalism. At the highest level, governments and regulatory bodies have a role in setting clear industry standards for health content, holding content creators and platforms accountable, and integrating health information literacy into national education systems. By working together across these levels, we can create a more informed and resilient digital health ecosystem, helping everyone navigate the complex world of online health information with greater confidence and accuracy.

While our study offered significant contributions, it’s important to acknowledge that our findings are primarily based on a Chinese context and self-reported user data, suggesting avenues for future research to expand to diverse cultural settings and incorporate objective behavioral data.
