AI fact‑checking works, but mostly for progressives | CU Boulder Today

By News Room · April 17, 2026 · 5 Mins Read

In our increasingly digital world, where news travels at the speed of light and misinformation can spread like wildfire, a new challenge has emerged: how do we tell what’s true from what’s not? Social media platforms, the primary battlegrounds for this information war, are increasingly turning to artificial intelligence (AI) to help them spot and flag false news. But a recent study sheds light on a rather fascinating and, perhaps, unsurprising truth: these high-tech fact-checkers don’t work the same for everyone. It turns out our political leanings play a significant role in how we perceive and trust these efforts, whether they come from a complex algorithm or a diligent human.

Imagine this: you’re scrolling through your social media feed, and you stumble upon a news story. Maybe it’s about climate change, vaccines, or immigration – topics that often ignite heated debates. Beside it, there’s a little flag or note indicating that the story has been fact-checked. The crucial question is, whose word do you trust more: a sophisticated AI system, an expert human, or your gut feeling about the news source itself?

Researchers from the Leeds School of Business, Northeastern University’s D’Amore‑McKim School of Business, and Temple University’s Fox School of Business dove into this very question. They weren’t just interested in which is better, but in how people decide whom to trust. They conducted two extensive online experiments in the U.S. and U.K. during the politically charged news cycles of 2020 and 2022, gathering insights from 370 active social media users. Their findings? While AI fact-checkers generally made people less likely to believe false news, the effect was most pronounced among progressive users. Conservatives, on the other hand, reacted much the same to both AI and human fact-checking, often prioritizing the reputation of the news source above all else. This suggests a fundamental divide in how different political groups approach information validation and trust.

Jason Thatcher, a professor of information systems at the Leeds School of Business and a co-author of the forthcoming paper in MIS Quarterly, explains this fascinating dynamic with an interesting observation: “People that are conservative trust humans because they’re predictable, they’re reliable, they’re familiar, whereas perhaps progressives trust the technology.” Think about it: for some, the tangible presence of a person, with their experience and potential accountability, offers a sense of comfort and credibility. For others, the perceived objectivity and computational power of an AI system might be more convincing. This isn’t just about facts; it’s about the deeply ingrained ways we build trust, shaped by our individual worldviews and political affiliations. This divide, Thatcher points out, is key to understanding why an AI fact-checker might hit the mark for some users but completely miss it for others. It’s a human element in the cold, logical world of algorithms.

To really get under the hood of this phenomenon, the research team, including Guohou Shan and Sunil Wattal, meticulously designed their experiments to mimic real-world social media interactions. They showed participants news posts carefully crafted to look like genuine content from platforms like Facebook or Reddit. These posts delved into those thorny, polarizing issues mentioned earlier, and critically, included a mix of both true and false information – just like the messy feed we all encounter daily. The researchers then cleverly introduced variations: some posts were flagged by an AI, some by a human fact-checker, and some not at all. They also manipulated the perceived reputation of the news source, ranging from highly credible to less trustworthy. Finally, and crucially, they asked participants about their political leanings, categorizing them as progressive or conservative. This allowed them to analyze how these different variables intersected with people’s political identities and ultimately influenced their beliefs and willingness to share information.

After each post, participants were asked a series of questions: How believable was this? Would you share it? Comment on it? This meticulous approach, replicated across both the U.S. and U.K. during different news cycles, ensured that the findings weren’t a fleeting snapshot but were robust and consistent across varied cultural and political landscapes. The overarching revelation was clear: while AI fact-checkers did a better job than humans at making people less likely to believe false news, this success was largely confined to progressive users. Conservatives consistently showed little difference in their response to AI versus human fact-checks, instead placing more weight on the source of the news itself. This finding is profound because it highlights that trustworthiness isn’t a universal concept; it’s deeply personal and politically charged. The study also revealed that the challenge of fact-checking escalates when false claims originate from well-known or trusted sources – a common tactic of purveyors of disinformation – especially when human fact-checkers are involved.

Ultimately, as Thatcher eloquently puts it, fighting misinformation isn’t just about “getting the facts right.” It’s fundamentally about trust. “One fact-checking system is probably not going to work for everyone,” he concludes. This realization is incredibly important for social media platforms and policymakers. It means there’s no magic bullet, no single algorithm or human team that can universally solve the misinformation crisis. Instead, the solution likely lies in a multifaceted approach: offering multiple ways of presenting evidence, encouraging critical thinking about information sources, and empowering individuals to arrive at their own conclusions. In a world awash with information, the future of truth-telling might just depend on understanding human psychology and political identity as much as, if not more than, the advanced capabilities of artificial intelligence.
