Machines spot deepfake pictures better than humans, but people outperform AI in detecting deepfake videos

By News Room | February 25, 2026 | Updated: March 31, 2026 | 7 Mins Read

Oh, the digital world is getting wilder by the day, isn’t it? It feels like we’re constantly playing a game of “is it real or is it Memorex?” (showing my age, maybe!), especially with all these super-realistic fake faces and videos popping up online. It’s like a magician’s trick, but instead of rabbits from hats, it’s entire digital personalities being conjured out of thin air, and sometimes it’s hard to tell whether you’re looking at a real person or just a clever illusion.

Recently, some clever folks at the University of Florida – a team of psychologists and computer scientists, no less – decided to really dig into this. They wanted to see who was better at spotting these fakes: us humans or the super-smart AI programs we’re building. And wouldn’t you know it, the results were a bit of a head-scratcher, showing us where we shine and where we still have some learning to do.

Let’s start with still pictures. Imagine a photo of someone’s face – looks perfectly normal, right? Well, it turns out that AI programs are incredibly good at sniffing out the fakes here. We’re talking a whopping 97% accuracy! They’re like digital bloodhounds, meticulously analyzing every pixel, every subtle imperfection that gives away a deepfake. Human participants, on the other hand? Not so much. When faced with these static images, we were basically guessing. It’s like the AI has a magnifying glass for every last detail, while we humans are just squinting. My own mother, God bless her, probably wouldn’t know a deepfake from a genuine smile in a photo, and it seems many of us are in the same boat when AI is involved. It just goes to show how incredibly sophisticated these fake images have become, and how our eyes, wonderful as they are, just aren’t built to catch those tiny digital discrepancies.

But here’s where the plot thickens, and where we, the squishy, imperfect humans, get to puff out our chests a little. When it came to videos – those moving, talking, expressive deepfakes – the tables turned dramatically. Suddenly, the mighty AI programs, which were so brilliant at still images, struggled something fierce. They were essentially back to guessing, just like we were with the photos. One minute they’re Sherlock Holmes, the next they’re Inspector Clouseau.

We humans, however, proved to be surprisingly adept. We were correctly identifying real and fake videos about two-thirds of the time. Think about that for a second. We’re not perfect, but we were miles ahead of the algorithms. What was our secret sauce? The researchers, among them Dr. Brian Cahill, a psychologist on the team, believe it’s because videos offer a “richer context.” It’s not just a static image; it’s a dynamic performance. Our brains, honed over millions of years to understand and interpret human interaction, are incredibly good at picking up on those tiny, almost imperceptible inconsistencies. A slight jerk in movement that feels unnatural, a flicker in an expression that doesn’t quite match the words, a micro-pause in speech that throws off the rhythm – these are the subtle cues our brains latch onto. The AI, for all its computational power, just couldn’t quite put all those pieces together in a coherent, human-like way. It’s like trying to understand a complex dance from just a few still photos – you miss all the fluidity and the emotion.

This revelation is actually pretty important. As these deepfakes get more and more convincing, whether it’s a fake video of a politician saying something they never did, or a fabricated financial report, being able to tell real from fake is becoming less of a parlor trick and more of a critical skill for navigating our digital world. As Dr. Cahill so aptly put it, “The significant decisions that are made by individuals and governments need to be based on real and accurate information. We need to know if people can tell what’s real or not as the technology gets more sophisticated at fooling us.” Imagine a world where you can’t trust your own eyes when watching a news report or a viral video. It’s a scary thought, isn’t it?

The team, including brilliant minds like Dr. Didem Pehlivanoglu and Dr. Mengdi Zhu from the Florida Institute for National Security, and the study’s senior author, Dr. Natalie Ebner, meticulously crafted a huge collection of real and fake images and videos. They had static faces, talking faces – the whole gamut. Then, thousands of people were brought in to play the “reality judge.” After that, the very same images and videos were fed to the AI algorithms. It’s a really thorough way to compare apples to apples, or in this case, human brains to silicon brains.
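To make that apples-to-apples comparison concrete, here’s a toy sketch in Python. It is purely illustrative – not the study’s actual code, and the judgment labels below are invented – but it shows the basic idea: score the human judges and the algorithm against the same ground-truth labels on the same stimuli, so the two accuracy figures are directly comparable.

```python
# Toy illustration (hypothetical data, not from the study): scoring humans
# and a model on the same stimuli so their accuracies are directly comparable.

def accuracy(predictions, truth):
    """Fraction of judgments that match the ground-truth labels."""
    return sum(p == t for p, t in zip(predictions, truth)) / len(truth)

# Invented labels for six video clips: True = "fake", False = "real".
ground_truth    = [True, False, True, True, False, False]
human_judgments = [True, False, False, True, False, True]   # 4/6 correct
model_judgments = [False, True, True, False, True, False]   # 2/6 correct

print(f"human accuracy: {accuracy(human_judgments, ground_truth):.2f}")
print(f"model accuracy: {accuracy(model_judgments, ground_truth):.2f}")
```

Scoring both judges on an identical stimulus set is what lets the researchers say one side “beat” the other, rather than comparing results from two different test sets.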

The bottom line from all this research is quite clear: if you’re dealing with a still photo, trust the machines. They’ve got us beat. But when things start moving and talking, when there’s a dynamic human element, that’s where our instincts and our finely tuned human perception give us an edge. Cahill admitted he was surprised that humans outperformed the AI on videos. “But the videos have more cues, it’s a richer context. There’s more stuff for the human brain to pick up on,” he explained. Our brains are truly remarkable pattern-recognition machines, especially when it comes to social cues.

What’s even more fascinating is how our own personal quirks and mental states influenced our deepfake-detecting abilities. Unsurprisingly, people who scored high on analytical thinking – those folks who love a good puzzle and critically analyze everything – and those with strong internet skills were better at spotting the AI-generated videos. They’re probably the ones who read the fine print and double-check sources, so it makes sense. But here’s the kicker: people who reported being in a good mood actually performed worse! It might be because when we’re feeling positive and happy, we tend to be more trusting, more optimistic, and less inclined to question things. It’s a reminder that our emotions, as wonderful as they are, can sometimes be a bit of a blind spot when navigating a world full of digital trickery. Maybe a healthy dose of skepticism isn’t such a bad thing after all, especially after a particularly joyous day.

Of course, like any good scientific study, there are always caveats. This research was done in a controlled environment, using specific types of faces and videos. The real world, with its endless variety of online content, is far more complex. And let’s not forget that both deepfake technology and AI systems are evolving at lightning speed. What’s true today might not be true tomorrow. The balance of power between humans and machines in this digital arms race is constantly shifting.

The sobering reality, as the authors wisely point out, is that we all need to be more vigilant. The fake stuff online is getting so good that we can’t just passively consume information anymore. We have to become active, critical consumers. We don’t all need to be deepfake detection experts. As Dr. Zhu reminds us, “But we do need to stay alert, question what we see and look for evidence to support it.” It’s about cultivating a healthy dose of skepticism, developing that critical eye, and remembering that just because something looks real, doesn’t mean it is. In essence, it’s about using our distinctly human intelligence to navigate a very un-human digital landscape, and never forgetting to ask: “Is this for real, or am I just getting fooled?”

Copyright © 2026 Web Stat. All Rights Reserved.