Web Stat
Disinformation

OpenAI’s Sora Makes Disinformation Extremely Easy and Extremely Real

By News Room · April 25, 2026 · 5 min read

Imagine a world where what you see isn’t necessarily what’s real. A world where a simple text prompt can conjure up a video so convincing, it makes you question everything. That’s the powerful and, frankly, somewhat terrifying reality that a new app called Sora, from the creators of ChatGPT, has ushered in. In just its first few days, users were already creating incredibly lifelike videos of things that never happened: ballot fraud, immigration arrests, protests, even crimes and attacks on city streets. It’s like having a Hollywood special effects studio at your fingertips, but without any of the ethical oversight. You can even upload your own image and voice, placing yourself into imaginary scenarios, or bring deceased celebrities back to life in digital form. While the thrill of such creative power is undeniable, experts are sounding the alarm, warning that Sora and similar tools could become dangerous breeding grounds for misinformation and abuse.

For years, we’ve been grappling with the challenge of distinguishing real from fake online, from doctored photos to cleverly written hoaxes. But Sora takes this to a whole new level. Its ability to generate hyper-realistic videos makes it incredibly easy to produce content that’s not just misleading, but outright fabricated. And these aren’t just silly, obviously fake videos; they’re shockingly convincing. Think about the potential consequences: a video circulating of a conflict escalating, consumers being defrauded by believable but false product claims, elections being swayed by manufactured narratives, or even innocent people being framed for crimes they didn’t commit, all based on something that never truly happened. Hany Farid, a computer science professor at UC Berkeley, perfectly captures this anxiety, expressing deep worry for consumers, for our democracy, our economy, and our fundamental institutions. The very fabric of trust in what we see and hear is at stake.

OpenAI acknowledges these concerns, stating they’ve released Sora after extensive safety testing and have implemented certain safeguards. They’ve put in place usage policies that forbid misleading others through impersonation, scams, or fraud, and claim to take action when misuse is detected. The New York Times, in their own tests, found that Sora did refuse to generate imagery of famous people without their permission and declined prompts asking for graphic violence. It even said no to some political content. OpenAI themselves, in a document accompanying Sora’s debut, acknowledged the “important concerns around likeness, misuse, and deception” that such hyperrealistic video and audio capabilities raise, emphasizing a “thoughtful and iterative approach in deployment to minimize these potential risks.” It sounds reassuring on the surface, a responsible approach to a powerful technology.

However, these safeguards aren’t as foolproof as one might hope. For instance, Sora, currently an invitation-only app, doesn’t require users to verify their accounts, meaning someone could easily sign up with a fake name and profile. While tests showed it rejected attempts to create AI likenesses of famous people from uploaded videos, it surprisingly had no issue generating content featuring children or long-dead public figures like Martin Luther King Jr. and Michael Jackson. And while it wouldn’t create videos of President Trump or other world leaders, a request for a political rally with attendees “wearing blue and holding signs about rights and freedoms” inexplicably resulted in a video featuring the unmistakable voice of former President Barack Obama. These inconsistencies highlight the inherent challenges in fully controlling the outputs of such a sophisticated AI, revealing cracks in the protective barriers designed to prevent misuse.

Until now, despite the ease of editing photos and text, videos had a certain weight as evidence of actual events. That final bastion of credibility is now crumbling. Sora’s high-quality videos introduce a terrifying “liar’s dividend,” where exceptionally realistic AI-generated content can lead people to dismiss authentic content as fake. Even with a moving watermark identifying Sora videos as AI creations, experts warn that these can be removed with relative ease. Lucas Hansen, founder of CivAI, a nonprofit studying AI’s dangers, laments that “almost no digital content can be used to prove that anything in particular happened.” This erosion of trust is exacerbated by the way content is often consumed – in fast, endless scrolls, where quick impressions trump rigorous fact-checking. This environment is ripe for the spread of propaganda and sham evidence, making it terrifyingly easy to fuel conspiracy theories, falsely implicate innocent individuals, or inflame already volatile situations.

The implications are far-reaching and deeply unsettling. While Sora might refuse overtly violent prompts, it readily depicted convenience store robberies and home intrusions, even creating videos of bombs exploding on city streets – content that, though fictional, can easily mislead the public about real-world conflicts. In an age where fake footage has already saturated social media during previous wars, Sora elevates the risk by allowing tailor-made, highly persuasive content to be delivered by sophisticated algorithms to receptive audiences. Kristian J. Hammond, a professor at Northwestern University, rightly points out that this only amplifies the “balkanized realities” we already live in, where individuals are fed content that reinforces their existing beliefs, even if those beliefs are false. Even experts like Dr. Farid, who has dedicated his company to spotting fabricated images, now struggle to distinguish real from fake at first glance. Once able to identify artifacts that confirmed his visual analysis, he admits, “I can’t do that anymore.” This admission from a leading expert is a stark, chilling reminder of the profound shift Sora represents, forcing us all to re-evaluate how we perceive the digital world and the fragile foundation of truth within it.
