St. Pete woman accused of using AI to create fake suspect

By News Room | November 3, 2025 | Updated: April 19, 2026 | 5 Mins Read

The rapid advancement of artificial intelligence presents a double-edged sword for society. While AI offers incredible potential for good, its accessibility, low cost, and the current lack of stringent regulation create fertile ground for misuse by individuals with ill intent. The FBI has taken note, highlighting the urgent need for awareness and vigilance as law enforcement grapples with these unprecedented challenges. Criminals are quick to exploit new technologies, and AI is no exception. But as one recent case in the Tampa Bay area vividly illustrates, authorities are also learning to spot the fakes and navigate this evolving digital landscape, showing that while the tools may be new, the principles of justice remain constant.

One such unsettling incident involved Brooke Schinault of St. Petersburg, who found herself entangled with the law after allegedly attempting to deceive police with AI-generated evidence. On October 7th, Ms. Schinault contacted St. Petersburg police, reporting a home invasion and a physical assault, and provided images as "proof." Initially, the police took down her information and began their investigation. Later that same day, however, Ms. Schinault called back and added a more severe claim: she had also been sexually assaulted. This escalation, particularly because her initial report had omitted such a serious accusation, raised immediate red flags for the detective assigned to her case. Everything changed when the detective scrutinized the provided photographs. Her familiarity with emerging digital trends let her recognize immediately that something was amiss with the image Ms. Schinault presented, and this seemingly small detail unveiled a much larger, and more troubling, deception.

Court documents later revealed the extent of Ms. Schinault's alleged scheme. Investigators discovered digital evidence showing that the "suspect" photo she provided had been created using ChatGPT days before the reported incidents supposedly took place. Police explained that Ms. Schinault apparently leveraged a recent TikTok trend in which people use AI to insert figures into photos of their living spaces, often as a playful prank. In this instance, what started as a seemingly harmless online fad turned into a serious criminal offense: Ms. Schinault allegedly used the AI trick to fabricate a non-existent suspect in an attempt to manipulate a police investigation. St. Petersburg police confirmed they had encountered nothing like it before, underscoring the novelty and potential danger of such AI misuse. As Ashley Limardo, a public information specialist with the St. Petersburg Police Department, grimly noted, such actions are "very dangerous," because they could lead to innocent individuals being wrongly implicated while diverting precious law enforcement resources. Ms. Schinault was subsequently arrested and charged with two counts of false reporting of a crime; she is currently out on bond. The case serves as a stark reminder that even seemingly harmless online trends can have severe real-world consequences when misused.

Dr. John Licato, a professor at the University of South Florida’s Bellini College of AI, Cybersecurity, and Computing, reflected on the broader implications of such incidents. He highlighted that while technology evolves, the presence of malicious actors remains constant. Dr. Licato, upon hearing Ms. Schinault’s story, pondered the motivation behind such an act, emphasizing the need for greater “AI literacy” among the general public. He argued that it’s crucial for people to understand the capabilities of AI to better arm themselves against potential deception. The St. Petersburg police, in this instance, demonstrated a form of AI literacy by recognizing the TikTok trend and understanding that AI could be used to create such fabricated images. This awareness, Dr. Licato stressed, is what enabled them to pivot the investigation away from a false lead. He further advised that people should actively engage with AI tools like ChatGPT themselves. This hands-on experience, he believes, is vital for understanding what is possible with the technology, thus making individuals less susceptible to being fooled and better equipped to identify misrepresentations.

Another unsettling case from Hillsborough County further highlights the darker side of AI misuse. Nineteen-year-old Sammarth Gautam was apprehended after transforming ordinary social media photos of fully clothed girls he knew into AI-generated nude images, which he then posted online. In an interrogation video, Gautam confessed to detectives that his actions stemmed from curiosity about AI's capabilities. "I know I shouldn't have, but I kind of got curious, and I just wanted to use the technology to see what it could do," he stated. Gautam faced 16 counts of promoting altered sexual depictions without consent and eventually accepted a plea deal, resulting in a 12-day jail sentence. Dr. Licato commented that while the underlying technology for such image manipulation isn't entirely new, its sophistication and accessibility through AI tools are accelerating rapidly. That acceleration, he noted, forces society to confront fundamental questions about acceptable use and the regulatory frameworks needed to protect individuals. He drew an analogy to vehicle regulations, suggesting that a balanced approach of restrictions and guidance is needed for new technologies like AI to ensure public safety and ethical use.

The cases of Brooke Schinault and Sammarth Gautam serve as potent human examples of the urgent need for a societal reckoning with artificial intelligence. These instances, though distinct in their motives and outcomes, underscore the vulnerability that arises when powerful technology meets human fallibility or malevolence. Ms. Schinault’s alleged attempt to fabricate a crime and Mr. Gautam’s violation of trust through AI-generated imagery highlight the critical importance of digital literacy, not just for law enforcement, but for every individual. As AI continues to evolve, the line between reality and simulation will become increasingly blurred, making it imperative for us to understand these tools, their potential, and their limitations. Without a collective effort to educate ourselves and develop robust ethical and legal safeguards, we risk a future where distinguishing truth from fiction becomes an ever-more challenging and dangerous endeavor, potentially upending the very foundations of trust and justice in our communities.
