Web Stat

Stanford AI Expert’s Testimony Undermined by Fabricated AI-Generated Sources, Judge Rules

By News Room · January 15, 2025 (Updated: January 16, 2025) · 4 min read

AI Expert’s Testimony Tossed After Citing AI-Fabricated Research

In a case brimming with irony, a Stanford University professor who specializes in artificial intelligence and misinformation had his expert testimony dismissed by a federal judge after it emerged that he had inadvertently included fabricated information generated by an AI chatbot. Professor Jeff Hancock, founding director of the Stanford Social Media Lab, was retained by the Minnesota Attorney General’s office to provide expert testimony defending the state’s law criminalizing AI-generated “deepfake” election images. The lawsuit challenging the law was brought by a state legislator and a satirical YouTuber. Hancock’s reliance on an AI chatbot while preparing his declaration, however, introduced fabricated research and citations, undermining his credibility and ultimately sinking his testimony.

Minnesota District Court Judge Laura Provinzino remarked on the irony of the situation: Hancock, an expert on the dangers of AI and misinformation, had himself fallen victim to those very dangers. That Hancock has also published research on irony only amplified the peculiarity of the circumstances. The judge stressed the importance of verifying AI-generated content, warning that relying on such technology without critical thinking and independent judgment can harm both the legal profession and the court’s decision-making process.

The errors in Hancock’s declaration came to light when lawyers for the plaintiffs discovered a cited study that did not exist, attributed to fabricated authors and likely generated by an AI large language model such as ChatGPT. Hancock admitted to using ChatGPT 4.0 to aid his research, explaining that the errors likely arose from the chatbot misinterpreting the word "cite" as an instruction to generate fictitious citations. He accepted responsibility for the errors, which also included incorrect author attributions on legitimate research, and apologized to the court.

Judge Provinzino acknowledged Hancock’s qualifications as an expert on AI and deepfakes but stated that the inclusion of the fabricated information, despite Hancock’s explanation, irrevocably damaged his credibility. The judge emphasized the importance of reliability in expert testimony and pointed out the wasted time and resources incurred by the opposing party due to the flawed submission. While the Minnesota Attorney General’s office sought to submit a corrected version of Hancock’s testimony, the judge remained firm in her decision to dismiss it entirely.

The incident highlights a growing concern about the use of AI chatbots in professional settings, particularly in law. While these tools could transform legal practice, their tendency to generate false information, commonly called "hallucinations," poses a significant risk. Hancock’s case is a cautionary tale underscoring the need for careful verification and scrutiny of AI-generated content; as the judge’s ruling makes plain, AI can be a valuable tool, but it cannot replace human judgment and critical thinking.

This incident is not isolated. In 2023, two lawyers were fined for submitting legal filings containing fake case citations generated by ChatGPT, evidence of how widespread the problem has become within the legal profession. As AI chatbots spread across fields, clear guidelines and safeguards against the dissemination of misinformation become increasingly critical. Judge Provinzino’s ruling joins a rising chorus of legal professionals and academics calling for the responsible and ethical use of AI, with verification and independent professional judgment at its core. The case also raises questions about liability and professional consequences for those who rely on unverified AI-generated content, particularly in high-stakes settings such as legal proceedings. As AI technology continues to evolve, the legal and ethical implications of its use will undoubtedly remain under scrutiny and debate.
