Stanford AI Expert’s Testimony Undermined by Fabricated AI-Generated Sources, Judge Rules

By News Room | January 15, 2025 | Updated: January 16, 2025 | 4 Min Read

AI Expert’s Testimony Tossed After Citing AI-Fabricated Research

In a case brimming with irony, a Stanford University professor specializing in artificial intelligence and misinformation had his expert testimony dismissed by a federal judge after it was revealed to include fabricated information generated by an AI chatbot. Professor Jeff Hancock, founding director of the Stanford Social Media Lab, was retained by the Minnesota Attorney General’s office to provide expert testimony in defense of the state’s law criminalizing AI-generated “deepfake” election imagery. The lawsuit contesting the law was brought by a state legislator and a satirist who publishes on YouTube. Hancock’s reliance on an AI chatbot while preparing his declaration, however, led to the inclusion of fabricated research and citations, ultimately undermining his credibility and prompting the court to strike his testimony.

U.S. District Judge Laura Provinzino noted the irony of the situation: Hancock, an expert on the dangers of AI and misinformation, had himself fallen victim to those very dangers. That Hancock has also published research on irony only amplified the peculiarity of the circumstances. The judge stressed the importance of verifying AI-generated content, warning that relying on such technology without exercising critical thinking and independent judgment can harm the legal profession and the court’s decision-making process.

The errors in Hancock’s declaration came to light when lawyers for the plaintiffs discovered a cited study that did not exist, attributed to authors who did not exist either, and likely generated by an AI large language model such as ChatGPT. Hancock admitted to using ChatGPT 4.0 to aid in his research, explaining that the errors likely arose from the chatbot misinterpreting the word "cite" as an instruction to generate citations, which it then fabricated. He acknowledged responsibility for the errors, which also included incorrect author attributions for legitimate research, and apologized to the court.

Judge Provinzino acknowledged Hancock’s qualifications as an expert on AI and deepfakes but stated that the inclusion of the fabricated information, despite Hancock’s explanation, irrevocably damaged his credibility. The judge emphasized the importance of reliability in expert testimony and pointed out the wasted time and resources incurred by the opposing party due to the flawed submission. While the Minnesota Attorney General’s office sought to submit a corrected version of Hancock’s testimony, the judge remained firm in her decision to dismiss it entirely.

The incident highlights a growing concern about the use of AI chatbots in professional settings, particularly in the legal field. While these tools could transform legal practice, their tendency to generate false information, often referred to as "hallucinations," poses a significant risk. Hancock’s case is a cautionary tale, underscoring the need for careful verification and scrutiny of AI-generated content. The judge’s ruling is a reminder that while AI can be a valuable tool, it cannot replace human judgment and critical thinking.

This incident is not isolated. In 2023, two lawyers faced fines for submitting legal filings containing fake case citations generated by ChatGPT, demonstrating the growing pervasiveness of this issue within the legal profession. As the use of AI chatbots expands across various fields, the need for clear guidelines and safeguards against the dissemination of misinformation becomes increasingly critical. Judge Provinzino’s ruling adds to a rising chorus of legal professionals and academics advocating for the responsible and ethical use of AI technology, highlighting the importance of verification and the exercise of independent professional judgment. The case also raises questions about the potential liability and professional consequences for individuals who rely on AI-generated content without proper verification, particularly in high-stakes situations like legal proceedings. As AI technology continues to evolve, the legal and ethical implications of its use will undoubtedly continue to be scrutinized and debated.
