Web Stat
Misinformation

Expert’s Credibility Compromised by Citation of Fabricated and AI-Generated Sources

By News Room · January 11, 2025 · 3 Mins Read

AI Misinformation Expert Falls Prey to AI Hallucination in Minnesota Deepfake Lawsuit

A legal challenge to Minnesota’s law prohibiting the malicious use of deepfakes in political campaigns has taken an ironic turn, highlighting the very dangers the law seeks to address. Jeff Hancock, a Stanford University professor and expert on AI and misinformation, submitted an expert declaration supporting the state’s defense of the law, only to have it revealed that the declaration contained citations to non-existent academic articles generated by the AI chatbot GPT-4. This incident underscores the growing concern over the reliability of AI-generated content in legal proceedings and the crucial need for rigorous verification.

The case centers on Minnesota’s statute prohibiting the dissemination of deepfakes – manipulated videos or images that appear authentic – with the intent to harm a political candidate or influence an election. Plaintiffs argue that the law infringes on First Amendment rights and have sought a preliminary injunction to prevent its enforcement. Minnesota Attorney General Keith Ellison, in defending the law, submitted expert declarations, including one from Professor Hancock, to underscore the potential threat deepfakes pose to elections and democratic processes.

However, the defense’s case was undermined when it came to light that Professor Hancock’s declaration contained fabricated citations. Attorney General Ellison acknowledged the errors, attributing them to Professor Hancock’s reliance on GPT-4 for drafting assistance. Professor Hancock admitted to using the AI tool and failing to verify the generated citations before submitting the declaration under penalty of perjury. Although he maintained that the substantive arguments in his declaration were accurate, the presence of fabricated citations severely damaged his credibility.

The incident has drawn attention to the increasing use of AI in legal research and writing, and the potential pitfalls associated with unchecked reliance on such tools. While acknowledging AI’s potential to revolutionize legal practice and improve access to justice, the court emphasized the critical importance of maintaining human oversight and critical thinking. The judge explicitly warned against abdicating independent judgment in favor of AI-generated content, highlighting the potential for such reliance to negatively impact the quality of legal work and judicial decision-making.

This case echoes a growing number of instances where AI-generated inaccuracies have disrupted legal proceedings. Courts across the country have issued sanctions and rebukes to attorneys who submitted filings containing fabricated citations generated by AI. The Minnesota court joined this chorus, emphasizing the non-delegable responsibility of attorneys to ensure the accuracy of all submitted materials, particularly those signed under penalty of perjury.

In light of the compromised declaration, the court rejected Attorney General Ellison’s request to file an amended version. While acknowledging Professor Hancock’s expertise on AI and misinformation, the judge deemed the damage irreparable, stressing the importance of trust in declarations made under oath and the corrosive effect of false citations on the integrity of legal proceedings. The court urged attorneys to adopt procedures for verifying AI-generated content, including explicitly asking whether AI was used in drafting witness declarations. The case stands as a cautionary tale about over-reliance on AI, and a reminder of the enduring importance of human judgment and rigorous fact-checking in the legal profession.

Copyright © 2025 Web Stat. All Rights Reserved.