Expert’s Credibility Compromised by Citation of Fabricated and AI-Generated Sources

By News Room | January 11, 2025 | 3 Mins Read

AI Misinformation Expert Falls Prey to AI Hallucination in Minnesota Deepfake Lawsuit

A legal challenge to Minnesota’s law prohibiting the malicious use of deepfakes in political campaigns has taken an ironic turn, highlighting the very dangers the law seeks to address. Jeff Hancock, a Stanford University professor and expert on AI and misinformation, submitted an expert declaration supporting the state’s defense of the law, only to have it revealed that the declaration contained citations to non-existent academic articles generated by the AI chatbot GPT-4. This incident underscores the growing concern over the reliability of AI-generated content in legal proceedings and the crucial need for rigorous verification.

The case centers on Minnesota’s statute prohibiting the dissemination of deepfakes – manipulated videos or images that appear authentic – with the intent to harm a political candidate or influence an election. Plaintiffs argue that the law infringes on First Amendment rights and have sought a preliminary injunction to prevent its enforcement. Minnesota Attorney General Keith Ellison, in defending the law, submitted expert declarations, including one from Professor Hancock, to underscore the potential threat deepfakes pose to free speech and democratic processes.

However, the defense’s case was undermined when it came to light that Professor Hancock’s declaration contained fabricated citations. Attorney General Ellison acknowledged the errors, attributing them to Professor Hancock’s reliance on GPT-4 for drafting assistance. Professor Hancock admitted to using the AI tool and to failing to verify the generated citations before submitting the declaration under penalty of perjury. Although he maintained that the declaration’s substantive arguments were accurate, the fabricated citations severely damaged his credibility.

The incident has drawn attention to the increasing use of AI in legal research and writing, and the potential pitfalls associated with unchecked reliance on such tools. While acknowledging AI’s potential to revolutionize legal practice and improve access to justice, the court emphasized the critical importance of maintaining human oversight and critical thinking. The judge explicitly warned against abdicating independent judgment in favor of AI-generated content, highlighting the potential for such reliance to negatively impact the quality of legal work and judicial decision-making.

This case echoes a growing number of instances where AI-generated inaccuracies have disrupted legal proceedings. Courts across the country have issued sanctions and rebukes to attorneys who submitted filings containing fabricated citations generated by AI. The Minnesota court joined this chorus, emphasizing the non-delegable responsibility of attorneys to ensure the accuracy of all submitted materials, particularly those signed under penalty of perjury.

In light of the compromised credibility of the original declaration, the court rejected Attorney General Ellison’s request to file an amended version. While acknowledging Professor Hancock’s expertise on AI and misinformation, the judge deemed the damage irreparable. The court stressed the importance of trust in declarations made under oath and the detrimental impact of false citations on the integrity of legal proceedings.

The incident serves as a stark reminder of the need for vigilance and verification when using AI tools in legal contexts. The court urged attorneys to implement procedures to verify AI-generated content, including explicitly asking whether AI was used in drafting witness declarations. The case stands as a cautionary tale, highlighting the consequences of over-reliance on AI and underscoring the enduring importance of human judgment and rigorous fact-checking in the legal profession.
