Web Stat
Misinformation

Expert’s Credibility Compromised by Citation of Fabricated and AI-Generated Sources

By News Room · January 11, 2025 · 3 min read

AI Misinformation Expert Falls Prey to AI Hallucination in Minnesota Deepfake Lawsuit

A legal challenge to Minnesota’s law prohibiting the malicious use of deepfakes in political campaigns has taken an ironic turn, highlighting the very dangers the law seeks to address. Jeff Hancock, a Stanford University professor and expert on AI and misinformation, submitted an expert declaration supporting the state’s defense of the law, only to have it revealed that the declaration contained citations to non-existent academic articles generated by the AI chatbot GPT-4. This incident underscores the growing concern over the reliability of AI-generated content in legal proceedings and the crucial need for rigorous verification.

The case centers on Minnesota’s statute prohibiting the dissemination of deepfakes – manipulated videos or images that appear authentic – with the intent to harm a political candidate or influence an election. Plaintiffs argue that the law infringes on First Amendment rights and have sought a preliminary injunction to prevent its enforcement. Minnesota Attorney General Keith Ellison, in defending the law, submitted expert declarations, including one from Professor Hancock, to underscore the potential threat deepfakes pose to free speech and democratic processes.

However, the defense’s case was undermined when it came to light that Professor Hancock’s declaration contained fabricated citations. Attorney General Ellison acknowledged the errors, attributing them to Professor Hancock’s reliance on GPT-4 for drafting assistance. Professor Hancock admitted to using the AI tool and failing to verify the generated citations before submitting the declaration under penalty of perjury. Although he maintained that the substantive arguments in his declaration were accurate, the presence of fabricated citations severely damaged his credibility.

The incident has drawn attention to the increasing use of AI in legal research and writing, and the potential pitfalls associated with unchecked reliance on such tools. While acknowledging AI’s potential to revolutionize legal practice and improve access to justice, the court emphasized the critical importance of maintaining human oversight and critical thinking. The judge explicitly warned against abdicating independent judgment in favor of AI-generated content, highlighting the potential for such reliance to negatively impact the quality of legal work and judicial decision-making.

This case echoes a growing number of instances where AI-generated inaccuracies have disrupted legal proceedings. Courts across the country have issued sanctions and rebukes to attorneys who submitted filings containing fabricated citations generated by AI. The Minnesota court joined this chorus, emphasizing the non-delegable responsibility of attorneys to ensure the accuracy of all submitted materials, particularly those signed under penalty of perjury.

In light of the compromised declaration, the court rejected Attorney General Ellison’s request to file an amended version. While acknowledging Professor Hancock’s expertise on AI and misinformation, the judge deemed the damage irreparable, stressing the importance of trust in declarations made under oath and the corrosive effect of false citations on the integrity of legal proceedings.

The court urged attorneys to adopt verification procedures for AI-generated content, including explicitly asking whether AI was used in drafting witness declarations. The case stands as a cautionary tale about over-reliance on AI, and a reminder of the enduring importance of human judgment and rigorous fact-checking in the legal profession.

Copyright © 2025 Web Stat. All Rights Reserved.