Web Stat
Misinformation

Misinformation Expert Acknowledges Use of AI-Generated Fake Citations in Minnesota Case

By News Room · December 3, 2024 · 3 Mins Read

In a troubling revelation, Jeff Hancock, a Stanford University expert specializing in misinformation, admitted to using artificial intelligence (AI) to draft a court document that included multiple fabricated citations about AI itself. The incident arose from a legal case over a new Minnesota law aimed at prohibiting the use of AI to deceive voters before elections. Hancock’s submission came under scrutiny after opposing lawyers discovered the false citations, generated by the AI tool ChatGPT-4o, and filed a motion to dismiss his declaration on the basis of this misinformation.

Hancock, who charged the Minnesota Attorney General’s Office $600 per hour for his expertise, explained that the erroneous citations were an unintended outcome of using the AI. In a new court motion, the Attorney General’s Office stated that Hancock described the citations as “AI-hallucinated” and that he had no intention of misleading the court or opposing counsel. Notably, the office became aware of the fabricated citations only after the opposing lawyers raised concerns, and it has since asked the judge for permission to let Hancock amend his declaration.

In defense of his actions, Hancock emphasized the increasingly prevalent role of generative AI tools like ChatGPT in academic research and documentation processes. He highlighted that such practices have become common, referencing AI’s incorporation into widely used applications such as Microsoft Word and Gmail for composing documents. However, this case raises significant ethical questions about the application of AI in legal contexts, especially in light of a recent ruling by a New York court stating that lawyers must disclose when AI is employed in expert opinions. This court had previously rejected a lawyer’s declaration upon discovering it contained AI-generated material.

Jeff Hancock, recognized for his scholarly contributions on misinformation and technology, has published numerous papers on AI’s implications for communication. He disclosed that he used ChatGPT-4o to help compile a literature survey on deepfakes and to draft his legal declaration. Hancock speculated that the AI misinterpreted his notes as directives to insert fictitious citations, highlighting the risks of relying on AI in professional settings.

The incident raises vital discussions on the ethical ramifications of AI integration in legal processes, particularly regarding the inadvertent introduction of misinformation. The case not only underscores the difficulty in verifying the integrity of AI-generated content but also suggests the need for clearer guidelines regarding AI’s use within the legal system. While Hancock’s expertise is noteworthy, the implications of this misstep call into question how artificial intelligence might influence future legal proceedings.

With Hancock’s extensive involvement as an expert witness in various court cases, the unanswered question remains whether AI was similarly utilized in those instances. As this story unfolds, it brings to light the necessity of establishing protocols that ensure the accuracy and authenticity of information presented in court, particularly as the lines between human and machine-generated content continue to blur in an increasingly digital world.

Copyright © 2026 Web Stat. All Rights Reserved.