
Stanford ‘Misinformation’ Expert Falls Victim to Embarrassing AI Mistake

By News Room | December 4, 2024 | 3 Mins Read

Stanford Expert’s AI Misstep Raises Concerns in Legal Case Against Deepfakes

In a surprising revelation, Professor Jeff Hancock, a well-regarded authority on misinformation and the founder of the Stanford Social Media Lab, has admitted to using artificial intelligence (AI) to help draft an expert declaration that introduced fabricated evidence into a federal court case. Hancock was enlisted by Minnesota Attorney General Keith Ellison to support a state law that penalizes election-related deepfakes. However, his declaration, which included passages generated by ChatGPT, was found to contain false information, with serious implications for the legal proceedings. The incident has raised alarm about the reliability of AI-generated content, particularly in sensitive contexts such as legal testimony.

The plaintiffs contesting the Minnesota law include conservative content creator Christopher Kohls, who is known for his spoof videos, and Republican Minnesota Rep. Mary Franson. They argue that the law, revised in 2024, unlawfully restricts free speech. The plaintiffs’ legal team flagged Hancock’s declaration for referencing a fictitious study authored by "Huang, Zhang, and Wang," prompting suspicions that Hancock relied on AI capabilities to draft parts of the 12-page document. As the legal battle unfolded, concerns about the accuracy of Hancock’s claims mounted, leading to calls for the dismissal of his declaration, which was seen as riddled with potential misinformation.

Under scrutiny, Hancock acknowledged that his declaration contained two additional instances of AI-generated "hallucinations," including misleading text and nonsensical visuals. The AI's fabrications were not limited to concocted studies; it also invented a nonexistent article attributed to made-up authors. In his defense, Hancock emphasized his extensive expertise and the broad research he has conducted on misinformation and its psychological implications. He said he used ChatGPT to assist with his research and that the false citations were generated inadvertently while he was compiling legitimate academic references.

Despite Hancock’s explanations, the plaintiffs’ attorneys accused him of perjury for having sworn to the accuracy of his sources, which were ultimately found to be fabricated. While Hancock maintained that these discrepancies did not undermine the scientific evidence or his opinions, the incident has fueled ongoing debates about the role of AI in academia and the legal system. A hearing is set for December 17 to address the validity of Hancock’s expert declaration and its potential ramifications on the ongoing case against the Minnesota law.

The fallout from Hancock's admission raises broader questions about the use of AI in professional settings, particularly in the legal field. His predicament is part of a troubling trend: another recent case involved New York attorney Jae Lee, who faced disciplinary consequences after citing a fabricated case generated by ChatGPT in a medical malpractice lawsuit. That episode further underscores the risks of AI's spread into professional domains where accuracy is essential.

As the case progresses, Stanford University has yet to respond regarding possible disciplinary action against Hancock. The implications may extend beyond Hancock himself, prompting closer examination and potentially stricter rules on AI's role in producing reliable scholarship and expert testimony. The legal challenges posed by AI-generated material could spur critical discussions about ethics, accountability, and the guidelines needed to safeguard the integrity of legal and academic practice in an increasingly AI-dependent world.
