Web Stat

Stanford ‘Misinformation’ Expert Falls Victim to Embarrassing AI Mistake

By News Room · December 4, 2024 · 3 min read

Stanford Expert’s AI Misstep Raises Concerns in Legal Case Against Deepfakes

In a surprising revelation, Professor Jeff Hancock, a well-regarded authority on misinformation and founder of the Stanford Social Media Lab, has admitted that a declaration he filed in a federal court case contained fabricated material generated by artificial intelligence (AI). Hancock was retained by Minnesota Attorney General Keith Ellison to defend a state law that penalizes election-related deepfakes. His expert declaration, parts of which were drafted with ChatGPT, was found to contain false information, with serious implications for the legal proceedings. The incident has raised alarm about the reliability of AI-generated content, particularly in sensitive contexts such as legal testimony.

The plaintiffs contesting the Minnesota law include conservative content creator Christopher Kohls, who is known for his spoof videos, and Republican Minnesota Rep. Mary Franson. They argue that the law, revised in 2024, unlawfully restricts free speech. The plaintiffs' legal team flagged Hancock's declaration for referencing a fictitious study attributed to "Huang, Zhang, and Wang," prompting suspicions that Hancock had relied on AI to draft parts of the 12-page document. As the legal battle unfolded, concerns about the accuracy of Hancock's claims mounted, prompting calls to strike his declaration as unreliable.

Under scrutiny, Hancock acknowledged that his declaration contained two additional instances of AI-generated "hallucinations." The AI's fabrications were not limited to concocted studies; it also produced a citation to a nonexistent article by made-up authors. In his defense, Hancock pointed to his extensive expertise and his broad research on misinformation and its psychological effects. He said he had used ChatGPT to assist with his research, and that the false citations were generated inadvertently during his attempts to produce legitimate academic references.

Despite Hancock’s explanations, the plaintiffs’ attorneys accused him of perjury for having sworn to the accuracy of his sources, which were ultimately found to be fabricated. While Hancock maintained that these discrepancies did not undermine the scientific evidence or his opinions, the incident has fueled ongoing debates about the role of AI in academia and the legal system. A hearing is set for December 17 to address the validity of Hancock’s expert declaration and its potential ramifications on the ongoing case against the Minnesota law.

The fallout from Hancock's admission raises broader questions about the use of AI in professional settings, particularly in the legal field. His predicament is part of a troubling trend: New York attorney Jae Lee recently faced disciplinary consequences after citing a fabricated case generated by ChatGPT in a medical malpractice lawsuit. The episode underscores the risks of AI's spread into professional domains where accuracy is essential.

As the case progresses, Stanford University has yet to say whether it will take disciplinary action against Hancock. The implications may extend beyond Hancock himself, inviting closer scrutiny and potentially stricter rules governing AI's role in producing reliable scholarship and expert testimony. The legal challenges posed by AI-generated material could spur critical discussions about ethics, accountability, and the safeguards needed to protect the integrity of legal and academic practice in an increasingly AI-dependent world.

Copyright © 2026 Web Stat. All Rights Reserved.