
Misinformation Specialist Acknowledges ChatGPT Inserted False Information into His Anti-Deepfake Legal Document

By News Room | December 4, 2024

In a recent case highlighting the intersection of artificial intelligence and legal processes, misinformation expert Jeff Hancock has been accused of using AI to introduce inaccuracies into a legal document. Hancock, founder of the Stanford Social Media Lab, submitted an affidavit in support of Minnesota’s “Use of Deep Fake Technology to Influence an Election” law, which is being challenged in federal court by conservative YouTuber Christopher Kohls, known as Mr. Reagan, and Minnesota state Rep. Mary Franson. Critics quickly pointed out discrepancies in Hancock’s filing, alleging that it contained citations that were either incorrect or nonexistent, calling the reliability of the entire document into question.

Upon scrutiny, attorneys representing Kohls and Franson characterized Hancock’s affidavit as “unreliable” and asked the court to exclude it from consideration. The central point of contention is Hancock’s use of AI tools such as ChatGPT to help organize his citations. This led to accusations that the AI had “hallucinated,” generating misleading references that undermined the legitimacy of his scholarly claims. The disclosure of these errors has raised broader concerns about the accuracy and accountability of AI-generated content in legal settings.

In response to the backlash, Hancock filed a subsequent declaration addressing the allegations. He acknowledged his use of ChatGPT for drafting purposes but clarified that the actual writing and review of the document were conducted by him. He asserted his integrity, stating, “I stand firmly behind each of the claims made in it,” emphasizing that all assertions reflect the latest scholarly research on AI technology’s effects on misinformation and society. Despite facing significant criticism, Hancock maintained confidence in the substantive points made in his original declaration.

Regarding the citation inaccuracies, Hancock explained that he used both Google Scholar and the AI model GPT-4o to curate a list of relevant articles supporting his declaration. He expressed regret that the tools produced two citation errors, commonly termed “hallucinations,” and acknowledged that incorrect authors were attributed to one of the citations. These errors have been perceived as a serious lapse in the rigor expected of legal documents, prompting calls for further scrutiny of AI’s role in legal research and submissions.

In his statement, Hancock stressed that his intention was never to mislead the court or opposing counsel, emphasizing his commitment to honesty and scholarly rigor. He expressed sincere regret for any confusion caused by the citation errors, reiterating that his conclusions remained intact despite the flawed references. The incident has sparked discussion of the ethical implications of using AI in professional domains and the risks of relying on technology that can produce fictitious information.

As the court case proceeds, Hancock’s use of AI tools could have lasting repercussions for the legal field, raising vital questions about the credibility of evidence generated through automated means and the responsibility of experts to ensure the accuracy of their claims. As more professionals adopt AI technologies, the legal community may need to establish clearer guidelines and standards to prevent similar incidents, ensuring that the integration of AI enhances, rather than undermines, the reliability of legal processes.
