
Stanford Misinformation Specialist Acknowledges ChatGPT’s ‘Hallucinations’ in Court Testimony

By News Room | December 4, 2024 | 3 Mins Read

AI Missteps in Legal Declarations: Communication Professor Faces Scrutiny for Fabricated Citations

Communication professor Jeff Hancock has found himself at the center of a controversy after admitting that a court declaration he drafted on the use of deepfake technology contained fabricated, AI-generated citations. In filings submitted to the United States District Court for the District of Minnesota, Hancock expressed regret for overlooking these so-called “hallucinated citations,” which he had sourced from the AI model GPT-4o while researching a case concerning a state ban on deepfakes that influence elections. The case is contentious: plaintiffs argue the ban violates their free speech rights, and the dispute has drawn significant attention to Hancock’s missteps.

Hancock submitted his expert declaration on November 1 in support of the defendant, Minnesota Attorney General Keith Ellison, arguing that deepfakes could exacerbate misinformation and threaten the integrity of democratic institutions. His credibility took a hit, however, when plaintiffs’ attorneys pointed out that some of his citations did not correspond to real scholarly articles, prompting accusations that he had relied excessively on AI tools in drafting his statement. Following these revelations, Hancock sent a follow-up letter to the court explaining how the inaccuracies occurred and emphasizing that he never intended to mislead anyone involved in the case.

In his admission, Hancock detailed his methodology: he used GPT-4o alongside Google Scholar to compile relevant literature and citations, but failed to fact-check several AI-generated entries that turned out to be inaccurate or entirely fictitious. He also acknowledged misattributing the authorship of an existing study, further complicating his position. “I use tools like GPT-4o to enhance the quality and efficiency of my workflow,” he stated, yet that reliance on AI proved detrimental in this instance.
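The verification step Hancock skipped can be illustrated with a minimal, entirely hypothetical sketch: before filing, each AI-suggested citation is checked against an independently verified reference list, and anything unmatched is flagged for manual review. The titles and data below are invented for illustration and have nothing to do with the actual declaration.

```python
# Hypothetical sketch: flag AI-suggested citations whose titles do not
# appear in an independently verified reference list. All titles here
# are invented examples, not citations from the case.

def flag_unverified(ai_citations, verified_titles):
    """Return the citations whose titles are absent from the trusted list."""
    trusted = {t.strip().lower() for t in verified_titles}
    return [c for c in ai_citations if c["title"].strip().lower() not in trusted]

ai_citations = [
    {"title": "Deepfakes and Trust in Media", "year": 2021},   # present in trusted list
    {"title": "Synthetic Persuasion at Scale", "year": 2023},  # not found -> flagged
]
verified_titles = ["Deepfakes and Trust in Media"]

for citation in flag_unverified(ai_citations, verified_titles):
    print("UNVERIFIED:", citation["title"])
```

A real workflow would resolve each entry against a bibliographic database rather than a local list, but the principle is the same: no AI-generated citation enters a court filing without an independent match.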

The controversy has raised significant questions about the ethical use of AI in academic and legal contexts, and Hancock has openly expressed regret for any confusion the fabricated citations caused. He maintains, however, that the substantive arguments of his declaration about the risks posed by deepfake technology remain valid despite the citation errors. In the wake of the revelation, students and the wider university community have reacted with a mix of concern and irony, particularly because Hancock had been teaching his students about proper citation practices alongside broader discussions of truth and technology.

The day after the revelations, Hancock taught his class remotely while his students were studying the nuances of citation and representation in academic writing. Some noted the irony of being taught the importance of citing diverse scholars while their professor faced scrutiny for failing to meet those same academic standards. The episode has fueled further discussion of the relationship between technology and accountability in education, particularly as educators increasingly incorporate algorithms and AI tools into their teaching.

As the legal case progresses, Hancock’s predicament serves as a stark reminder of the pitfalls of emerging technologies in academic and professional settings. The incident raises urgent questions about the reliability and accountability of AI tools in research and legal work, prompting broader reflection on the ethics of integrating such technology into critical discourse on misinformation and public communication. The outcome of the case may not only affect Hancock’s reputation and teaching career but could also set important precedents for how AI-generated content is treated across sectors.
