
Stanford Misinformation Specialist Acknowledges ChatGPT’s ‘Hallucinations’ in Court Testimony

By News Room · December 4, 2024 · 3 Mins Read

AI Missteps in Legal Declarations: Communication Professor Faces Scrutiny for Fabricated Citations

Communication Professor Jeff Hancock recently found himself at the center of a controversy after admitting that a court declaration he drafted on the use of deepfake technology contained fabricated, AI-generated citations. In filings submitted to the United States District Court for the District of Minnesota, Hancock expressed regret for overlooking these so-called “hallucinated citations,” which he had sourced from the AI model GPT-4o while researching a case over a state ban on deepfakes intended to influence elections. The case has proven contentious, with plaintiffs arguing that the ban violates their free-speech rights, which has drawn significant attention to Hancock’s misstep.

Hancock initially submitted his expert declaration on November 1 to support the defendant, Minnesota Attorney General Keith Ellison, asserting that deepfakes could exacerbate misinformation and threaten the integrity of democratic institutions. However, his credibility took a hit when plaintiffs’ attorneys highlighted that some of the citations he included did not correspond to real scholarly articles, sparking accusations that he relied excessively on AI tools in crafting his court statement. Following these revelations, Hancock penned a follow-up letter to the court, clarifying how the inaccuracies occurred and emphasizing that he never intended to mislead anyone involved in the case.

In his admission, Hancock detailed the methodology behind his declaration, indicating that he utilized GPT-4o in conjunction with Google Scholar to compile relevant literature and citations. Unfortunately, he failed to fact-check several AI-generated entries that were ultimately inaccurate or entirely fictitious. Hancock also acknowledged an error in the authorship of an existing study, further complicating his position. “I use tools like GPT-4o to enhance the quality and efficiency of my workflow,” he stated, yet the reliance on AI proved detrimental in this instance.
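The failure described above was a missing verification step: AI-generated references went into the declaration unchecked. As a purely illustrative sketch (not Hancock’s actual workflow, and no substitute for reading the cited works), the triage below flags generated citations that carry no checkable identifier such as a DOI, so a human knows which entries demand manual verification first:

```python
import re

# Illustrative sketch only: this cannot prove a reference is real.
# It merely separates citations that carry a DOI (which a reviewer
# can try to resolve) from those with no identifier at all, which
# need manual lookup before they go anywhere near a court filing.

# Loose pattern for DOIs of the form 10.NNNN/suffix.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+\b")

def triage_citations(citations):
    """Split citations into DOI-bearing (checkable) and identifier-free
    (manual-verification-required) groups."""
    checkable, manual = [], []
    for c in citations:
        (checkable if DOI_PATTERN.search(c) else manual).append(c)
    return checkable, manual

# Both entries are hypothetical examples, not real publications.
refs = [
    "Doe, J. (2020). Deepfakes and trust. doi:10.1000/example123",
    "Smith, A. (2023). Synthetic media and elections.",
]
checkable, manual = triage_citations(refs)
```

Even the “checkable” group still requires resolving each DOI and confirming that the title and authors match; a fabricated citation can carry a plausible-looking but nonexistent identifier.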

The controversy has raised significant questions about the ethical use of AI in academic and legal contexts, with Hancock openly expressing his regret for any confusion caused by the fabricated citations. He maintains, however, that the substantive arguments of his declaration regarding the risks posed by deepfake technology remain valid despite the citation errors. In the wake of the revelation, the university community and students have reacted with a blend of concern and irony, particularly as Hancock had been teaching his students about the importance of proper citation practices in conjunction with broader discussions of truth and technology.

The day after the filing came to light, Hancock taught his class remotely as students grappled with the nuances of citation and representation in academic writing. Some noted the irony of learning about the importance of citing diverse scholars while their professor faced scrutiny for failing to meet the same academic standards. The situation has sparked further discussion of the relationship between technology and accountability in educational settings, particularly as educators increasingly incorporate algorithms and AI tools into their methods.

As the legal case progresses, Hancock’s predicament serves as a stark reminder of the potential pitfalls of emerging technologies, especially in academic and professional settings. The incident raises urgent questions about the reliability and accountability of AI tools in research and legal work, prompting broader reflection on the ethical implications of integrating such technology into critical discourse on misinformation and public communication. The outcome of this case may not only influence Hancock’s reputation and teaching career but could also set important precedents for how AI-generated content is viewed and used across sectors.

Copyright © 2026 Web Stat. All Rights Reserved.