
Stanford Misinformation Specialist Acknowledges ChatGPT’s ‘Hallucinations’ in Court Testimony

By News Room | December 4, 2024 | 3 min read

AI Missteps in Legal Declarations: Communication Professor Faces Scrutiny for Fabricated Citations

Communication professor Jeff Hancock has found himself at the center of a controversy after admitting that a court declaration he drafted on deepfake technology included fabricated, AI-generated citations. In filings submitted to the United States District Court for the District of Minnesota, Hancock expressed regret for overlooking these "hallucinated" citations, which he had sourced from the AI model GPT-4o while researching a case concerning a state ban on deepfakes intended to influence elections. The case has proven contentious, with plaintiffs arguing that the ban violates their free speech rights, and it has drawn significant attention to Hancock's misstep.

Hancock submitted his expert declaration on November 1 in support of the defendant, Minnesota Attorney General Keith Ellison, asserting that deepfakes could exacerbate misinformation and threaten the integrity of democratic institutions. His credibility took a hit, however, when plaintiffs' attorneys pointed out that some of the citations he included did not correspond to real scholarly articles, prompting accusations that he had relied excessively on AI tools in crafting his statement. Following these revelations, Hancock sent a follow-up letter to the court explaining how the inaccuracies occurred and emphasizing that he never intended to mislead anyone involved in the case.

In his admission, Hancock detailed how he prepared the declaration: he used GPT-4o alongside Google Scholar to compile relevant literature and citations, but failed to fact-check several AI-generated entries that turned out to be inaccurate or entirely fictitious. He also acknowledged misattributing the authorship of an existing study, which further complicated his position. "I use tools like GPT-4o to enhance the quality and efficiency of my workflow," he stated, yet in this instance that reliance proved detrimental.

The controversy has raised significant questions about the ethical use of AI in academic and legal contexts. Hancock has openly expressed regret for any confusion the fabricated citations caused, but he maintains that the substantive arguments of his declaration about the risks posed by deepfake technology remain valid despite the citation errors. The university community has reacted with a mix of concern and irony, particularly because Hancock had been teaching his students about proper citation practices alongside broader discussions of truth and technology.

The day after the news broke, Hancock taught his class remotely. His students happened to be grappling with the nuances of citation and representation in academic writing, and some noted the irony of learning about the importance of citing diverse scholars while their professor faced scrutiny for failing to meet the same academic standards. The episode has fueled further discussion of the relationship between technology and accountability in educational settings, particularly as educators increasingly incorporate algorithms and AI tools into their work.

As the legal case progresses, Hancock's predicament serves as a stark reminder of the pitfalls of emerging technologies, especially in academic and professional settings. The incident raises urgent questions about the reliability and accountability of AI tools in research and legal work, and prompts broader reflection on the ethical implications of bringing such technology into critical discourse on misinformation and public communication. The outcome of the case may not only influence Hancock's reputation and teaching career but also set important precedents for how AI-generated content is viewed and used across sectors.
