Web Stat
Stanford Misinformation Expert Faces Allegations of AI-Fabricated Court Testimony

By News Room | December 7, 2024 | 4 Mins Read

Stanford Professor’s Court Testimony on Deepfakes Questioned Amidst Allegations of AI Fabrication

A prominent Stanford University communication professor, Jeff Hancock, an expert on technology and misinformation, has found himself embroiled in controversy after submitting a court declaration riddled with questionable citations. Hancock, the founding director of Stanford’s Social Media Lab, testified in a Minnesota court case challenging the state’s 2023 law criminalizing the use of deepfakes to manipulate elections. His declaration, submitted in defense of the law, contained two citations to academic journal articles that appear to be entirely fabricated, raising concerns about the veracity of his testimony and prompting accusations of AI assistance in crafting the document.

The case revolves around a lawsuit filed by Republican Minnesota State Representative Mary Franson and conservative social media satirist Christopher Kohls, who argue that the deepfake law infringes upon their First Amendment rights. Hancock, testifying on behalf of Minnesota Attorney General Keith Ellison, asserted that deepfakes pose a significant threat to election integrity by enhancing the persuasiveness of misinformation and circumventing traditional fact-checking methods. His testimony, provided at a rate of $600 per hour, was made under penalty of perjury, attesting to the "truth and correctness" of his statements.

However, the seemingly unimpeachable nature of Hancock’s expert testimony has been called into question. Independent investigations conducted by various news outlets, including The Daily, have failed to locate the two cited journal articles—"Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance” and “The Influence of Deepfake Videos on Political Attitudes and Behavior"—within any reputable academic databases or the purported journals’ archives. These articles appear to be nonexistent, casting a shadow of doubt over Hancock’s research and potentially undermining the credibility of his entire declaration.

The plaintiffs’ attorney, Frank Bednarz, seized upon these discrepancies, filing a motion to exclude Hancock’s declaration from the court’s consideration. Bednarz argued that the fabricated citations bear the hallmarks of "AI hallucinations," suggesting that a large language model such as ChatGPT was used to generate the nonexistent references. He further contended that the presence of these fictitious citations raises serious questions about the overall quality and reliability of Hancock’s testimony, insinuating that the professor or his assistants failed to perform even basic verification checks.

The implications of these allegations extend beyond the immediate legal battle. Hancock, a recognized expert frequently consulted on matters of technology and misinformation, recently appeared in a Netflix documentary alongside Bill Gates, discussing the future of AI. He is also scheduled to teach a Stanford course in the spring titled "Truth, Trust, and Tech," focusing on deception and communication technology. The controversy surrounding his court testimony threatens to damage his reputation and cast doubt on his expertise in these areas.

This incident also highlights growing concerns about the use of AI in academic and legal contexts. The ease with which AI tools can generate plausible-sounding but fabricated information raises serious questions about the integrity of research and the potential for misuse. As AI systems grow more sophisticated, distinguishing human-generated from AI-generated content becomes harder, demanding robust verification methods and heightened scrutiny. The accusations against Hancock serve as a cautionary tale, underscoring the importance of rigorous fact-checking and the consequences of relying on unverified AI-generated material.

The case also adds another layer to the ongoing debate over deepfake regulation and the balance between protecting free speech and preventing the spread of misinformation. The outcome of the Minnesota case, and of any subsequent investigation into Hancock’s testimony, will have significant implications for the future of deepfake legislation and for the role of AI in legal and academic settings. It remains to be seen how the incident will shape the broader discussion of AI’s ethical implications and its potential to erode trust in expert testimony.
