Expert Witness on Misinformation Cited Fabricated Sources in Minnesota Deepfake Case

By News Room · December 29, 2024 · 4 Mins Read

Misinformation Expert’s Testimony in Minnesota Deepfake Law Case Under Scrutiny for Suspect Citations

A legal battle over Minnesota’s new law banning the use of "deepfake" technology in elections has taken an unexpected turn, with a leading misinformation expert facing accusations of citing fabricated academic sources in his testimony supporting the legislation. Professor Jeff Hancock, founding director of the Stanford Social Media Lab and a renowned scholar on deception in the digital age, submitted an affidavit at the request of Minnesota Attorney General Keith Ellison, arguing in favor of the law. However, the veracity of his supporting evidence has come under fire, raising serious questions about the integrity of his declaration and potentially impacting the case’s outcome.

The controversy revolves around several citations within Hancock’s affidavit that appear to reference non-existent academic studies. One such citation points to a 2023 study titled "The Influence of Deepfake Videos on Political Attitudes and Behavior," purportedly published in the Journal of Information Technology & Politics. A thorough search of the journal, academic databases, and the specific pages cited reveals no trace of such a study. Instead, entirely different articles occupy the referenced pages. Another citation, flagged by legal scholar Eugene Volokh, refers to a similarly elusive study titled "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance," which also appears to be fabricated.

The plaintiffs challenging the deepfake law, including a conservative YouTuber and Republican state Rep. Mary Franson, argue that these phantom citations bear the hallmarks of "AI hallucinations," suggesting they were generated by large language models like ChatGPT. The lawyers contend that the presence of these fabricated sources casts doubt on the entire declaration, raising concerns about whether other parts of the 12-page document were also AI-generated. This revelation has injected a new layer of complexity into the legal challenge, shifting the focus from the constitutionality of the deepfake law to the credibility of the expert testimony supporting it.

The implications of these questionable citations extend beyond this particular case, touching upon broader concerns about the use of AI in legal proceedings and the potential for misinformation to permeate even expert testimony. Hancock’s expertise in technology and misinformation makes the situation particularly perplexing. If indeed the citations were generated by AI, it raises questions about why he, or someone assisting him, would rely on such a method, especially given the potential for fabrication. The lack of response from Hancock, the Stanford Social Media Lab, and Attorney General Ellison’s office further fuels the controversy, leaving many unanswered questions about the origin and intent behind the suspect citations.

The case highlights the increasing prevalence of AI-generated content and the challenges it poses to verifying information. Frank Bednarz, an attorney for the plaintiffs, points to the irony of the situation: proponents of the deepfake law argue that AI-generated content is particularly insidious because it resists traditional fact-checking methods. Yet, in this case, the alleged AI-generated content within Hancock’s declaration was exposed precisely through fact-checking, demonstrating the power of "true speech" to counter falsehoods. This underscores the ongoing debate about the appropriate response to misinformation, particularly in the digital age, where AI-generated content can blur the lines between reality and fabrication.

The incident involving Hancock’s affidavit is not an isolated one. The legal field has seen a rise in instances where AI tools, particularly ChatGPT, have been misused, with embarrassing and potentially damaging consequences. In 2023, two New York lawyers faced sanctions for submitting a legal brief containing fabricated case citations generated by ChatGPT. While some involved in such incidents have claimed ignorance of AI’s limitations, Hancock’s expertise in the field makes his alleged reliance on fabricated citations all the more puzzling. The episode serves as a cautionary tale about the pitfalls of using AI tools without proper understanding and oversight, particularly in high-stakes contexts like legal proceedings. That Hancock’s declaration concludes with a statement affirming its truthfulness under penalty of perjury adds a further layer of gravity, underscoring the legal ramifications of submitting potentially fabricated evidence.
