Misinformation Expert’s Testimony in Minnesota Deepfake Law Case Under Scrutiny for Suspect Citations

A legal battle over Minnesota's new law banning the use of "deepfake" technology in elections has taken an unexpected turn, with a leading misinformation expert facing accusations of citing fabricated academic sources in his testimony supporting the legislation. Professor Jeff Hancock, founding director of the Stanford Social Media Lab and a renowned scholar on deception in the digital age, submitted an affidavit at the request of Minnesota Attorney General Keith Ellison, arguing in favor of the law. The veracity of his supporting evidence has since come under fire, however, raising serious questions about the integrity of his declaration and potentially affecting the case's outcome.

The controversy revolves around several citations within Hancock’s affidavit that appear to reference non-existent academic studies. One such citation points to a 2023 study titled "The Influence of Deepfake Videos on Political Attitudes and Behavior," purportedly published in the Journal of Information Technology & Politics. A thorough search of the journal, academic databases, and the specific pages cited reveals no trace of such a study. Instead, entirely different articles occupy the referenced pages. Another citation, flagged by legal scholar Eugene Volokh, refers to a similarly elusive study titled "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance," which also appears to be fabricated.

The plaintiffs challenging the deepfake law, including a conservative YouTuber and Republican state Rep. Mary Franson, argue that these phantom citations bear the hallmarks of "AI hallucinations," suggesting they were generated by a large language model such as ChatGPT. Their lawyers contend that the presence of fabricated sources casts doubt on the entire declaration, raising concerns about whether other parts of the 12-page document were also AI-generated. This revelation has injected a new layer of complexity into the legal challenge, shifting the focus from the constitutionality of the deepfake law to the credibility of the expert testimony supporting it.

The implications of these questionable citations extend beyond this particular case, touching on broader concerns about the use of AI in legal proceedings and the potential for misinformation to permeate even expert testimony. Hancock's expertise in technology and misinformation makes the situation particularly perplexing: if the citations were indeed generated by AI, it is unclear why he, or someone assisting him, would rely on a tool well known for fabricating sources. The lack of response from Hancock, the Stanford Social Media Lab, and Attorney General Ellison's office further fuels the controversy, leaving many unanswered questions about the origin of, and intent behind, the suspect citations.

The case highlights the increasing prevalence of AI-generated content and the challenges it poses to verifying information. Frank Bednarz, an attorney for the plaintiffs, points to the irony of the situation: proponents of the deepfake law argue that AI-generated content is particularly insidious because it resists traditional fact-checking methods. Yet, in this case, the alleged AI-generated content within Hancock’s declaration was exposed precisely through fact-checking, demonstrating the power of "true speech" to counter falsehoods. This underscores the ongoing debate about the appropriate response to misinformation, particularly in the digital age, where AI-generated content can blur the lines between reality and fabrication.

The incident involving Hancock's affidavit is not an isolated case. The legal field has seen a rise in instances where AI tools, particularly ChatGPT, have been misused, with embarrassing and potentially damaging consequences. In 2023, two New York lawyers were sanctioned for submitting a legal brief containing fabricated case citations generated by ChatGPT. While some involved in such incidents have claimed ignorance of AI's limitations, Hancock's expertise in the field makes his alleged reliance on fabricated citations all the more puzzling. The episode serves as a cautionary tale about the pitfalls of using AI tools without proper understanding and oversight, particularly in high-stakes contexts like legal proceedings. That Hancock's declaration concludes with a statement affirming its truthfulness under penalty of perjury adds a further layer of gravity, underscoring the potential legal ramifications of submitting fabricated evidence.
