AI Misinformation Expert Falls Prey to AI Hallucination in Minnesota Deepfake Lawsuit
A legal challenge to Minnesota’s law prohibiting the malicious use of deepfakes in political campaigns has taken an ironic turn, highlighting the very dangers the law seeks to address. Jeff Hancock, a Stanford University professor and expert on AI and misinformation, submitted an expert declaration supporting the state’s defense of the law, only for it to emerge that the declaration cited non-existent academic articles fabricated by the AI chatbot GPT-4. The incident underscores growing concern over the reliability of AI-generated content in legal proceedings and the need for rigorous verification.
The case centers on Minnesota’s statute prohibiting the dissemination of deepfakes – manipulated videos or images that appear authentic – with the intent to harm a political candidate or influence an election. Plaintiffs argue that the law infringes on First Amendment rights and have sought a preliminary injunction to prevent its enforcement. Minnesota Attorney General Keith Ellison, in defending the law, submitted expert declarations, including one from Professor Hancock, to underscore the potential threat deepfakes pose to free speech and democratic processes.
However, the defense’s case was undermined when it came to light that Professor Hancock’s declaration contained fabricated citations. Attorney General Ellison acknowledged the errors, attributing them to Professor Hancock’s reliance on GPT-4 for drafting assistance. Professor Hancock admitted to using the AI tool and to failing to verify the generated citations before submitting the declaration under penalty of perjury. Although he maintained that the substantive arguments in his declaration were accurate, the fabricated citations severely damaged his credibility.
The incident has drawn attention to the increasing use of AI in legal research and writing, and the potential pitfalls associated with unchecked reliance on such tools. While acknowledging AI’s potential to revolutionize legal practice and improve access to justice, the court emphasized the critical importance of maintaining human oversight and critical thinking. The judge explicitly warned against abdicating independent judgment in favor of AI-generated content, highlighting the potential for such reliance to negatively impact the quality of legal work and judicial decision-making.
This case echoes a growing number of instances where AI-generated inaccuracies have disrupted legal proceedings. Courts across the country have issued sanctions and rebukes to attorneys who submitted filings containing fabricated citations generated by AI. The Minnesota court joined this chorus, emphasizing the non-delegable responsibility of attorneys to ensure the accuracy of all submitted materials, particularly those signed under penalty of perjury.
In light of the compromised credibility of the original declaration, the court rejected Attorney General Ellison’s request to file an amended version. While acknowledging Professor Hancock’s expertise on AI and misinformation, the judge deemed the damage irreparable, stressing the importance of trust in declarations made under oath and the corrosive effect of false citations on the integrity of legal proceedings.

The court urged attorneys to adopt procedures for verifying AI-generated content, including explicitly asking whether AI was used in drafting witness declarations. The episode stands as a cautionary tale about over-reliance on AI and a reminder of the enduring importance of human judgment, vigilance, and rigorous fact-checking in the legal profession.