AI Expert’s Testimony Tossed After Citing AI-Fabricated Research

In a case brimming with irony, a Stanford University professor who specializes in artificial intelligence and misinformation had his expert testimony excluded by a federal judge after it was revealed that he had inadvertently included fabricated information generated by an AI chatbot. Professor Jeff Hancock, founding director of the Stanford Social Media Lab, was retained by the Minnesota Attorney General’s office to provide expert testimony in defense of the state’s law criminalizing AI-generated “deepfake” content intended to influence elections. The lawsuit challenging the law was brought by a state legislator and a YouTuber known for political satire. Hancock’s reliance on an AI chatbot while preparing his declaration, however, led to the inclusion of fabricated research and citations, undermining his credibility and prompting the court to throw out his testimony.

U.S. District Judge Laura Provinzino noted the irony of the situation: Hancock, an expert on the dangers of AI and misinformation, had himself fallen victim to those very dangers. That Hancock has published research on irony only deepened the peculiarity of the circumstances. The judge highlighted the importance of verifying AI-generated content, emphasizing that relying on such technology without exercising critical thinking and independent judgment can harm both the legal profession and the court’s decision-making process.

The errors in Hancock’s declaration came to light when the plaintiffs’ lawyers discovered that a cited study did not exist: its listed authors appeared to be invented, and the citation had likely been generated by a large language model such as ChatGPT. Hancock admitted to using GPT-4o to aid in his research, explaining that the errors likely arose when the chatbot interpreted the word "cite," which he had inserted as a note to himself to add references later, as an instruction to generate citations. He accepted responsibility for the errors, which also included incorrect author attributions for legitimate research, and apologized to the court.

Judge Provinzino acknowledged Hancock’s qualifications as an expert on AI and deepfakes but found that the fabricated material, despite his explanation, had irrevocably damaged his credibility. She emphasized that reliability is essential to expert testimony and noted the time and resources the opposing party had wasted on the flawed submission. Although the Minnesota Attorney General’s office sought to submit a corrected version of Hancock’s declaration, the judge declined and excluded the testimony in its entirety.

The incident highlights a growing concern about the use of AI chatbots in professional settings, particularly the legal field. While these tools promise to streamline legal practice, their tendency to generate plausible but false information, commonly referred to as "hallucinations," poses a significant risk. Hancock’s case is a cautionary tale underscoring the need for careful verification and scrutiny of AI-generated content. The judge’s ruling is a reminder that while AI can be a valuable tool, it cannot replace human judgment and critical thinking.

This is not an isolated incident. In 2023, two lawyers were fined for submitting a legal filing containing fake case citations generated by ChatGPT, an early sign of how pervasive the problem has become within the legal profession. As the use of AI chatbots expands across fields, the need for clear guidelines and safeguards against the spread of misinformation becomes increasingly critical. Judge Provinzino’s ruling adds to a rising chorus of legal professionals and academics advocating for the responsible and ethical use of AI, with verification and independent professional judgment at its core. The case also raises questions about the liability and professional consequences facing those who rely on unverified AI-generated content, particularly in high-stakes settings such as legal proceedings. As AI technology continues to evolve, the legal and ethical implications of its use will remain subjects of scrutiny and debate.
