Stanford Expert’s AI Misstep Raises Concerns in Legal Case Against Deepfakes

In a surprising revelation, Professor Jeff Hancock, a well-regarded authority on misinformation and the founder of the Stanford Social Media Lab, has admitted to using artificial intelligence (AI) in drafting an expert declaration that turned out to contain fabricated evidence in a federal court case. Hancock was enlisted by Minnesota Attorney General Keith Ellison to defend a state law that penalizes election-related deepfakes. His declaration, portions of which were generated with ChatGPT, was found to contain false information, with serious implications for the legal proceedings. The incident has raised alarm about the reliability of AI-generated content, particularly in sensitive contexts such as legal testimony.

The plaintiffs challenging the Minnesota law include conservative content creator Christopher Kohls, known for his spoof videos, and Republican Minnesota Rep. Mary Franson. They argue that the law, revised in 2024, unlawfully restricts free speech. The plaintiffs' legal team flagged Hancock's declaration for citing a fictitious study attributed to "Huang, Zhang, and Wang," raising suspicions that Hancock had relied on AI to draft parts of the 12-page document. As the legal battle unfolded, concerns about the accuracy of Hancock's claims mounted, leading to calls for the declaration to be dismissed as riddled with potential misinformation.

Under scrutiny, Hancock acknowledged that his declaration contained two additional instances of AI-generated "hallucinations" presenting misleading text. The fabrications were not limited to a concocted study; the AI also invented a nonexistent article attributed to made-up authors. In his defense, Hancock pointed to his extensive expertise and the broad research he has conducted on misinformation and its psychological implications. He said he had used ChatGPT to assist with his research, and that the false citations were generated inadvertently while he was attempting to compile legitimate academic references.

Despite Hancock's explanations, the plaintiffs' attorneys accused him of perjury, since he had sworn to the accuracy of sources that were ultimately found to be fabricated. While Hancock maintained that these discrepancies did not undermine the scientific evidence or his opinions, the incident has fueled ongoing debates about the role of AI in academia and the legal system. A hearing is set for December 17 to address the validity of Hancock's expert declaration and its ramifications for the case against the Minnesota law.

The fallout from Hancock's admission raises broader questions about the use of AI in professional settings, particularly the legal field. Notably, Hancock's predicament is part of a troubling trend: New York attorney Jae Lee recently faced disciplinary consequences after citing a fabricated case generated by ChatGPT in a medical malpractice lawsuit. That episode further underscores the risks of relying on AI in professional domains where accuracy is essential.

As the case progresses, Stanford University has yet to respond regarding possible disciplinary action against Hancock. The implications may extend beyond Hancock himself, prompting closer examination and potentially stricter rules on AI's role in producing reliable scholarship and expert testimony. The legal challenges posed by AI-generated material could spur critical discussions about ethics, accountability, and the guidelines needed to safeguard the integrity of both legal and academic practice in an increasingly AI-dependent world.
