In a surprising admission, Jeff Hancock, a Stanford University expert on misinformation, has revealed that he used artificial intelligence (AI) to help draft a court declaration that inadvertently included numerous fabricated citations. The declaration was submitted as part of a legal challenge to a new Minnesota law aimed at prohibiting the use of AI to mislead voters during elections. The law is being contested on First Amendment grounds by attorneys from the Hamilton Lincoln Law Institute and the Upper Midwest Law Center. After discovering the false citations, the opposing counsel asked the judge to exclude Hancock's declaration from the case.
Hancock, who charged the state of Minnesota $600 an hour for his consulting services, acknowledged the mishap, explaining that the inaccuracies stemmed from his use of ChatGPT-4o to assist in drafting his declaration. The Minnesota Attorney General's Office stated in a newly filed document that Hancock did not intentionally mislead the court or the opposing counsel, clarifying that its own attorneys were likewise unaware of the inaccuracies until the opposing lawyers' petition flagged them. The office has asked the judge to permit Hancock to submit a revised declaration with accurate citations.
In a related filing, Hancock defended the use of AI in drafting legal documents, arguing that it is becoming increasingly common in legal practice. He pointed out that generative AI is already being integrated into tools such as Microsoft Word and Gmail, and that tools like ChatGPT are widely used in academia and by students for research and drafting, framing AI as a normal part of contemporary document preparation. His stance reflects a broader debate over the ethical implications of using AI in legal proceedings.
This incident adds to a growing body of case law on the use of AI in legal contexts. Earlier this year, a New York court ruled that lawyers have an obligation to disclose when AI is used to produce expert opinions, a decision that led to the exclusion of an expert declaration that relied on AI-generated math checks. In several other instances, lawyers have been sanctioned for submitting AI-generated legal documents containing false citations, raising questions about the accuracy and ethical use of AI in legal work.
Hancock elaborated on his use of ChatGPT-4o, explaining that he relied on the AI to survey the existing literature on deepfakes and to help draft substantive sections of his declaration. He said the erroneous citations were added unintentionally, attributing the mistake to the AI misinterpreting notes he had left for himself as reminders to add citations later. Hancock emphasized that he never intended for the program to insert citations, an episode that highlights the complexities and potential pitfalls of relying on AI for academic and legal work.
A prominent expert on technology and misinformation, Hancock has previously addressed the impact of AI on communication in publications and talks, including a 2012 TED Talk. That expertise has made him a sought-after consultant in multiple legal cases, though he did not say whether he has used AI in those engagements. While he continues to engage with AI's implications for law and society, ethical concerns remain, with critics pointing to the responsibility attorneys bear for maintaining the integrity of the court system. Frank Bednarz of the Hamilton Lincoln Law Institute has raised concerns about the implications of Hancock's acknowledgment of the fabrications, questioning the professional ethics of how the Attorney General's office handled the erroneous report.