In a surprising turn of events, Jeff Hancock, a Stanford University expert on misinformation, has acknowledged that he used artificial intelligence (AI) to draft a court declaration that included several fictitious AI-related citations. The declaration was filed in a legal challenge to a new Minnesota law that prohibits the use of AI to mislead voters in elections. The law is being contested by the Hamilton Lincoln Law Institute and the Upper Midwest Law Center, which argue that it infringes on First Amendment rights. The fabricated citations caught the attention of opposing lawyers, who moved to have Hancock's declaration thrown out, raising serious concerns about the accuracy and integrity of expert testimony that relies on AI tools.
Hancock's involvement in the case has been lucrative: he billed the Minnesota Attorney General's Office $600 per hour for his expertise. After the fake citations came to light, the Attorney General's office said Hancock's errors stemmed from his use of the AI model GPT-4o, asserting that he did not intend to mislead the court or opposing counsel. The office was unaware of the inaccuracies until opposing attorneys flagged them, and it has since filed a request to let Hancock submit a corrected declaration. The incident has ignited a broader debate about relying on generative AI to draft legal documents and the potential for misinformation in expert opinions.
As the legal profession grapples with the integration of AI, Hancock argues that using generative AI to draft documents is becoming commonplace. He noted that AI tools are increasingly embedded in widely used programs such as Microsoft Word and Gmail, and that ChatGPT is routinely used by academics and students alike for research and drafting. By framing his use of AI within this broader wave of technological adoption in legal and academic settings, he suggests the tools are gaining acceptance as a legitimate resource despite their potential for error.
The challenge to the Minnesota law is not the first case to raise questions about AI in legal proceedings. Earlier this year, a New York court ruled that lawyers must disclose the use of AI in expert opinions, after an expert's declaration was thrown out over undisclosed reliance on Microsoft's Copilot. Lawyers have also faced sanctions for filing briefs containing AI-generated content with fabricated citations. These precedents underscore the growing scrutiny of the ethics of using AI in legal contexts and reinforce that transparency is paramount.
Elaborating on the specifics of his situation, Hancock explained that he used ChatGPT to survey the academic literature on deepfakes and to help draft key arguments in his declaration. He contended, however, that the erroneous citations resulted from the AI misinterpreting personal notes he had made as markers for citations to be added later. "I did not mean for GPT-4o to insert a citation," he clarified, suggesting a miscommunication between the user's intention and the AI's output. The episode raises important questions about how much responsibility those who use such tools bear for the content the AI produces.
Hancock, a recognized authority on misinformation and technology, previously gained prominence for his TED talk "The Future of Lying." He has published more than five papers on AI and communication since ChatGPT's release, including analyses of AI's capacity for truth-telling, and his expertise has been sought in a range of legal cases. Hancock has remained silent, however, on whether he employed AI in prior cases or whether the AG's office knew he intended to use it for the Minnesota court document. Scrutiny of the high-profile case continues, with legal experts such as Frank Bednarz voicing concerns about the ethics of standing by a report known to contain inaccuracies and about attorneys' professional obligation to maintain integrity in the courtroom.