AI Missteps in Legal Declarations: Communication Professor Faces Scrutiny for Fabricated Citations

Communication Professor Jeff Hancock has recently found himself at the center of a controversy after admitting that he included fabricated, AI-generated citations in a court declaration concerning deepfake technology. In filings submitted to the United States District Court for the District of Minnesota, Hancock expressed regret for overlooking the so-called “hallucinated citations,” which he had sourced from the AI model GPT-4o while researching a case about a state ban on the use of deepfakes to influence elections. The case has proven contentious, with plaintiffs arguing that the ban violates their free speech rights, drawing significant attention to Hancock’s missteps.

Hancock initially submitted his expert declaration on November 1 in support of the defendant, Minnesota Attorney General Keith Ellison, asserting that deepfakes could exacerbate misinformation and threaten the integrity of democratic institutions. His credibility took a hit, however, when plaintiffs’ attorneys pointed out that some of the citations he included did not correspond to real scholarly articles, prompting accusations that he had relied excessively on AI tools in drafting his court statement. Following these revelations, Hancock sent a follow-up letter to the court, explaining how the inaccuracies occurred and emphasizing that he never intended to mislead anyone involved in the case.

In his admission, Hancock detailed the methodology behind his declaration, explaining that he used GPT-4o alongside Google Scholar to compile relevant literature and citations. He failed, however, to fact-check several AI-generated entries, which turned out to be inaccurate or entirely fictitious. Hancock also acknowledged misattributing the authorship of an existing study, further complicating his position. “I use tools like GPT-4o to enhance the quality and efficiency of my workflow,” he stated, yet that reliance on AI proved detrimental in this instance.

The controversy has raised significant questions about the ethical use of AI in academic and legal contexts, with Hancock openly expressing his regret for any confusion caused by the fabricated citations. He maintains, however, that the substantive arguments of his declaration regarding the risks posed by deepfake technology remain valid despite the citation errors. In the wake of the revelation, the university community and students have reacted with a blend of concern and irony, particularly as Hancock had been teaching his students about the importance of proper citation practices in conjunction with broader discussions of truth and technology.

The day after the incident came to light, Hancock taught his class remotely, as students grappled with the nuances of citation and representation in academic writing. Some students noted the irony of learning about the importance of citing diverse scholars while their professor faced scrutiny for failing to meet the same academic standards. The situation has sparked further discussion about the relationship between technology and accountability in educational settings, particularly as educators increasingly incorporate algorithms and AI tools into their teaching.

As the legal case progresses, Hancock’s predicament serves as a stark reminder of the potential pitfalls of emerging technologies, especially in academic and professional settings. The incident raises urgent questions about the reliability and accountability of AI tools in research and legal work, prompting broader reflection on the ethical implications of integrating such technology into critical discourse on misinformation and public communication. The outcome of the case may not only affect Hancock’s reputation and teaching career but could also set important precedents for how AI-generated content is viewed and used across sectors.
