Web Stat
Misinformation Specialist Utilizes AI to Create Misleading Testimony on AI

By News Room | December 3, 2024 | 4 Mins Read

In a surprising turn of events, Jeff Hancock, a Stanford University expert on misinformation, has acknowledged that he used artificial intelligence (AI) to draft a court document that controversially included several fictitious citations related to AI. The declaration was intended for a legal challenge regarding a new Minnesota law that prohibits the use of AI to mislead voters in elections. The law is being contested by the Hamilton Lincoln Law Institute and the Upper Midwest Law Center, who argue that it infringes on First Amendment rights. The misattributed citations caught the attention of opposing lawyers, who subsequently filed motions to dismiss Hancock’s declaration, raising critical concerns over the accuracy and integrity of expert testimonies involving AI tools.

Hancock’s involvement in the case has been financially lucrative; he billed the Minnesota Attorney General’s Office $600 per hour for his expertise. Following the revelation of the fake citations, the Attorney General’s office stated that Hancock’s mistakes stemmed from using the AI software ChatGPT-4o, asserting that he did not aim to mislead the court or the legal counsel. The AG’s office was unaware of the inaccuracies until the opposing attorneys highlighted them, prompting a filing to allow Hancock to submit a corrected version. This incident has ignited a broader debate about the implications of relying on generative AI in legal documents and the potential for misinformation in expert opinions.

As the legal landscape grapples with the integration of AI, Hancock argues that reliance on generative AI for drafting documents is becoming commonplace. He noted that AI tools are increasingly embedded in widely used software such as Microsoft Word and Gmail, and that ChatGPT is frequently used by both academics and students for research and drafting, framing his own use of AI within the broader context of technological adoption in legal and academic environments. This argument suggests a shift toward acceptance of AI as a valuable resource, despite its potential for error.

The challenge to the Minnesota law is not the first controversy over the use of AI in legal proceedings. Earlier this year, a New York court ruling mandated that lawyers disclose the use of AI in expert opinions, and in one case an expert’s declaration was thrown out over undisclosed reliance on Microsoft’s Copilot. Lawyers have also faced sanctions for filing briefs that contained AI-generated fabricated citations. These precedents underscore the rising scrutiny of the ethics of leveraging AI in legal contexts and reinforce that transparency is paramount.

In elaborating on the specifics of his situation, Hancock explained that he used ChatGPT to survey academic literature on deepfakes and to help draft key arguments for his declaration. He contended, however, that the erroneous citations resulted from the AI misinterpreting personal notes he had intended to use for later citation. He clarified, “I did not mean for GPT-4o to insert a citation,” suggesting a miscommunication between the user’s intention and the AI’s output. The incident raises important questions about how much responsibility individuals who use such tools bear for the content the AI produces.

Hancock, a recognized authority on misinformation and technology, previously gained prominence for his TED talk “The Future of Lying.” With more than five publications on AI and communication since ChatGPT’s release, including critical analyses of AI’s capacity for truth-telling, his expertise has been sought in various legal cases. He has, however, remained silent on whether he employed AI in prior cases or whether the AG’s office was informed of his intention to use AI for the Minnesota court document. Scrutiny of the high-profile case continues, with legal experts such as Frank Bednarz raising concerns about the ethics of submitting a report containing inaccuracies and about attorneys’ professional obligation to maintain integrity in the courtroom.

Copyright © 2025 Web Stat. All Rights Reserved.