
Misinformation Specialist Utilizes AI to Create Misleading Testimony on AI

By News Room | December 3, 2024 | 4 min read

In a surprising turn of events, Jeff Hancock, a Stanford University expert on misinformation, has acknowledged that he used artificial intelligence (AI) to draft a court document that included several fictitious AI-related citations. The declaration was intended for a legal challenge to a new Minnesota law that prohibits the use of AI to mislead voters in elections. The law is being contested by the Hamilton Lincoln Law Institute and the Upper Midwest Law Center, which argue that it infringes on First Amendment rights. The fabricated citations caught the attention of opposing lawyers, who filed motions asking the court to throw out Hancock’s declaration, raising serious concerns about the accuracy and integrity of expert testimony produced with AI tools.

Hancock’s involvement in the case has been financially lucrative; he billed the Minnesota Attorney General’s Office $600 per hour for his expertise. Following the revelation of the fake citations, the Attorney General’s office stated that Hancock’s mistakes stemmed from his use of ChatGPT (the GPT-4o model), asserting that he did not intend to mislead the court or counsel. The AG’s office was unaware of the inaccuracies until the opposing attorneys highlighted them, prompting a filing asking the court to let Hancock submit a corrected version. The incident has ignited a broader debate about the implications of relying on generative AI in legal documents and the potential for misinformation in expert opinions.

As the legal landscape grapples with the integration of AI, Hancock argues that reliance on generative AI for drafting documents is becoming commonplace. He noted that AI tools are increasingly embedded in widely used software such as Microsoft Word and Gmail, and that ChatGPT is frequently used by academics and students alike for research and drafting, framing his own use of AI within a broader pattern of technological adoption in legal and academic environments. The argument suggests a shift toward accepting AI as a valuable resource, despite its potential for error.

The challenge to the Minnesota law is not the first case to raise questions about the use of AI in legal proceedings. Earlier this year, a New York court ruling required lawyers to disclose the use of AI in expert opinions; in one case, an expert’s declaration was thrown out over undisclosed reliance on Microsoft’s Copilot. Lawyers have also faced sanctions for filing briefs that included AI-generated content with fabricated citations. These precedents underscore the rising scrutiny of the ethics of using AI in legal contexts and reinforce that transparency is paramount.

In elaborating on the specifics of his situation, Hancock explained that he used ChatGPT to examine the academic literature on deepfakes and to help draft key arguments for his declaration. He contended, however, that the erroneous citations resulted from the AI misinterpreting personal notes he had intended to use for later citation. “I did not mean for GPT-4o to insert a citation,” he clarified, pointing to a gap between the user’s intention and the AI’s output. The episode raises important questions about how much responsibility users of such tools bear for the content AI produces.

Hancock, a recognized authority on misinformation and technology, previously gained prominence for his TED talk “The Future of Lying.” He has published more than five papers on AI and communication since ChatGPT’s release, including critical analyses of AI’s capacity for truthfulness, and his expertise has been sought in various legal cases. However, Hancock has remained silent on whether he employed AI in prior cases or whether the AG’s office was informed of his intention to use AI for the Minnesota court document. Scrutiny of the high-profile case continues, with legal experts such as Frank Bednarz raising concerns about the ethics of submitting a report containing inaccuracies and questioning whether doing so squares with attorneys’ professional obligation to maintain integrity in the courtroom.
