Misinformation Specialist Utilizes AI to Create Misleading Testimony on AI

By News Room · December 3, 2024 · 4 Mins Read

Jeff Hancock, a Stanford University expert on misinformation, has acknowledged that he used artificial intelligence (AI) to draft a court declaration that included several fictitious citations related to AI. The declaration was filed in a legal challenge to a new Minnesota law that prohibits the use of AI to mislead voters in elections. The law is being contested by the Hamilton Lincoln Law Institute and the Upper Midwest Law Center, who argue that it infringes on First Amendment rights. The fabricated citations caught the attention of opposing lawyers, who filed motions to dismiss Hancock’s declaration, raising serious concerns about the accuracy and integrity of expert testimony produced with AI tools.

Hancock’s involvement in the case has been financially lucrative; he billed the Minnesota Attorney General’s Office $600 per hour for his expertise. Following the revelation of the fake citations, the Attorney General’s office stated that Hancock’s errors stemmed from his use of ChatGPT (specifically the GPT-4o model), asserting that he did not intend to mislead the court or counsel. The AG’s office was unaware of the inaccuracies until opposing attorneys highlighted them, prompting a filing seeking permission for Hancock to submit a corrected version. The incident has ignited a broader debate about the reliance on generative AI in legal documents and the potential for misinformation in expert opinions.

As the legal landscape grapples with the integration of AI, Hancock argues that relying on generative AI to draft documents is becoming commonplace. He noted that AI tools are increasingly embedded in widely used software such as Microsoft Word and Gmail, and that ChatGPT is frequently used by both academics and students for research and drafting, framing his own use of AI within a broader pattern of technological adoption in legal and academic settings. The argument suggests a shift toward accepting AI as a valuable resource, despite its potential for error.

The challenge to the Minnesota law is not the first dispute over the use of AI in legal proceedings. Earlier this year, a New York court ruling required lawyers to disclose the use of AI in expert opinions; in one case, an expert’s declaration was thrown out over undisclosed reliance on Microsoft’s Copilot. Lawyers have also faced sanctions for filing briefs containing AI-generated content with fabricated citations. These precedents underscore the rising scrutiny of the ethical implications of AI in legal contexts and reinforce that transparency is paramount.

Elaborating on the specifics of his situation, Hancock explained that he used ChatGPT to survey the academic literature on deepfakes and to help draft key arguments for his declaration. He contended, however, that the erroneous citations resulted from the AI misinterpreting personal notes he had intended to fill in with citations later. “I did not mean for GPT-4o to insert a citation,” he clarified, suggesting a miscommunication between the user’s intent and the AI’s output. The incident raises important questions about how much responsibility users of such tools bear for the content AI produces.

Hancock, a recognized authority on misinformation and technology, previously gained prominence for his TED talk “The Future of Lying.” He has published more than five papers on AI and communication since ChatGPT’s release, including analyses of AI’s capacity for truth-telling, and his expertise has been sought in various legal cases. Hancock has, however, remained silent on whether he employed AI in prior cases and whether the AG’s office was informed of his intention to use AI for the Minnesota court document. Scrutiny of the high-profile case continues, with legal experts such as Frank Bednarz raising concerns about the ethics of submitting a report containing inaccuracies and questioning whether attorneys met their professional obligation to maintain integrity in the courtroom.
