Web Stat
Misinformation

Misinformation Specialist Utilizes AI to Create Misleading Testimony on AI

By News Room · December 3, 2024 · 4 Mins Read

In a surprising turn of events, Jeff Hancock, a Stanford University expert on misinformation, has acknowledged that he used artificial intelligence (AI) to draft a court document that included several fictitious citations related to AI. The declaration was intended for a legal challenge to a new Minnesota law that prohibits the use of AI to mislead voters in elections. The law is being contested by the Hamilton Lincoln Law Institute and the Upper Midwest Law Center, which argue that it infringes on First Amendment rights. The fabricated citations caught the attention of opposing lawyers, who filed motions to dismiss Hancock’s declaration, raising critical concerns about the accuracy and integrity of expert testimony that relies on AI tools.

Hancock’s involvement in the case has been financially lucrative: he billed the Minnesota Attorney General’s Office $600 per hour for his expertise. Following the revelation of the fake citations, the Attorney General’s office stated that Hancock’s mistakes stemmed from his use of ChatGPT running the GPT-4o model, asserting that he did not intend to mislead the court or legal counsel. The AG’s office was unaware of the inaccuracies until the opposing attorneys highlighted them, prompting a filing to allow Hancock to submit a corrected version. The incident has ignited a broader debate about the implications of relying on generative AI in legal documents and the potential for misinformation in expert opinions.

As the legal landscape grapples with the integration of AI, Hancock argues that the reliance on generative AI for drafting documents is becoming commonplace. He noted that various AI tools are increasingly embedded in widely-used software programs like Microsoft Word and Gmail. Hancock pointed out that ChatGPT is frequently utilized by both academics and students for research and drafting purposes, thereby framing his use of AI in a broader context of technological adoption within legal and academic environments. This argument suggests a shift toward acceptance of AI as a valuable resource, despite the potential for error.

The challenge over the Minnesota law is not the first dispute to surface regarding the use of AI in legal proceedings. Earlier this year, a New York court ruling required lawyers to disclose the use of AI in expert opinions, with one case leading to an expert’s declaration being thrown out over undisclosed reliance on Microsoft’s Copilot. There have also been instances of lawyers facing sanctions for including AI-generated content with fabricated citations in their briefs. These precedents underscore the rising scrutiny of the ethical implications of leveraging AI in legal contexts, reinforcing the notion that transparency is paramount.

In elaborating on the specifics of his situation, Hancock explained that he utilized ChatGPT’s capabilities to examine academic literature on deep fakes and help draft key arguments pertaining to his declaration. However, he contended that the erroneous citations resulted from a misunderstanding by the AI, which misinterpreted personal notes he had intended to use for later citation. He clarified, “I did not mean for GPT-4o to insert a citation,” suggesting an element of miscommunication between the user’s intention and the AI’s output. This incident raises important questions about the degree of responsibility that individuals using such tools bear for the content produced by AI.

Hancock, a recognized authority on misinformation and technology, previously gained prominence for his TED talk “The Future of Lying.” Having published more than five works on AI and communication since ChatGPT’s release, including critical analyses of AI’s capacity for truth-telling, he has been sought as an expert in various legal cases. However, Hancock has remained silent on whether he employed AI in prior cases and on whether the AG’s office was informed of his intention to use AI for the Minnesota court document. Scrutiny of the high-profile case continues, with legal experts such as Frank Bednarz voicing concerns about the ethics of submitting a report containing inaccuracies and questioning whether attorneys met their professional obligation to maintain integrity in the courtroom.

Copyright © 2026 Web Stat. All Rights Reserved.