
Misinformation Specialist Blames AI for the False Information He Cited in Support of Anti-Misinformation Legislation

By News Room | December 4, 2024 | 3 Min Read

Jeff Hancock, a communications professor at Stanford University, has found himself at the center of controversy over fabricated citations in a legal affidavit he filed in support of Minnesota’s anti-misinformation law. As reported by SFGate, Hancock says the inaccuracies arose inadvertently while he was using a new version of ChatGPT. He had intended the AI tool to insert the placeholder text “[cite]” in specific paragraphs, planning to add proper references later; instead, the model generated non-existent citations, which ended up in the filed affidavit.

The Minnesota Attorney General’s Office, which retained Hancock’s services, has defended the professor, stating that he had no intention to mislead the court or opposing counsel by including these AI-generated errors. This incident highlights the growing complexities surrounding the integration of artificial intelligence in professional and academic contexts, especially regarding the reliability and accuracy of information produced by such models. The situation raises significant questions about accountability when AI tools are misused or misconfigured.

Hancock’s affidavit was central to the legal defense of Minnesota’s anti-misinformation law, passed in 2023, which aims to curb misleading information about electoral processes and the distribution of deepfake content. The law is currently facing a legal challenge from opponents who argue that it infringes on freedom-of-speech protections, litigation that underscores the tension between combating misinformation and upholding constitutional rights.

In light of the fabricated citations, Hancock has submitted an amended affidavit to the court, correcting the errors and clarifying his original statements in support of the Minnesota law. His swift response acknowledges the high stakes of legal proceedings, particularly those concerning regulations that must balance free expression against the promotion of accurate information in the public sphere.

The incident serves as a cautionary tale about the pitfalls of using AI technologies in sensitive fields such as law and public policy. As communication increasingly relies on digital tools and artificial intelligence, professionals must rigorously scrutinize the outputs these systems generate. Reliance on AI in scholarly and legal contexts can have unintended consequences, as Hancock’s experience demonstrates, prompting calls for improved oversight and better training for users of these technologies.

The Hancock case illustrates the complex interplay between artificial intelligence, misinformation, and the legal frameworks designed to combat false narratives. As society grapples with the rapid evolution of these technologies, robust protocols for ensuring accurate information dissemination are urgently needed. Ultimately, the responsibility lies with users to remain vigilant and critical of the tools at their disposal, particularly in high-stakes areas such as law, where misinformation can carry severe consequences.
