Web Stat
Stanford Professor Jeff Hancock Allegedly Utilized AI to Reference Fabricated Study

By News Room · December 3, 2024 · 3 min read

In a contentious legal battle over Minnesota’s recent ban on political deepfakes, allegations have surfaced against Stanford University professor Jeff Hancock, a noted expert on misinformation. Hancock submitted an expert declaration in support of the law, which Minnesota Attorney General Keith Ellison has cited in his arguments. The case has drawn attention because the plaintiffs, including satirical conservative YouTuber Christopher Kohls, claim the legislation infringes on their free speech rights. Hancock’s declaration is now under scrutiny, however, amid accusations that it relies on a fabricated study he allegedly generated using artificial intelligence.

The controversy centers on a study Hancock cited, titled “The Influence of Deepfake Videos on Political Attitudes and Behavior.” The plaintiffs’ attorneys say this study does not exist: while the Journal of Information Technology & Politics is a legitimate publication, it has never run an article by that name. The citation calls the credibility of Hancock’s findings into question, particularly since the lawyers contend that the pages referenced actually belong to other, unrelated research. They suggest the citation may have originated as a “hallucination,” the term for fabricated output produced by AI language models such as ChatGPT.

Further challenges to Hancock’s testimony arose when the attorneys searched popular and academic engines, including Google and Google Scholar, for any record of the alleged study. The search yielded nothing, leading them to conclude that the article has no online presence and is likely an AI fabrication. The plaintiffs argue that Hancock’s failure to verify his citations undermines the integrity of his entire declaration, pointing to the absence of robust methodology or analytical reasoning in his arguments, particularly those relied upon by Attorney General Ellison.

In their 36-page memorandum, the plaintiffs argue that reliance on this fictitious citation directly undercuts the value of Hancock’s expert opinion: if parts of the declaration are fabricated, the reliability of the whole document is in serious doubt. They are therefore asking the judge to strike Hancock’s declaration entirely and to investigate the origin of the purported fabrication, which they say may warrant further legal consequences.

As the case unfolds, the implications of AI-generated content in legal contexts are being scrutinized. The incident underscores significant concerns about the accuracy and trustworthiness of AI tools, especially those used in sensitive areas such as legal testimony and expert analysis. The ramifications of this situation could extend beyond the immediate case, potentially impacting how AI and misinformation are perceived and regulated in academic and legal spheres.

Fox News Digital has sought comment from Attorney General Keith Ellison, Professor Jeff Hancock, and Stanford University regarding these allegations. As this pivotal case progresses, the intersection of technology, law, and free speech is prompting a larger conversation about navigating misinformation in the digital age, particularly as AI continues to advance and spread into new sectors. The outcome matters not only for Kohls and his associates but may also set precedents for future cases involving technological ethics and legal accountability.
