Judge slams lawyers for ‘bogus AI-generated research’

By News Room | May 13, 2025 (updated May 15, 2025) | 4 min read

In California, a judge has imposed sanctions on two law firms after they submitted a supplemental brief containing false and misleading AI-generated legal citations. The case is particularly troubling as a milestone for the judicial landscape: both firms leaned on AI in similar ways, and neither caught the fabrications before filing. Judge Michael Wilner's order treats the episode not as an isolated lapse but as part of a wider set of complications and legal questions that undisclosed AI use is creating in the courts.

The judge's stance is firm. His comment that no reasonably competent attorney should outsource such research and writing to AI reflects a growing divide in the law between those who believe in the power of automation and those who see it as a threat to human expertise. The facts of the case illustrate why. The plaintiff's legal representative used AI to generate an outline for a supplemental brief. That outline contained "bogus AI-generated research" when it was sent to another law firm, K&L Gates, which appended the information to the brief it filed. "No attorney at either firm had cross-checked or reviewed that research before filing the brief," the judge writes.

When the judge reviewed the brief, he found that at least two of the authorities cited did not exist at all. Even after K&L Gates resubmitted the brief, it still relied considerably on fabricated material; he issued an order to show cause, and the lawyers ultimately admitted that much of the material beyond the initial two errors was also made up. This is not the first time lawyers have been caught using AI in the courtroom. Former Trump lawyer Michael Cohen cited made-up court cases in a legal document after mistaking Google Gemini, then called Bard, for a "super-charged search engine" rather than an AI chatbot. Another case, involving the Colombian airline Avianca, included a series of phony cases generated by ChatGPT. "The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong," the judge writes. "And sending that material to other lawyers without disclosing its sketchy AI origins realistically put these professionals in harm's way."

That this pattern keeps repeating, each time with a fuller explanation after the fact, points to a persistent blind spot in the legal profession about what generative AI actually produces. This case is no mere replica of the earlier ones, though. By the judge's account, the plaintiff's attorney used Google Gemini to generate the outline that introduced the incorrect information, and the fabricated authorities were plausible enough to initially persuade him: he describes looking up the cited decisions to learn more about them, only to find that they did not exist, and notes that the bogus research nearly made its way into his own order before the fabrications were caught.

This case is important because AI-powered tools are increasingly infiltrating judicial proceedings, and with them, failures of professional diligence. The judge responded by imposing monetary sanctions on the firms involved. The ruling underscores that these incidents are not one-offs but instances of a pattern that has repeated many times without resolution, and it raises the question of whether courts will eventually order lawyers to disclose or abandon their use of AI and rely on human expertise, forcing a fresh look at the timeline of AI's integration into the judiciary. The judge notes that the plaintiff's lawyers are "not the only one" to have made this mistake, which further amplifies the significance of the case.

Copyright © 2026 Web Stat. All Rights Reserved.