Imagine the top legal minds, the kind who rub shoulders with presidents and wrangle national financial titans, suddenly finding themselves in an awkward spot. That’s precisely what happened to Sullivan & Cromwell, a law firm so prestigious it practically has “Wall Street” etched into its very foundations. They had to sheepishly apologize to a federal judge because a document they submitted to court was, well, a bit of a hot mess. We’re not talking about a typo or a misplaced comma; we’re talking about “hallucinations”—the industry term for made-up facts and fabricated case citations dreamed up by artificial intelligence. It’s like asking a super-smart robot to help with your homework, only to have it invent historical events and quote non-existent books. This wasn’t some backroom startup’s blunder; it was a well-established giant of the legal world, caught off guard by the unpredictable nature of AI.
The whole embarrassing ordeal came to light in a U.S. Bankruptcy Court in Manhattan. Andrew Dietderich, a partner at Sullivan & Cromwell, penned a letter to Judge Martin Glenn expressing deep regret for the colossal screw-up. He revealed that rival lawyers were the ones who first sniffed out the AI-generated errors. The firm then compiled a three-page ledger detailing about three dozen mistakes, a significant number for a firm of its caliber. Some of these errors were pure fabrication, quoting passages from real cases that simply don’t exist; others were less dramatic clerical errors the firm said weren’t AI-related. But the fabricated case citations were the real headline-grabbers, exposing a deep vulnerability in the supposedly infallible legal process. It’s easy to imagine the collective gasp and the hurried scramble within the firm as the extent of the AI’s creative storytelling became clear.
Sullivan & Cromwell isn’t just any law firm; it’s a behemoth in the legal landscape, steeped in history and prestige. It has represented big names, including former President Donald Trump in various appeals, and counts Jay Clayton, who went on to become the U.S. attorney for the Southern District of New York, among its former partners. This incident was a stark reminder that even the most esteemed institutions are not immune to the pitfalls of rapidly evolving technology. It’s like discovering that even the most meticulously crafted Swiss watch can go haywire if you introduce a faulty component. This apology wasn’t just a simple “oops”; it was a moment of reckoning, a public admission that technology, despite its promises, can lead even the most experienced professionals down a rabbit hole of misinformation.
This unfortunate episode is part of a larger, unsettling trend that’s sweeping through the legal profession. Lawyers are increasingly turning to AI to sift through mountains of legal research, hoping to save time and resources. But this reliance comes with a significant caveat: AI has a knack for spitting out “legal falsehoods.” This isn’t the first time lawyers have been caught with their metaphorical pants down due to AI. Just last year, a federal judge in Manhattan slapped two lawyers with a $5,000 fine for submitting a brief crammed with made-up cases, all concocted by ChatGPT. The American Bar Association has been urging lawyers to exercise extreme caution when using AI models, emphasizing the need to verify every single result. In his apology letter, Mr. Dietderich admitted that Sullivan & Cromwell’s own internal policies regarding AI use were “not followed,” highlighting a crucial breakdown in their process. It’s a sobering thought: the very tools designed to help streamline and improve efficiency are sometimes creating entirely new and complex problems.
The AI-generated “hallucinations” in Sullivan & Cromwell’s filing were tied to a complex case involving the Prince Group, a Cambodian conglomerate. Its founder, Chen Zhi, is facing serious charges in Brooklyn for allegedly running a global scam operation. When the Prince Group’s British Virgin Islands entities filed for bankruptcy, Sullivan & Cromwell stepped in to represent those overseeing the liquidation of the group’s assets. It was during this process that the AI-fueled errors slipped through. Some of these mistakes were first flagged by lawyers from Boies Schiller Flexner, representing the Prince Group, and made public in a court filing. After this discovery, Mr. Dietderich initiated a thorough review of all other filings in the case, thankfully confirming that the AI hallucinations were isolated to that single document. This meticulous follow-up, while necessary, also speaks to the profound anxiety that these AI blunders are generating within the legal community.
The core of the problem, as revealed by Mr. Dietderich’s letter, seems to be a failure to adhere to the firm’s established protocols. Sullivan & Cromwell reportedly requires its lawyers to complete a training course before they can access any AI tools. The central tenet of that training, a wise dictum for anyone dabbling in AI, is to “trust nothing and verify everything.” This mantra, designed to prevent exactly what happened, was unfortunately overlooked in this instance. It’s a powerful reminder that while technology offers incredible power and potential, human oversight and critical thinking remain absolutely indispensable, especially in fields where accuracy and integrity are paramount. The promise of AI is immense, but its integration into complex professions like law demands rigorous adherence to guidelines and a constant, healthy dose of skepticism. The incident serves as a public service announcement for all professionals: when it comes to AI, user error, or rather the lack of careful verification, can have profound and embarrassing consequences.