Imagine being a lawyer for nearly four decades – a seasoned professional, respected for your experience. Now, picture being repeatedly called out, not for a lack of legal acumen, but for something entirely new, something that feels a bit like a prank gone wrong, only with serious consequences. That’s the story of Raja Rajan, a Cherry Hill attorney who found himself in hot water, again, for using artificial intelligence in a way that’s causing headaches in the legal world.
This isn’t just about a typo or a forgotten footnote. Rajan used AI to generate case citations for his court filings, but the AI, in its infinite wisdom (or perhaps, lack thereof), simply made them up. The legal community has a name for this: “AI hallucinations.” It’s like asking a brilliant but slightly mischievous child to recall a story, and instead of remembering facts, they start inventing details that sound plausible but aren’t real. The judge overseeing Rajan’s case, Kai N. Scott, didn’t find it amusing.
The judge ordered Rajan to pay a $5,000 fine and, perhaps more significantly, to go back to school. He needs to take courses specifically on AI and legal ethics, and prove he’s already taken other relevant classes. This isn’t a small sum for Rajan. He’s already shelled out over $73,500 for various other violations, including a previous $2,500 fine for similar “fake citation” blunders. It’s a painful reminder that even the most experienced professionals can stumble when new technology enters the fray.
What makes this situation particularly telling is that Rajan had asked for a much smaller fine, a mere $950. He even had the audacity to suggest this after admitting he couldn’t explain why he hadn’t bothered to verify the accuracy of the citations before submitting them to the court. Judge Scott, understandably, wasn’t having it. Her rejection of the reduced fine sent a clear message: repeating the same mistake, especially one that undermines the integrity of the legal process, won’t be met with a slap on the wrist.
It’s a stark illustration of the growing pains we’re experiencing as AI becomes more integrated into our lives and professions. While AI can be a powerful tool, it’s not a substitute for human diligence and critical thinking. For lawyers like Rajan, this means understanding that relying blindly on AI, especially for something as fundamental as legal precedent, can lead to embarrassing and costly consequences. It’s a wake-up call for the legal community: embrace AI, but always, always verify its output.
Ultimately, this ongoing saga with Raja Rajan is more than just a cautionary tale for one lawyer. It speaks to a broader challenge facing many professions in the age of AI. How do we leverage the immense power of artificial intelligence without sacrificing accuracy, ethics, and fundamental human responsibility? Rajan’s repeated sanctions serve as a potent reminder that the human element – the critical analysis, the verification, the ethical compass – remains indispensable, even when surrounded by the most sophisticated algorithms.