This story is a wild ride: part spy thriller, part tech-savvy prank, part legal drama. At its heart are two brothers from the Judean Hills, now facing serious espionage charges in Israel. But this isn’t your typical spy novel; it’s a tale punctuated by ChatGPT, fake identities, and a surprising twist of patriotism.
Let’s start with the central figure, the younger brother, whose name has not yet been released to the public. He and his older brother are accused of acting as agents for Iran. The sum involved? A hefty NIS 100,000, which he allegedly received from Iranian handlers. The shocking part: much of the information he supplied was fabricated, a cunning concoction of his own making, heavily aided by the latest AI tools: ChatGPT, Grok, and Gemini. Yes, you read that right, AI in the service of alleged espionage, or perhaps counter-espionage, depending on whose side of the story you believe.
The legal proceedings have been under a strict press embargo, only recently lifted, revealing the intricate web of deception. It all began innocently enough, or so it seemed, in August. The main suspect received a Telegram message, a seemingly innocuous offer to “make money.” But he was no fool; he suspected the sender was an Iranian agent. Instead of recoiling, he leaned in, embracing a persona. He adopted a fake name, spun a yarn about being a bright computer science student on the verge of joining the elite IDF Intelligence Directorate’s Unit 8200 – a unit renowned for its signal intelligence. This was his first masterstroke: he gave his handler exactly what they wanted to hear.
His deception deepened as he invented a fictional friend within Unit 8200. This “friend” was based on a real person whose identity card and driver’s license he’d found online. He even created fake conversations, complete with screenshots, to convince the Iranian agent that he was diligently trying to recruit his “friend.” The game escalated when he set up a Telegram group, bringing together the Iranian agent, himself, and his fictional “soldier” in a seemingly authentic exchange. When the agent demanded verification of the “soldier’s” identity, the accused initially sent an unrelated Israeli citizen’s video and driver’s license. Unsatisfied, the agent pressed for a photo of the “soldier” holding an ID. Our tech-savvy protagonist then used AI to create a convincing image of his fake soldier making an “okay” hand gesture. Further demands for proof of service in Unit 8200 were met with a doctored document he found online, carefully edited to include his fabricated soldier’s details.
The narrative takes a darker turn with the crash of Iranian President Ebrahim Raisi’s helicopter. The Iranian agent, presumably seeking confirmation, and possibly evidence of Israeli involvement, contacted the defendant. The defendant, using ChatGPT, concocted a document detailing a fabricated Israeli role in the incident. When the agent continued to press for details about the fictional soldier, the defendant again turned to ChatGPT, generating answers about Unit 8200 and the soldier’s role and compiling them into a secure PDF. Later, after learning about mapping activities from a real military acquaintance, the defendant updated the agent, claiming his “soldier” worked in Unit 8200’s mapping department using “advanced AI and satellite assistance.” This led him to use Google Maps to identify various targets across Iran, from the Tehran airport to a senior official’s residence and a suspected weapons factory, and to relay their coordinates to the agent. Amid Iranian protests, he even used Gemini to identify symbolic pro-regime sites for a potential US strike, further underscoring his impressive, if unsettling, use of AI.
Beyond the elaborate AI-powered charade, the defendant also spun a completely false narrative about an Iranian citizen he found on Telegram, alleging that this citizen was collaborating with Israel in the assassination of Iranian regime officials. Using Grok, another AI tool, he created a detailed backstory, claiming the Iranian could operate a drone, ride a motorcycle, and had been recruited via social media. This fabrication sadly led to the real citizen’s arrest, although they were later cleared. In a final act of alleged deception, after overhearing a military acquaintance discuss a potential targeting of Iranian infrastructure, the defendant combined that tidbit with information from Telegram channels to tell his Iranian handler about an impending Israeli-American attack, even specifying each country’s role.
What makes this story even more compelling is the defense’s audacious claim. The defendant’s lawyer vehemently argues that this is “an outrageous indictment.” He portrays the brothers not as spies, but as “patriotic Zionist brothers who sought to trick the Iranians.” In his words, “The Jewish mind is known for inventing patents, and as loyal sons of the Start-Up Nation, they sold fabricated information to the Iranians using ChatGPT in exchange for money.” He concludes by suggesting the brothers deserve the “Israel Prize for their contribution to the nation’s security,” essentially reframing their actions as a sophisticated, AI-driven counter-intelligence operation. This narrative offers a fascinating, albeit legally precarious, perspective on the intersection of modern technology, international conflict, and individual agency, leaving the court, and the public, to grapple with the true nature of their intent.