Imagine we’re chatting over coffee, and I’m walking you through what’s happening to Shashi Tharoor – someone who, for many, embodies a certain elegance and intellect in Indian politics.
First off, picture Shashi Tharoor – a man known for his articulate speech, his vast vocabulary, and his smooth, almost academic demeanor. He’s not just a politician; he’s an author, a former international diplomat, and someone who often represents India on the global stage. So when someone tampers with his image, it isn’t just a personal attack; it feels like an assault on a symbol. And that is exactly what has happened. Tharoor has gone to the Delhi High Court because he has been caught in the crosshairs of what his suit calls a “sophisticated and malicious” campaign. Imagine waking up to find videos of yourself online that look and sound exactly like you, but in which you say things you never said – like praising Pakistan, a statement that, in the Indian political landscape, and especially for someone of his stature, is incendiary. These aren’t badly Photoshopped images; they are AI-generated “deepfakes,” realistic enough to blur the line between truth and fiction. It is as though a digital doppelganger has been created, and that doppelganger is out there spewing political dynamite, aiming to tarnish his reputation and, as his legal team puts it, to question his “patriotic credentials.” For a man who has served as Minister of State for External Affairs and chaired the Parliamentary Standing Committee on External Affairs, such accusations cut deep, damaging not just his personal standing but potentially India’s image on the world stage.
Now, the court’s response. On a recent Friday, Justice Mini Pushkarna took the matter very seriously. She didn’t just hear arguments; she issued summons to the big players – X (formerly Twitter) and Meta Platforms (the parent company of Facebook and Instagram) – along with the Indian government itself. That is no small step; it shows the court understands the gravity of the situation. More importantly, the judge indicated that she would issue an “interim order” in Tharoor’s favor. Think of an interim order as a quick, temporary fix – a tourniquet applied while a long-term solution is worked out. Here, it means the court is preparing to tell these social media giants to take the fake videos down, promptly. The order is meant to protect his “personality and publicity rights” – essentially, his right to control his own image and how it is used, especially where money or public opinion is at stake. It is a significant step because it acknowledges that an individual’s digital likeness can be gravely harmed by AI, and that such harm demands immediate judicial intervention.
Imagine being Shashi Tharoor’s lawyer, Amit Sibal, standing in court and trying to convey how damaging this is. He emphasized that these aren’t random, isolated incidents. Unknown individuals or groups are repeatedly hijacking Tharoor’s distinctive face, his eloquent voice, and even his characteristic gestures – all the things that make him, him – to create convincing yet utterly false audio-visual pieces. Sibal highlighted Tharoor’s past as a former external affairs minister, arguing that this isn’t just about one man; it “matters to India’s standing as well.” He raised a chilling point: “It is liable to be misused by foreign states.” Consider the potential for international mischief – if a respected Indian figure can be made to “say” things that align with a rival nation’s agenda, it could create diplomatic headaches and sow confusion on a global scale. Even reputable news organizations, like India Today, have stepped in, publicly flagging these videos as fake. But here’s the kicker: despite those warnings, the content keeps circulating. It’s a game of whack-a-mole: remove one fake video, and another pops up, leaving a lingering, false impression in the public mind. This persistent re-emergence highlights the hydra-headed nature of digital disinformation, which is incredibly difficult to contain without systemic legal mechanisms.
According to the lawsuit, this deepfake saga isn’t new; it dates back to around March 2026, a period when Tharoor was heavily involved in campaigning for the Kerala Legislative Assembly elections. That timing matters: this wasn’t a random prank but a targeted attack at a sensitive political moment. The legal argument is that the unauthorized “cloning and exploitation” of Tharoor’s likeness wasn’t merely malicious; it was a calculated attempt to distort public perception and interfere with the democratic process. The people behind it, the lawsuit alleges, “weaponized machine learning” – using AI to replicate Tharoor’s unique way of speaking, his vocabulary, and his specific mannerisms, making the deepfakes incredibly convincing and therefore incredibly damaging. It is as though they studied him, learned his digital DNA, and then deployed a perfectly mimicked impostor to spread lies. That level of sophistication marks a worrying evolution in strategies engineered to undermine public trust and the integrity of electoral processes.
What makes this even more frustrating is how elusive these deepfakes are. While some of the offending URLs – the direct links to the fake videos – have been taken down in the past (after police complaints and formal grievances under India’s IT Rules), they don’t stay down. As Amit Sibal pointed out, the content frequently resurfaces, popping up at new links, on new platforms, or re-edited slightly to bypass detection. It’s a constant cat-and-mouse game. That same Friday brought a small victory: counsel for Meta informed the court that some of the problematic content on Instagram had finally been made inaccessible. But one step forward often feels like two steps back when you’re battling an invisible enemy that keeps mutating. This relentless re-emergence underscores the need for robust platform accountability and proactive measures, rather than reactive takedowns, to genuinely tackle the spread of such harmful content.
Shashi Tharoor, unfortunately, isn’t alone in this digital nightmare. He joins a growing list of prominent individuals seeking legal help against the misuse of their image through AI. The Delhi High Court has been quite active in this area, granting similar interim relief to protect the “personality rights” of a diverse group of public figures: Bollywood heavyweights like Aishwarya Rai Bachchan, Abhishek Bachchan, Salman Khan, Sonakshi Sinha, Allu Arjun, and Vivek Oberoi. And it isn’t just actors; athletes like cricketer Gautam Gambhir, spiritual leaders like Sri Sri Ravi Shankar, and political figures like Andhra Pradesh Deputy CM Pawan Kalyan, alongside various journalists and podcasters, have gone to court for similar reasons. This trend highlights a critical juncture for society: as AI technology becomes more accessible and sophisticated, legal frameworks need to catch up, quickly, to protect individuals from digital identity theft and the weaponization of their likeness. The interim order expected in Tharoor’s case would add to this growing body of relief, providing a framework for the immediate and widespread removal of these deepfakes across major digital platforms and, hopefully, a clearer path for others facing similar attacks.

