It is alarming to see how technology, especially artificial intelligence, is being weaponized to undermine democracy and target specific communities. A comprehensive new report by the Diaspora in Action for Human Rights and Democracy (DAHRD) has pulled back the curtain on what it describes as the most sophisticated AI-driven disinformation and exclusion operation ever recorded in an Indian election. This is not a matter of a few doctored images or misleading tweets; it is a meticulously orchestrated campaign ahead of the April 9, 2026, Assam Assembly elections, designed to manipulate public opinion and disenfranchise the state’s Bengali-speaking Muslim community. Digital propaganda, legislative maneuvers, and administrative actions all work in concert to construct a false reality that paints an entire group of people as outsiders and threats. This is not politics as usual; it is a chilling glimpse of a future in which advanced technology can erode the very foundations of a fair and inclusive society.
The sheer scale of the operation is staggering. DAHRD’s investigation uncovered a multi-layered content ecosystem that churned out 432 confirmed AI-generated posts, cumulatively racking up more than 45.4 million views: an industrial-scale disinformation factory that did not merely spread misinformation but manufactured an altered reality in which an entire community is simultaneously dehumanized and stripped of its rights. One of the most disturbing findings is what the report calls a “propaganda-to-policy pipeline,” in which fabricated narratives such as the alarming concept of “Land Jihad” are manufactured online and then translated directly into actual legislation. Real laws are being passed on the basis of pure fiction. A stark example is the February 2026 property restriction law that explicitly barred land sales to Muslims in certain areas. This is not just online rhetoric; it is rhetoric, amplified by AI, that directly affects people’s lives, their ability to own property, and their sense of belonging in their own land. It sets a terrifying precedent in which digital lies pave the way for real-world discrimination.
Adding another layer of unsettling evidence, the report meticulously tracks a deliberate shift in language by Chief Minister Himanta Biswa Sarma, who has reportedly rotated through the terms “Miya,” “Bangladeshi,” and “Encroachment,” seemingly to avoid legal and constitutional accountability while continuing to target the same community. It is a linguistic sleight of hand, designed to create distance from legally problematic terms while preserving the same discriminatory intent. In a telling interview on March 12, Sarma appeared to confirm this strategy, acknowledging that earlier content was “constitutionally and legally wrong” because it had not used the word “Bangladeshi.” The admission lays bare the calculated nature of the campaign: a deliberate attempt to circumvent legal safeguards while continuing to demonize a specific group. It reveals a disquieting awareness of legal boundaries, invoked not to uphold them but to skirt around them for political gain, further isolating and marginalizing a significant portion of the population.
Beyond the disturbing rhetoric, the report exposes a systematic “exclusion architecture”: a four-pronged assault on the Bengali-speaking Muslim community. First, dehumanization through sophisticated AI content that paints the community as an enemy. Second, the removal of nearly a quarter of a million names (approximately 2.43 lakh) from the voter rolls through a “Special Intensive Revision” process, effectively robbing people of their democratic voice. Third, physical exclusion through forced evictions of residents, often celebrated and glorified on social media, adding insult to injury. Fourth, a chilling attempt to erase cultural heritage, exemplified by the targeting of historical figures such as the Sufi saint Azan Fakir in Assamese history. This is not merely about winning an election; it is about systematically dismantling a community’s identity, its right to participate in society, and even its historical presence, leaving its members vulnerable and without a voice.
The campaign’s tentacles reached the highest echelons of power, with DAHRD directly implicating high-level political figures in spreading the fabrications. The report documents 31 confirmed deepfakes aimed at opposition candidate Gaurav Gogoi, including a brazen video posted from a verified Cabinet Minister’s handle that falsely branded Gogoi a Pakistani agent. This is not ordinary political mudslinging; it is a deliberate attempt to destroy an opponent’s reputation and legitimacy through outright fabrication. Nor were personal lives spared. The campaign stooped to unprecedented gendered disinformation, targeting Gogoi’s wife, Elizabeth Colburn, a private individual, with at least six AI-fabricated intimate and communal scenarios. The emotional toll and reputational damage of such a vicious, AI-driven smear campaign are hard to overstate, especially when it targets someone not directly involved in politics. It reveals a disturbing willingness to cross every ethical line, using cutting-edge technology to inflict maximum damage and silence dissent.
Perhaps the most distressing aspect of the exposé is the glaring institutional failure it lays bare. Despite 119 documented breaches of the Model Code of Conduct, 84 of them classified as high severity, the Election Commission of India reportedly took no enforcement action whatsoever, and the social media platforms through which the toxic content flowed executed zero content takedowns. The watchdogs, in effect, stood by while a sophisticated disinformation machine operated with impunity. DAHRD issues a grim warning: Assam, it argues, has served as a testing ground, a laboratory for these dangerous techniques. The report points to the alarming fact that a similar administrative architecture has already been deployed to suspend more than 10 million voters in West Bengal, suggesting a scalable and repeatable model for voter disenfranchisement. Its conclusion is a chilling wake-up call: without urgent and decisive intervention, the widening chasm between the unchecked production of AI-generated propaganda and the slow pace of democratic accountability will compromise the integrity of the 2029 general elections, threatening the very future of free and fair democratic processes.

