Web Stat

Ahead of Assam polls, AI-generated disinformation targeted Muslims, state Congress chief: Study

By News Room | April 7, 2026 | 7 Mins Read

The Invisible War Against Empathy: How AI Disinformation is Rewriting India’s Story

The dust is still settling from the Assam Assembly elections, but beneath the surface of traditional political rhetoric, an entirely new and insidious form of warfare was waged: one fought not with bullets and bombs, but with algorithms and fabricated realities. A report by the Foundation Diaspora in Action for Human Rights and Democracy contains a chilling finding: the 2026 Assam polls were the unfortunate stage for India’s “first industrialized artificial intelligence disinformation operation” in a state election. This wasn’t merely political mudslinging; it was a deeply unsettling campaign designed to redefine who belongs, who is human, and who has a right to exist within the vibrant tapestry of India.

Imagine a carefully constructed world, not carved from stone or sculpted from clay, but woven from pixels and lines of code. In this world, an entire community, the Muslims of Assam, was systematically targeted in a multi-pronged assault that sought to render its members invisible, unheard, and ultimately erased from the collective memory of their own land. The report paints a stark picture of a strategy designed to achieve what it calls the “simultaneous dehumanization, disenfranchisement, displacement, and erasure from cultural memory” of Muslims. This was no sudden, spontaneous outburst of prejudice; it was a meticulously planned operation, a well-oiled machine churning out narratives that poisoned the well of public discourse. The use of AI is what distinguishes it as a particularly dangerous evolution in political manipulation: it moves beyond the limits of human production, allowing the rapid, large-scale, and often undetectable fabrication of content that blurs the line between truth and fiction.

The core of this digital assault revolved around the weaponization of “deepfakes” and AI-generated communal content. Picture a well-known and respected political leader suddenly appearing in a video, face and voice eerily convincing, uttering words they never spoke. In Assam, this nightmare became a reality for the state Congress chief, Gaurav Gogoi. The report identified a staggering 31 confirmed deepfakes, meticulously crafted to portray him as a “Pakistani agent and Muslim sympathizer.” This wasn’t just about discrediting a political opponent; it was about painting an Indian citizen and prominent public figure as an “other,” an enemy from across the border, simply for expressing empathy or advocating for the rights of a minority group. The insidious nature of deepfakes lies in their ability to bypass our innate human skepticism, to create a sense of direct witness that can be incredibly difficult to dislodge, even when presented with evidence of fabrication. These aren’t just misleading headlines; they are full-sensory experiences, designed to create a visceral reaction of distrust and fear.

What makes this situation even more concerning is the apparent lack of accountability from the very institutions designed to safeguard democratic processes and uphold ethical conduct. The report documented a shocking 119 breaches of the model code of conduct – the set of guidelines meant to ensure fair and ethical campaigning during elections – yet the Election Commission, the apex body responsible for overseeing elections, took “no action in any of such cases.” This silence, this inaction, sends a chilling message: that in the face of sophisticated digital manipulation, the traditional mechanisms of oversight are either ill-equipped or unwilling to act. Even social media platforms, the very conduits through which this disinformation flowed, remained passive. Facebook and Instagram, with their vast resources and stated commitment to combating misinformation, allowed deeply harmful, AI-generated content to proliferate without any takedowns or even simple labels indicating its artificial origin. This creates a fertile ground for the erosion of trust, not just in political figures, but in the very information we consume daily.

The alarming aspect of this digital experiment in Assam is that it’s not an isolated incident; it’s a blueprint, a “laboratory” for a wider, more dangerous trend across India. The report explicitly warns that the “model implemented in Assam – voter roll purges, demographic engineering and AI-generated communal content – was being replicated in other parts of the country, including poll-bound West Bengal.” This is where the human impact becomes even more profound. Think of the ordinary citizens, the Matuas, Rajbanshis, and other minority communities in West Bengal, who now face the threat of “deletion from the voter rolls.” This is not a bureaucratic oversight; it’s a systematic effort to disenfranchise groups based on their identity, to strip them of their fundamental right to participate in their own democracy. The study’s finding that “95% of deleted voters in Nandigram are Muslims” illuminates the discriminatory intent behind these “intensive revision exercises.” This isn’t just about winning elections; it’s about fundamentally altering the demographics and political landscape of a nation through the cynical manipulation of bureaucratic processes and the powerful amplification of AI-generated prejudice.

The report unveils what it calls an extensive “disinformation architecture,” a carefully constructed ecosystem of synthetic images, deepfake videos, and AI-generated communal content, all deployed with strategic precision ahead of the polls. Consider the sheer scale of impact: 432 posts on Facebook and Instagram, “very likely” or “likely” to have been AI-generated, collectively garnering an astonishing “45.4 million views and more than 1 lakh (100,000) likes.” These aren’t just isolated anomalies; they represent a tidal wave of fabricated narratives, washing over the minds of millions, shaping their perceptions and influencing their choices. One Instagram account, “politooons,” stands out as a particularly potent example, single-handedly generating “40.2 million views from 102 AI-generated posts” and accounting for a staggering “88% of all such content views.” This illustrates the power of a centralized, well-resourced operation to disseminate disinformation on an unimaginable scale, eclipsing traditional media outlets in its reach and potential influence.

Beyond the deepfakes targeting individuals, there was an even more chilling use of AI to promote overt communal violence. The report highlights a deeply disturbing incident involving the Assam BJP, which uploaded and later removed a post depicting Chief Minister Himanta Biswa Sarma “symbolically firing at images of two Muslim men at point-blank range.” This wasn’t merely a political attack; it was a clear incitement to violence, a visual representation of targeting and dehumanization. The video combined original footage of the Chief Minister handling rifles with AI-generated images of Muslims as targets, blurring the lines of reality and creating a powerful, dangerous message. Even more concerning was Sarma’s subsequent interview, in which he acknowledged the video was “correct” but said it should have identified the men as Bangladeshis. This isn’t a retraction or an apology; it’s a deliberate linguistic pivot, shifting the target from “Miya” Muslims to “Bangladeshis,” a subtle adjustment of framing that maintains the underlying hateful intent.

The term “Miya,” a derogatory label exclusively directed at Bengali-origin Muslims in Assam, highlights the deep-seated prejudice at play. These are people who migrated during the colonial era, now often falsely accused of being undocumented migrants from Bangladesh. To call them “Miya” is not just to insult; it is to brand them as outsiders, as non-citizens, as undeserving of rights and belonging. Ahead of the elections, official social media accounts of Sarma himself and other cabinet ministers were used to disseminate content “calling for the exclusion and economic boycott of ‘Miya’ Muslims.” These hateful calls were then “amplified through paid media at scale,” transforming isolated acts of bigotry into widespread campaigns of discrimination. This is not just about words; it’s about creating an environment where an entire community is systematically marginalized, their livelihoods threatened, and their very existence questioned. It’s a deliberate act of fracturing society, of turning neighbor against neighbor, and of weaponizing technology to achieve a deeply divisive political agenda. The Assam election wasn’t just a political contest; it was a disturbing glimpse into a future where the truth is fluid, where empathy is systematically dismantled, and where the very essence of human rights is under attack from the invisible hand of industrialized AI disinformation.

Copyright © 2026 Web Stat. All Rights Reserved.