In an era in which the lines between reality and fabrication are increasingly blurred, a seemingly innocuous image of India’s Prime Minister, Narendra Modi, holding a collection of coconuts gained significant traction online. The picture, circulating amid fervent campaigning for the Kerala Assembly elections, was presented as a genuine snapshot from his visit, painting a relatable, even slightly folksy image of the leader engaging with local culture. Many online users claimed it was a scene from his 2026 election trail in Kerala, with photographers diligently capturing the moment. To a casual scroller, it might have passed as just another political photo-op. Beneath the surface of this seemingly ordinary image, however, lay a fascinating and increasingly prevalent truth: it was entirely artificial, a digital construct produced by Google’s AI tools.
The image’s rapid spread and the accompanying claims quickly caught the attention of fact-checkers. Vishvas News, a leading fact-checking organization, launched a thorough investigation into the authenticity of the viral picture. Its initial steps involved standard digital forensics: searching Google with relevant keywords and running reverse image searches. These searches turned up no corroborating information and no credible news report of any such event. A meticulous check of PM Modi’s official social media channels, typically a prime source for his engagements, likewise yielded no mention or visual evidence of him holding coconuts in Kerala. This lack of factual grounding raised the first red flag, shifting suspicion toward the possibility that the image was digitally generated.
The investigation then delved deeper, employing tools designed to detect AI-generated content. Vishvas News used Google’s own SynthID detector, a technology developed to watermark and identify digital creations originating from Google’s AI models. The result was stark: the detector confirmed the presence of a SynthID watermark in the image with “Very High SynthID confidence.” This finding was a critical piece of evidence. SynthID functions as invisible digital ink embedded within AI-generated images, imperceptible to the human eye but readily detectable by specialized tools. Its presence established that the image was an AI creation rather than a photograph. To corroborate the finding, Vishvas News ran the image through a second, independent tool, Hive Moderation, which likewise concluded, with 99% probability, that the image was AI-generated.
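Detection services like the one described above typically return their verdict as a probability score inside a structured (JSON-style) response rather than a simple yes/no. The sketch below is purely illustrative: the response shape, class names, and the `ai_generated_score` helper are assumptions modeled on generic moderation APIs, not the documented output of Hive Moderation or SynthID; consult the vendor’s own documentation for the real schema.

```python
# Hypothetical sketch: extracting an "AI-generated" probability from a
# detection service's response. The schema below is an assumption for
# illustration only, not the actual Hive Moderation API format.

SAMPLE_RESPONSE = {
    "output": [
        {
            "classes": [
                {"class": "ai_generated", "score": 0.99},
                {"class": "not_ai_generated", "score": 0.01},
            ]
        }
    ]
}

def ai_generated_score(response: dict) -> float:
    """Return the model's probability that the image is AI-generated,
    or 0.0 if the class is absent from the response."""
    classes = response["output"][0]["classes"]
    scores = {c["class"]: c["score"] for c in classes}
    return scores.get("ai_generated", 0.0)

if __name__ == "__main__":
    score = ai_generated_score(SAMPLE_RESPONSE)
    # A score near 1.0 mirrors the "99% probability" verdict reported above.
    print(f"AI-generated probability: {score:.0%}")
```

A fact-checker would treat a high score from one tool as a signal, not proof; as in the investigation above, it is the agreement of independent detectors that makes the conclusion robust.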
The implications of such a sophisticated digital fabrication are significant, particularly in the highly charged atmosphere of an election. When confronted with the evidence, BJP spokesperson Vijay Sonkar Shastri confirmed these suspicions. He articulated a growing concern within political circles, highlighting how opposition parties are increasingly resorting to AI-generated images to influence public perception and, in this case, “tarnish the BJP’s image in the elections.” Shastri underscored that this wasn’t an isolated incident, referencing similar fake images that had previously circulated concerning PM Modi’s visits to other states like Assam. This revelation casts a stark light on the evolving tactics of political campaigning, where technological prowess in disinformation can become as influential as traditional outreach.
The broader context of these fabricated images is woven into the intricate tapestry of India’s electoral landscape. At the time of the image’s viral spread, the Election Commission of India (ECI) had announced the schedule for Assembly elections across several key states, including Assam, West Bengal, Tamil Nadu, Kerala, and the Union Territory of Puducherry. Voting in Kerala was slated for a single phase on April 9, 2026. The atmosphere was ripe for political narratives, genuine or otherwise, to take hold. A report from Dainik Jagran on April 4, 2026, confirmed PM Modi’s actual visit to Kerala, during which he held a public meeting and criticized the Left parties. The existence of a real visit provides fertile ground for AI-generated images to blend seamlessly with real events, making detection harder for the unsuspecting public. The ECI’s announcement on March 15, 2026, from Vigyan Bhawan in New Delhi had laid out the election timelines, underscoring the high stakes and intense political maneuvering. Adding another layer of recent news, in February 2026 the Indian government had approved the official renaming of Kerala to ‘Keralam,’ a detail that situates these political activities within an evolving cultural and linguistic context, and illustrates the fast-moving environment in which AI deception thrives.
Ultimately, the investigation by Vishvas News reached a clear and unequivocal conclusion: the viral image of PM Modi holding coconuts was not a genuine photograph but an artificial construct of Google AI. The case of the Facebook user Javed Ahmed, who shared the image with the caption “Apparel Minister – getting the reel made, Keralam – Keralam… is there any bigger actor than him in the whole world?”, further illustrates how such fabricated images are disseminated. The user, who describes himself as a resident of Srinagar and has over 1,000 followers, has a history of sharing ideologically specific posts, suggesting deliberate intent behind the dissemination. This episode serves as a powerful cautionary tale for the digital age, highlighting the urgent need for media literacy and critical thinking. As AI technology becomes increasingly sophisticated and accessible, the ability to discern truth from sophisticated fabrication will be paramount, particularly in spheres as influential as politics, where perception can often dictate reality. The fight against fake news is no longer just about textual misinformation; it has expanded into the realm of hyper-realistic visual deception, demanding heightened vigilance from individuals and robust fact-checking from organizations dedicated to preserving the integrity of information.

