2024: A Year Defined by AI-Driven Disinformation and Election Misinformation

The year 2024 saw an unprecedented surge in AI-generated misinformation, according to an extensive analysis by BOOM. Fabricated images, manipulated videos, and cloned voices were weaponized to spread false narratives affecting everything from health and finance to political discourse. The trend underscores the growing threat of AI-powered disinformation and its potential to manipulate public opinion and erode trust in legitimate sources of information. BOOM’s fact-checking revealed a complex misinformation landscape, with the 2024 Lok Sabha elections serving as a major catalyst for false and misleading claims.

BOOM published 1,291 fact-checks in 2024 across a wide range of topics. Of these, 108 fact-checks (8.35% of the total) addressed AI-generated misinformation, an increase of more than 11% over 2023, underscoring how rapidly AI is being misused to spread disinformation. The Lok Sabha elections dominated the misinformation landscape, followed by Islamophobic narratives and disinformation surrounding the Bangladesh protests. The Muslim community in India remained a primary target, although targeted misinformation against the community declined marginally compared to the previous year. A significant share (40.7%) of the fact-checks involved the recirculation of old, previously debunked claims, demonstrating the persistence of misinformation and the need for ongoing fact-checking.

AI-powered misinformation took various forms, including fabricated images, deepfake video and audio, and cloned voices. Investment scams using deepfakes and voice clones of prominent figures such as Nirmala Sitharaman, Manmohan Singh, and Mukesh Ambani were rampant, often featuring fabricated endorsements of cryptocurrency platforms or other investment schemes that preyed on unsuspecting individuals. The Lok Sabha elections also became a breeding ground for AI-generated misinformation, with deepfakes and voice clones targeting political leaders such as Rahul Gandhi, Narendra Modi, and Mamata Banerjee, as well as celebrities like Ranveer Singh and Aamir Khan. These fabrications aimed to manipulate public perception and influence voting behavior.

AI-generated misinformation also permeated the health sector, with fake endorsements of diabetes cures and arthritis treatments circulating widely. Celebrities, news anchors, and politicians were impersonated using voice cloning and deepfake technology to promote these bogus remedies. Often, these fake endorsements appeared in sponsored videos on social media platforms, further amplifying their reach and potential impact. Virat Kohli emerged as the most frequent target of AI-generated misinformation, with fabricated images and videos falsely depicting his involvement in various events and endorsements. Mukesh Ambani and news anchor Rajat Sharma were also prominent targets.

The 2024 Lok Sabha elections played a pivotal role in the spread of misinformation. BOOM published 307 fact-checks related to the elections, with false narratives continuing to circulate even after the polls concluded. Rahul Gandhi was the most frequent target, facing 24 fact-checks, primarily in the form of smear campaigns. Narendra Modi was the subject of 19 fact-checks, acting as both a target and a source of misinformation. State Assembly elections in Maharashtra, Haryana, Jharkhand, and Jammu and Kashmir also generated a wave of misinformation, with Uddhav Thackeray and Sanjay Raut among the primary targets.

The Bangladesh protests in July 2024 triggered a surge of communal disinformation, with Bangladeshi Muslims becoming the most targeted community. Manipulated videos and images, often taken out of context or sourced from entirely different events, were used to falsely depict attacks on Hindus and incite communal tensions. False narratives about former Prime Minister Sheikh Hasina also circulated widely. These instances of misinformation underscore the dangers of manipulating sensitive events to fuel pre-existing biases and incite violence.

BOOM’s analysis reveals the multifaceted nature of misinformation in 2024. The convergence of election cycles, social unrest, and rapidly advancing AI technology created fertile ground for false and misleading narratives. The analysis emphasizes the urgent need for continued fact-checking, media literacy initiatives, and platform accountability to counter the growing threat of misinformation and protect the integrity of information ecosystems.
