The Rise of AI-Powered Disinformation: A 2024 Retrospective

The year 2024 marked a significant escalation in the use of artificial intelligence for malicious purposes, particularly in the dissemination of disinformation. AI-generated content, ranging from deepfake videos to manipulated images and audio, flooded online platforms, blurring the lines between reality and fabrication. This technologically advanced form of deception served as a potent weapon for sowing discord, manipulating public opinion, and perpetrating scams. A Rappler investigation revealed that 12% of the fact-checks conducted throughout the year involved AI-generated or manipulated content, with a significant portion focusing on health-related misinformation. This surge in AI-driven disinformation underscores the growing need for robust detection and countermeasure strategies.

Targets of AI Disinformation: From Public Figures to Ordinary Citizens

The reach of AI-driven disinformation extended across a broad spectrum of individuals, from high-profile figures to everyday citizens. Public figures, including news anchors, medical professionals, celebrities, politicians, athletes, and religious leaders, became prime targets. Disinformation networks employed sophisticated tactics, including mimicking credible news outlets to bolster the believability of their fabricated content. AI-generated news segments featuring fabricated interviews with health practitioners endorsing products became commonplace. Dr. Willie Ong, a senatorial candidate, and his wife, Dr. Liza Ong, both prominent medical figures, were repeatedly targeted with AI-generated materials falsely attributing endorsements of health products to them. The widespread targeting of these individuals highlights the indiscriminate nature of AI-driven disinformation campaigns.

Journalists, Politicians, and Celebrities in the Crosshairs

The pervasive nature of AI-driven disinformation also ensnared prominent journalists, with news anchors such as Jessica Soho and Mel Tiangco of GMA Network, and Karen Davila of ABS-CBN, frequently targeted. Rappler CEO Maria Ressa herself fell victim to a deepfake video falsely portraying her endorsing Bitcoin, an incident traced back to a Russian scam network operating in the Philippines. Even President Ferdinand Marcos Jr., despite benefiting from disinformation campaigns in the past, was not immune, with a deepfake video circulating online depicting him using illegal drugs during his State of the Nation Address. The targeting of individuals across the political spectrum underscores the potential of AI-driven disinformation to destabilize trust in institutions and individuals.

The Challenge of Combating AI-Generated Disinformation

The fight against AI-driven disinformation presents significant challenges for fact-checkers and journalists. Existing tools for detecting AI-generated content are often expensive and lack the precision needed for definitive identification. Fact-checkers frequently rely on time-consuming manual methods, tracing the origins of suspicious content and seeking clarification from affected individuals and institutions. This resource-intensive process underscores the urgent need for more effective and accessible tools to combat the rapid spread of AI-generated disinformation. The difficulty in combating this form of manipulation was aptly described by Jency Jacob, managing editor of Indian fact-checking organization BOOM, as “fighting tanks with sticks and stones.”

The Looming Threat to the 2025 Philippine Midterm Elections

With the 2025 Philippine midterm elections on the horizon, media practitioners anticipate a surge in AI-driven disinformation. The Commission on Elections (Comelec) has responded by issuing guidelines on social media usage, requiring candidates to disclose their official accounts and any use of AI in campaign materials. The Comelec has also established a task force, "Katotohanan, Katapatan at Katarungan sa Halalan," aimed at countering AI-driven disinformation across various media platforms. However, challenges remain, with existing laws, such as the 1985 Omnibus Election Code and the 2001 Fair Election Act, proving inadequate to address the complexities of social media and AI-driven manipulation in the modern political landscape.

Legal Loopholes and the Future of Philippine Elections

Comelec Chairperson George Erwin Garcia acknowledges the limitations of current legislation in effectively regulating the use of AI in political campaigns. While candidates face potential disqualification and legal repercussions for spreading disinformation, holding supporters accountable remains a significant challenge. The influence of social media in past elections, from President Duterte’s 2016 victory to President Marcos Jr.’s 2022 win, highlights the potential for AI-powered disinformation to sway public opinion and election outcomes. As the 2025 midterm elections approach, the question remains: how will the proliferation of AI-driven disinformation shape the political landscape and influence the democratic process in the Philippines? The evolving nature of this technology demands continuous vigilance and innovative strategies to safeguard the integrity of elections and public discourse.
