Meta Platforms, the parent company of Facebook, Instagram, and Threads, has released a report addressing concerns about the impact of artificial intelligence (AI) on misinformation during the 2024 global elections. Amid fears that AI-generated content could disrupt democratic processes, Meta asserts that less than one percent of election-related misinformation on its platforms stemmed from AI sources. In a blog post, Nick Clegg, Meta’s president of Global Affairs, explains how the company’s platforms were used for communication during a year that saw significant voting activity not only in the United States but also in countries such as India, Indonesia, Mexico, and several EU nations.
Reflecting on Meta’s evolving strategies since the 2016 elections, Clegg stated that the company has established a dedicated team focused on election integrity, composed of experts from fields including intelligence, data science, and legal affairs. During the 2024 elections, Meta ran election operations centers worldwide to monitor and respond to misinformation and other electoral issues. Clegg emphasized the importance of learning from past experience while highlighting the delicate balance between safeguarding free expression and ensuring user safety, a challenge that has been particularly pronounced in previous electoral cycles.
Throughout the U.S. election period, Meta served reminders to users on Facebook and Instagram that garnered more than one billion impressions, covering topics such as voter registration and voting methods. Although AI-generated content was a major point of concern, Meta reported that its prevalence was minimal relative to the total misinformation identified. Clegg noted that while some AI-generated content gained attention, its overall impact was modest, suggesting that existing policies have been effective in mitigating the major risks associated with generative AI.
Despite these concerns, Meta’s evaluations found that confirmed or suspected instances of AI misuse during the elections remained low. Across the major elections monitored, AI content accounted for less than one percent of all fact-checked misinformation related to elections and political issues. While Clegg did not delve deeply into the potential reach of AI-generated misinformation, he maintained that its impact was small when considered within the broader context of misinformation across Meta’s platforms.
Meta’s commitment to tackling deceptive AI content is further demonstrated by its rejection of nearly 600,000 requests to generate images of political candidates, including President Joe Biden, using its own generative AI tool, Imagine. Earlier this year, Meta also joined the AI Elections Accord, pledging to work collaboratively to prevent AI misuse during elections worldwide. This proactive approach reflects Meta’s awareness of the challenges AI technologies pose in political contexts and its intention to remain part of the solution.
On the issue of foreign interference, Clegg disclosed that in 2024 Meta’s teams dismantled roughly 20 covert influence operations worldwide, spanning regions including the Middle East, Europe, and the United States. As threats to election integrity grow more complex and widespread, Clegg underscored the need to continually refine strategies for addressing them. Ultimately, Meta aims to foster an environment in which users can express themselves freely while security measures keep pace with emerging threats in the digital landscape.