Social Media Platforms Show Improvement in Blocking Election Disinformation, But Challenges Remain
In the lead-up to the 2024 US Presidential election, concerns about the spread of disinformation on social media platforms reached a fever pitch. A pre-election investigation by independent researchers revealed vulnerabilities in the moderation systems of major platforms such as TikTok and YouTube. The researchers submitted eight ads containing blatant election disinformation, including false claims about online voting and content promoting violence against election workers. To test the platforms' ability to detect manipulative tactics, the ads were written in "algospeak," substituting letters with numbers and symbols. The initial results were alarming: TikTok approved half of the disinformation-laden ads. YouTube also approved half, but at least required personal identification before publication, presenting a higher barrier to entry for malicious actors.
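As an illustration of the tactic (not the researchers' exact wording or substitutions, which the writeup does not spell out), a trivial character-substitution map is enough to turn a flagged phrase into "algospeak":

    # Hypothetical example of the "algospeak" tactic described above:
    # selected letters are swapped for look-alike numbers and symbols.
    # The specific mapping here is illustrative, not the one used in the study.
    SUBSTITUTIONS = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "$"}

    def to_algospeak(text: str) -> str:
        """Replace selected letters with look-alike numbers and symbols."""
        return "".join(SUBSTITUTIONS.get(ch, ch) for ch in text.lower())

    print(to_algospeak("vote online"))  # -> "v0t3 0nl1n3"

Obfuscation of this kind defeats naive exact-keyword filters, which is presumably why the researchers included it in their test.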
Following the publication of these findings, TikTok acknowledged the breaches of its policies, attributing the approvals to errors and suggesting that the ads might have undergone further review before going live. The platform pledged to use the findings to improve its detection capabilities. To test these claims, the researchers resubmitted the identical eight ads to both YouTube and TikTok. This time, both platforms showed a marked improvement. TikTok rejected all eight ads, including those it had initially approved, while YouTube suspended the researchers' account under its suspicious-payments policy and flagged the ads as containing unreliable claims or US election advertising requiring further verification. Crucially, all ads were marked as "not eligible" and "under review," preventing their publication.
The positive shift in both platforms' responses is encouraging, indicating that they are capable of refining their moderation practices when held accountable. However, it's important to acknowledge the limited scope of this test: the resubmitted ads were identical to the original submissions, making it a relatively straightforward challenge for the platforms to rectify their previous errors. Nevertheless, the improved outcome underscores the vital role of independent scrutiny in holding social media platforms accountable. Journalists, academics, and NGOs must be allowed and encouraged to test these moderation systems rigorously, particularly in a climate where political figures threaten to suppress efforts to combat disinformation and platforms themselves restrict access to transparency tools, as exemplified by Meta's closure of CrowdTangle.
Access to accurate and reliable election information is a cornerstone of a healthy democracy, and the burden of fact-checking every piece of content, especially paid advertising, should not fall on individual citizens. Independent organizations must continue to hold social media platforms accountable, ensuring that stated policies translate into effective real-world practice. While the results of this US-focused investigation offer a glimmer of hope, the global picture remains complex and concerning: previous investigations have found that TikTok and YouTube failed similar tests in elections elsewhere, including in India and Ireland, highlighting the uneven and often inadequate allocation of content-moderation resources across jurisdictions.
The fight against election disinformation requires a sustained and comprehensive approach. Platforms must prioritize consistent and robust content moderation across all regions, not just in response to public scrutiny or in high-stakes elections like the US Presidential race. Investing in advanced detection technologies, coupled with human review and context-specific understanding, is essential. Furthermore, promoting media literacy among users, empowering them to critically evaluate information and identify misleading content, is crucial. Collaboration between platforms, governments, and civil society organizations is also vital to establish shared standards and best practices for combating disinformation.
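To make the detection point concrete, here is a minimal sketch, under the assumption that a moderation pipeline normalises common character substitutions before matching against flagged phrases and escalates hits to human review. It is an illustrative outline, not a description of any platform's actual system:

    # Minimal, hypothetical sketch: undo common "algospeak" substitutions,
    # then match the normalised text against flagged phrases. Matches would
    # be escalated to human review rather than approved automatically.
    REVERSE_SUBSTITUTIONS = str.maketrans({"4": "a", "3": "e", "1": "i", "0": "o", "$": "s"})

    FLAGGED_PHRASES = ["vote online", "polling stations are closed"]  # illustrative only

    def needs_human_review(ad_text: str) -> bool:
        """Return True if the normalised ad text matches a flagged phrase."""
        normalised = ad_text.lower().translate(REVERSE_SUBSTITUTIONS)
        return any(phrase in normalised for phrase in FLAGGED_PHRASES)

    print(needs_human_review("V0t3 0nl1n3 before election day!"))  # -> True

Even a simple normalisation step like this shows why detection cannot stop at exact keyword matching, and why human review and regional context remain necessary for the cases that slip through.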
Ultimately, the responsibility for ensuring the integrity of online information, especially during critical periods like elections, rests with the platforms themselves. While independent investigations play a crucial role in highlighting vulnerabilities and driving improvements, the long-term solution lies in platforms proactively implementing and enforcing robust moderation policies globally. Only then can we hope to mitigate the harmful impact of disinformation on democratic processes worldwide.