TikTok’s Political Ad Policy Under Scrutiny After Election Disinformation Approval
Just weeks before the 2024 U.S. presidential election, a report by the non-profit organization Global Witness revealed a critical flaw in TikTok’s enforcement of its advertising policies. Despite an explicit ban on political ads since 2019, the popular social media platform approved several test advertisements containing blatant election disinformation. The finding raises serious concerns about the platform’s ability to moderate content effectively and prevent the spread of misinformation during a sensitive election period.
Global Witness conducted a sting operation, submitting a series of test ads laced with election falsehoods, including outright lies about voting procedures and inflammatory content inciting violence and threats against election workers. To further test the robustness of content moderation systems, the organization employed "algospeak," a technique that replaces letters with numbers and symbols to evade text-based detection. TikTok approved four of the eight submitted ads, exposing a significant vulnerability in its system. Although the ads never went live, as Global Witness withdrew them before publication, the incident highlights a loophole that malicious actors could exploit.
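To see why algospeak defeats simple keyword rules, here is a minimal Python sketch. The substitution table, the banned phrase, and all function names are hypothetical illustrations, not the actual content or rules from the Global Witness test or any platform's moderation system.

```python
# Minimal sketch of algospeak-style evasion and a naive countermeasure.
# The substitution table and banned-phrase list are hypothetical examples.

# Common character-for-character swaps used in "algospeak"
ALGOSPEAK_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"}

# Reverse map a moderation system could use to normalize text before matching
NORMALIZE_MAP = {v: k for k, v in ALGOSPEAK_MAP.items()}

BANNED_PHRASES = ["vote by text"]  # hypothetical disallowed claim


def to_algospeak(text: str) -> str:
    """Rewrite text with number substitutions to dodge literal matching."""
    return "".join(ALGOSPEAK_MAP.get(ch, ch) for ch in text.lower())


def normalize(text: str) -> str:
    """Undo the common substitutions so keyword rules can match again."""
    return "".join(NORMALIZE_MAP.get(ch, ch) for ch in text.lower())


def naive_filter(text: str) -> bool:
    """Literal substring match: the kind of check algospeak evades."""
    return any(phrase in text.lower() for phrase in BANNED_PHRASES)


def normalizing_filter(text: str) -> bool:
    """The same rule applied after normalization catches the simple swaps."""
    return naive_filter(normalize(text))


if __name__ == "__main__":
    ad_copy = to_algospeak("You can vote by text this year!")
    print(ad_copy)                      # y0u c4n v073 by 73x7 7h15 y34r!
    print(naive_filter(ad_copy))        # False: literal match misses it
    print(normalizing_filter(ad_copy))  # True: normalization restores the match
```

Even this countermeasure is fragile: blindly reverse-mapping digits would mangle legitimate numbers in ad copy, which is why real moderation systems need fuzzier matching and, ultimately, human review.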
In response to the report, TikTok spokesperson Ben Rathe acknowledged the error, stating that the ads were incorrectly approved during the first stage of moderation but never ran on the platform. He reiterated TikTok’s commitment to enforcing its ban on political advertising. Still, the incident casts doubt on the platform’s ability to consistently enforce its policies, especially given the high stakes of a presidential election. The ease with which disinformation could bypass its safeguards underscores the need for more robust and reliable content moderation.
By comparison, Facebook, owned by Meta Platforms Inc., fared significantly better in the Global Witness test. Only one of the eight deceptive ads was approved, suggesting an improvement in content moderation since a similar investigation two years earlier. Meta acknowledged the report’s limited scope but emphasized its ongoing commitment to evaluating and improving enforcement. While this marks a step in the right direction, the fact that any disinformation slipped through shows the persistent challenge social media companies face in combating false information.
Google’s YouTube demonstrated the most effective approach in the Global Witness study. Although YouTube initially approved four of the test ads, its system prevented them from going live: the platform requested additional identification from the Global Witness testers before publication and ultimately paused their account when the information wasn’t provided. While this stopped the disinformation from spreading, the report noted it remains unclear whether the ads would eventually have run had the required identification been supplied. That ambiguity raises questions about whether bad actors could circumvent the verification step.
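YouTube’s flow effectively adds a second gate after content review. The toy model below is invented for illustration and does not reflect YouTube’s actual implementation; it simply shows why an ad that fools the content check can still be stopped by an unmet identity requirement.

```python
from enum import Enum, auto


class AdState(Enum):
    SUBMITTED = auto()
    PENDING_ID = auto()      # passed content review, identity check outstanding
    LIVE = auto()
    REJECTED = auto()
    ACCOUNT_PAUSED = auto()


def review_step(state: AdState, content_ok: bool, id_provided: bool) -> AdState:
    """One step of a two-gate review: content check first, identity check second."""
    if state is AdState.SUBMITTED:
        return AdState.PENDING_ID if content_ok else AdState.REJECTED
    if state is AdState.PENDING_ID:
        return AdState.LIVE if id_provided else AdState.ACCOUNT_PAUSED
    return state  # LIVE, REJECTED, and ACCOUNT_PAUSED are terminal here


# The scenario described in the report: the ads passed the content check,
# but the testers never supplied identification, so nothing went live.
state = review_step(AdState.SUBMITTED, content_ok=True, id_provided=False)
state = review_step(state, content_ok=True, id_provided=False)
print(state)  # AdState.ACCOUNT_PAUSED
```

The design point is that the two gates fail independently: disinformation that slips past automated content review still has to clear a verification step tied to a real identity, which raises the cost of abuse even when moderation errs.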
The incident with TikTok underscores a broader challenge: the relentless cat-and-mouse game between social media platforms and those seeking to exploit them to spread disinformation. Although companies typically apply stricter policies to paid advertisements than to organic user posts, the Global Witness test shows that even paid content can slip through the cracks. The range of tactics in the fake ads, from outright lies about voting procedures to subtler voter suppression messaging and incitements to violence, illustrates the multifaceted nature of the threat, and the use of algospeak shows why moderation systems must continually adapt.
The episode is a critical reminder for social media platforms to keep refining their review processes, invest in more sophisticated detection technology, and prioritize user safety during elections. The integrity of an election depends on access to accurate and reliable information, and social media platforms bear significant responsibility for preventing the spread of disinformation that could undermine it.