Facebook’s Election Disinformation Moderation: A Mixed Bag of Progress and Persistent Concerns
The specter of election disinformation continues to loom large over social media platforms, particularly as key elections approach around the globe. Following the tumultuous events of January 6th, 2021, Facebook, now under the Meta umbrella, pledged to curb the spread of political content on its platform and enhance its efforts to combat misinformation. However, recent investigations reveal a complex and inconsistent picture of progress and ongoing challenges in Meta’s fight against election disinformation.
In the lead-up to the 2024 US elections, Meta appeared to step back from direct engagement with political issues. CEO Mark Zuckerberg reportedly delegated responsibilities for election-related matters, and the dedicated election integrity team was disbanded. Coupled with the shutdown of CrowdTangle, a valuable tool for researchers tracking viral content, these moves raise concerns about transparency and Meta’s commitment to tackling election interference.
A recent investigation by Global Witness sought to assess the effectiveness of Meta’s current moderation practices. By submitting a series of deliberately misleading election ads, the investigation tested Facebook’s ability to identify and reject content that violated its stated policies. The results showed marked improvement over previous investigations conducted in 2022: where earlier tests found an acceptance rate of 20–30% for disinformation ads, this time Facebook rejected seven of eight submissions.
The single ad Facebook approved falsely claimed that a valid driver’s license was a requirement for voting. This blatant misinformation clearly contradicts Meta’s policies, which explicitly prohibit false information about voting procedures, eligibility, and candidate status. Despite the improved rejection rate, the approval of even one misleading ad exposes real vulnerabilities in Facebook’s moderation system. If disinformation this easy to detect can slip through, subtler and more sophisticated misinformation is likely to evade the system even more readily.
While the improved performance in the US context offers a glimmer of hope, the global picture remains far from reassuring. A previous Global Witness investigation in Brazil revealed a 100% acceptance rate for election disinformation ads. This stark contrast underscores the inconsistency and inadequacy of Meta’s international enforcement efforts. Even after Meta was alerted to these findings, a follow-up test in Brazil found that its systems, though improved, still approved half of the resubmitted disinformation ads.
The gap between Meta’s moderation performance in the US and elsewhere raises serious questions about how the company prioritizes resources across its global content moderation operation. The uneven application of policies suggests that moderators in different regions receive unequal resources and training, a disparity with detrimental consequences for electoral integrity in countries where access to accurate information is crucial for informed democratic participation.
Meta’s failure to respond promptly to these findings compounds concerns about its commitment to transparency and accountability. The absence of an official statement leaves crucial questions unanswered and fuels speculation about the company’s willingness to address them. Meaningful progress requires not only better detection and removal of disinformation but also open communication with researchers and civil society organizations working to safeguard democratic processes. Meta must demonstrate a genuine commitment to closing these gaps and ensuring that its platform is not used to undermine democratic elections worldwide.