Meta, X Fact-Checking Policy Shifts Raise Human Rights Concerns, Warns Council of Europe
STRASBOURG – Recent changes to the fact-checking policies of social media giants Meta and X (formerly Twitter) could have adverse implications for human rights, Michael O’Flaherty, the Council of Europe Commissioner for Human Rights, has warned. O’Flaherty emphasized the critical role platforms play in combating disinformation, stressing that a retreat from fact-checking creates a vacuum in which harmful narratives thrive, posing a significant threat to democracy. He urged Council of Europe member states to demonstrate leadership in enforcing legal standards that require internet intermediaries to mitigate the systemic risks of disinformation while upholding international human rights norms.
At the core of the issue lies the delicate balance between curbing harmful speech and safeguarding freedom of expression. This long-standing challenge has intensified in the digital age, where misinformation spreads rapidly, often amplified by algorithms that prioritize engagement over accuracy. The risks are particularly acute when harmful speech comes from state actors or influential figures, magnifying the potential damage to democratic processes. O’Flaherty underscored that combating falsehoods and hate speech is not censorship but a fundamental commitment to protecting human rights.
The Commissioner cited the European Court of Human Rights, emphasizing that respect for individual dignity forms the bedrock of a democratic society. States, therefore, have a legitimate right to limit speech that promotes hatred and intolerance, provided such interventions are proportionate to the aim pursued. The International Covenant on Civil and Political Rights similarly prohibits advocacy of hatred that incites discrimination, hostility, or violence. These legal frameworks provide a basis for international human rights norms that guide governments and private companies in navigating the complex intersection of free speech and protection from harm.
O’Flaherty highlighted the established international legal standards for combating disinformation, emphasizing the principles of legality, necessity, and proportionality. Transparency and accountability are also paramount, ensuring that measures taken are justifiable and do not unduly restrict legitimate expression. He called on member states to reinforce these standards by demanding greater transparency from internet platforms regarding their content moderation practices, particularly concerning the use of algorithms. Simultaneously, state interventions must remain firmly grounded in human rights law to prevent overreach that could stifle legitimate discourse.
The Commissioner cautioned against the dangerous implications of platforms abdicating their responsibility to address disinformation. He argued that neglecting fact-checking creates an environment where harmful narratives proliferate, eroding public trust and undermining democratic institutions. He stressed the urgency of the situation, given the speed at which misinformation spreads online and the potential for it to incite violence, discrimination, and social unrest. O’Flaherty emphasized the need for a multi-stakeholder approach, urging states, platforms, and civil society to collaborate in upholding human rights and democratic principles.
The overarching objective is to strike a balance that protects freedom of expression while recognizing its inherent limits. O’Flaherty called for ongoing dialogue and collaboration among all stakeholders to ensure that content moderation practices effectively combat disinformation without infringing on fundamental rights. Transparency and accountability, he reiterated, are crucial safeguards against both disinformation and excessive state control. Governments, platforms, and civil society share responsibility for fostering a healthy online environment that respects human rights and promotes democratic values.