The UK's Online Safety Act, introduced as a bill in 2022, aimed to protect citizens from illegal or harmful content online. By October 2023, the law required social media platforms to implement specific measures to protect users from illegal content and activity. Despite these provisions, the report highlights a major oversight in the Act: it fails to address the amplification of "legal but harmful" content, a gap that has prompted calls for further regulation.
According to the report, the Online Safety Act received royal assent in October 2023, and aspects of the law came into effect in March this year. The committee argued that the Act should not only protect users from harmful activity but also hold social media platforms accountable for the way they amplify misleading or deceptive content. The report's findings underscore the growing threat of misinformation and hateful content on social media, which can spread unlawfully and harm individuals. Combined with the rise of fake identities and extremist ideologies, this could result in further societal destabilization.
In 2024, the UK experienced significant unrest following the tragic fatal stabbing of a child in Southport, after which numerous posts about the attack spread online. These posts made false claims about the attacker's identity, claims that were amplified by the algorithmic platforms used by influencers and opinion leaders. Influencers rely heavily on Twitter and TikTok, whose algorithms promote content based on its engagement and reach. The report revealed that between 29 July and 9 August 2024, false or unfounded claims about the attacker achieved 155 million impressions on Twitter. This rapid amplification highlights the interconnectedness of misinformation on social media and its potential to cause widespread harm through widely shared content.
The report's findings align with growing concerns among policymakers and citizens about the misuse of technology in spreading disinformation. The committee's recommendations emphasized the need for social media companies to be held accountable for the way they amplify misleading or deceptive content. They also emphasized the importance of placing regulation on a firm footing by rooting it in clear, practical principles. The report notes that the Online Safety Act must address the pervasive spread of misinformation that is harmful but not illegal.
The report's findings call for a multifaceted response aimed at creating a stronger online safety regime. It urges the UK government to adopt five critical principles as the foundation of future regulation: a commitment to governing technology in accordance with the law, prioritizing freedom of expression, holding platforms accountable in line with core values, enhancing user control through input and engagement, and ensuring compliance with a diverse range of risk assessments to protect both public and private users. The report calls on the UK government to uphold these principles while addressing the growing problem of misinformation online.