The UK’s Online Safety Act 2023: A Comprehensive Overview
The Online Safety Act 2023 marks a significant step in regulating online spaces, aiming to protect children and adults from a range of harms. The legislation places substantial responsibilities on social media companies and search services, compelling them to prioritize user safety on their platforms. Its scope encompasses a wide array of online services, from social media giants to dating apps and online forums, holding them accountable for tackling illegal content, protecting children from harmful material, and giving users greater control over their online experiences. The Act’s jurisdiction also extends beyond UK borders, capturing overseas services that have a significant UK user base or target the UK market.
A central pillar of the Act is the protection of children. Platforms are mandated to prevent children from accessing harmful and age-inappropriate content, including pornography, content promoting self-harm or suicide, and material encouraging eating disorders. Furthermore, services must provide age-appropriate experiences for children, enforcing age limits rigorously and using age assurance technologies where applicable. The legislation also requires services to give parents and children clear, accessible mechanisms for reporting problems online.
Beyond child safety, the Act addresses the online safety of adults. Larger platforms, categorized as Category 1 services, are obliged to offer users tools to control the content they encounter and the individuals they interact with. These tools include identity verification options, enabling users to limit interactions with unverified accounts, thereby combating online trolling and harassment. Additionally, these services must provide optional tools to filter legal but potentially harmful content, such as material related to suicide, self-harm, eating disorders, and hate speech, empowering users to curate their online environment.
The implementation of the Online Safety Act is a phased process overseen by Ofcom, the independent regulator for online safety. Ofcom has published a roadmap outlining the implementation timeline, which includes developing codes of practice and guidance for online platforms. The initial phase focuses on illegal content, requiring platforms to assess the risk of illegal activity on their services and implement measures to combat it. Subsequent phases address content harmful to children, with specific guidance on age assurance for accessing pornography and codes of practice for broader child safety measures. Further phases will define categories of services and their corresponding duties, ensuring responsibilities proportionate to platform size and potential for harm.
The Act introduces new criminal offences, strengthening the legal framework against online harms. These offences cover a wide range of harmful activities, including encouraging serious self-harm, cyberflashing, sending false information intended to cause non-trivial harm, threatening communications, intimate image abuse, and epilepsy trolling. They target the individuals perpetrating these acts, and convictions have already been recorded under the cyberflashing and threatening communications provisions.
Enforcement of the Online Safety Act rests with Ofcom, which has substantial powers to ensure compliance. Companies failing to meet their duties face significant fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater. Criminal action can be taken against senior managers who obstruct Ofcom’s information requests or fail to comply with enforcement notices relating to child safety duties and child sexual abuse and exploitation. In extreme cases, Ofcom can seek court orders compelling payment providers, advertisers, and internet service providers to sever ties with non-compliant platforms, effectively preventing them from operating in the UK.
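To make that penalty ceiling concrete, the short sketch below illustrates the "whichever is greater" rule as simple arithmetic. It is only an illustration, not part of the Act or Ofcom guidance; the function name and the £1 billion revenue figure are hypothetical.

```python
def max_fine_gbp(qualifying_worldwide_revenue_gbp: float) -> float:
    """Illustrative ceiling on an Online Safety Act fine: the greater of
    £18 million or 10% of qualifying worldwide revenue (hypothetical helper)."""
    fixed_cap = 18_000_000                                        # £18 million
    revenue_based_cap = 0.10 * qualifying_worldwide_revenue_gbp   # 10% of revenue
    return max(fixed_cap, revenue_based_cap)

# Example: a service with £1 billion of qualifying worldwide revenue faces a
# maximum fine of £100 million, since 10% of revenue exceeds £18 million.
print(f"£{max_fine_gbp(1_000_000_000):,.0f}")  # £100,000,000
```

For smaller services whose revenue falls below £180 million, the £18 million figure becomes the operative cap.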
The Act addresses several specific online harms, including illegal content, content harmful to children, and harmful algorithms. Platforms must proactively tackle illegal content, implementing measures to prevent its appearance and swiftly removing it when flagged. The Act lists priority illegal content categories, ranging from child sexual abuse and terrorism to fraud and promoting suicide. For content harmful to children, the Act defines primary priority and priority categories, with stricter requirements for preventing children’s access to primary priority content, such as pornography and content promoting self-harm. The Act also requires platforms to consider how their algorithms affect users’ exposure to harmful content and to mitigate the risks they identify.
Furthermore, the Act recognizes the disproportionate impact of online harms on women and girls. It mandates robust measures against illegal content affecting women and girls, including harassment, stalking, and revenge pornography, and requires Ofcom to consult relevant commissioners so that the voices of women, girls, and victims are reflected in the codes of practice. The Act also tackles misinformation and disinformation, focusing on illegal content and content harmful to children; Category 1 services must additionally enforce their own terms of service on prohibited misinformation and disinformation. Finally, the Act acknowledges the changing landscape of pornography consumption and has prompted a separate independent review to assess whether the current regulations remain fit for purpose and to propose updated measures.