
An Explanation of the Online Safety Act

By News Room | January 19, 2025 | 4 Mins Read

The UK’s Online Safety Act 2023: A Comprehensive Overview

The Online Safety Act 2023 marks a significant step in regulating online spaces, aiming to protect both children and adults from a range of harms. The legislation places substantial responsibilities on social media companies and search services, compelling them to prioritize user safety on their platforms. Its scope encompasses a wide array of online services, from social media giants to dating apps and online forums, holding them accountable for tackling illegal content, protecting children from harmful material, and giving users greater control over their online experiences. The Act also applies beyond UK borders, capturing services with significant UK user bases or those targeting the UK market.

A central pillar of the Act is the protection of children. Platforms are mandated to prevent children from accessing harmful and age-inappropriate content, including pornography, content promoting self-harm or suicide, and material encouraging eating disorders. Furthermore, services must implement age-appropriate experiences for children, enforcing age limits rigorously and using age assurance technologies where applicable. The legislation also empowers parents and children with accessible reporting mechanisms to address online issues effectively.

Beyond child safety, the Act addresses the online safety of adults. Larger platforms, categorized as Category 1 services, are obliged to offer users tools to control the content they encounter and the individuals they interact with. These tools include identity verification options, enabling users to limit interactions with unverified accounts, thereby combating online trolling and harassment. Additionally, these services must provide optional tools to filter legal but potentially harmful content, such as material related to suicide, self-harm, eating disorders, and hate speech, empowering users to curate their online environment.

The implementation of the Online Safety Act is a phased process, overseen by Ofcom, the independent regulator for online safety. Ofcom has published a roadmap outlining the implementation timeline, which includes developing codes of practice and guidance for online platforms. The initial phase focuses on illegal content, requiring platforms to risk assess and implement measures to combat illegal activities online. Subsequent phases address content harmful to children, with specific guidance on age assurance for accessing pornography and codes of practice for broader child safety measures. Further phases will define categories of services and corresponding duties, ensuring proportionate responsibilities based on platform size and potential for harm.

The Act introduces new criminal offences, strengthening the legal framework against online harms. These cover a wide range of harmful conduct, including encouraging serious self-harm, cyberflashing, sending harmful false information, threatening communications, intimate image abuse, and epilepsy trolling. The offences target the individuals perpetrating the abuse, and convictions have already been recorded under the cyberflashing and threatening communications provisions.

Enforcement of the Online Safety Act rests with Ofcom, which has substantial powers to ensure compliance. Companies failing to meet their duties face significant fines, up to £18 million or 10% of global revenue, whichever is greater. Criminal action can be taken against senior managers who obstruct Ofcom’s information requests or fail to comply with enforcement notices related to child safety duties and child sexual abuse and exploitation. In extreme cases, Ofcom can request court orders to compel payment providers, advertisers, and internet service providers to sever ties with non-compliant platforms, effectively crippling their operations in the UK.
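
To put the fine ceiling in concrete terms, here is a minimal illustrative sketch in Python (the function name and the revenue figures in the example are ours, not from the Act; the legislation itself contains detailed rules on what counts as qualifying revenue):

    def max_osa_fine_gbp(global_revenue_gbp: float) -> float:
        # Ofcom may impose a fine of up to the greater of a flat
        # GBP 18 million or 10% of the company's global revenue.
        FLAT_CAP_GBP = 18_000_000
        return max(FLAT_CAP_GBP, 0.10 * global_revenue_gbp)

    print(max_osa_fine_gbp(500_000_000))  # 50000000.0 (10% exceeds the flat cap)
    print(max_osa_fine_gbp(50_000_000))   # 18000000 (flat cap applies)

In other words, the flat GBP 18 million floor ensures that even relatively small services face a substantial maximum penalty, while the 10% rule scales the ceiling up for the largest platforms.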

The Act addresses several specific online harms, including illegal content, content harmful to children, and harmful algorithms. Platforms must proactively tackle illegal content, implementing measures to prevent its appearance and swiftly removing it when flagged. The Act lists priority illegal content categories, ranging from child sexual abuse and terrorism to fraud and promoting suicide. For content harmful to children, the Act defines primary and priority categories, with stricter requirements for preventing children’s access to primary content, such as pornography and content promoting self-harm. The Act also mandates consideration of algorithmic impacts on user exposure to harmful content, requiring platforms to mitigate identified risks.

Furthermore, the Act recognizes the disproportionate impact of online harms on women and girls. It mandates robust measures against illegal content affecting women and girls, including harassment, stalking, and revenge pornography. Ofcom is required to consult with relevant commissioners to ensure the voices of women, girls, and victims are reflected in the codes of practice. The Act also tackles misinformation and disinformation, focusing on illegal content and content harmful to children. Category 1 services must also enforce their terms of service regarding prohibited misinformation and disinformation. Finally, the Act acknowledges the changing landscape of pornography consumption and has prompted a separate independent review to assess the current regulations and propose updated measures to ensure a fit-for-purpose framework.

