UK’s Online Safety Act: Ofcom Can Now Issue Sanctions

Tech platforms operating in the UK can now be sanctioned for failing to remove illegal online content as required by the Online Safety Act.
The law, passed by the UK Parliament in October 2023, requires service providers, including social media firms, search engines, messaging, gaming and dating apps, and pornography and file-sharing sites, to remove harmful content deemed illegal from their platforms.
Such content includes terrorism, hate, fraud, child sexual abuse and material assisting or encouraging suicide.
When the legislation was adopted, affected companies were granted a grace period: they had until March 16, 2025, to complete an assessment of the risk of illegal content appearing on their services.
In December 2024, Ofcom, the UK’s communications regulator, introduced guidance on what it expects to be included in the risk assessment, including risk profiles that can be used as the basis for the assessment.
Starting March 17, Ofcom will have the authority to sanction any in-scope entity that fails to comply with the regulations.
Penalties can reach £18m ($23.4m) or 10% of a company's global revenue, whichever is greater.
In the most severe cases, the regulator may seek a court order to block access to the offending site within the UK.
Expert Insights on the Online Safety Act: Challenges and Opportunities
Complying with the Online Safety Act should not merely be a box-ticking exercise, argued Mark Jones, Partner at British law firm Payne Hicks Beach.
“Completing the risk assessment is not enough; tech companies need to set out how they will tackle illegal harms and proactively seek and remove such content,” he said.
“The new framework in the Online Safety Act and the Illegal Harms Codes requires tech companies to be proactive in identifying and removing illegal content and demonstrate their accountability.”
Jason Soroko, Senior Fellow at Sectigo, fears that the legislation could inadvertently harm smaller platforms, stifle innovation and strengthen online censorship.
“Implementation of the Online Safety Act faces hurdles in cost and technical feasibility. Platforms, especially smaller or independent operators, may struggle with the expense of robust age verification and content moderation tools. Such measures could force them out of the market or drive explicit content to unregulated spaces,” he said.
“Meanwhile, automated detection systems often fail to account for context, risking over-removal of legitimate content and triggering backlash over censorship. Lack of clarity around ‘harmful content’ further complicates compliance, as platforms may err on the side of overblocking to avoid penalties.”
Soroko concluded that the Act’s success will depend on pragmatic guidance, proportional enforcement and advancements in privacy-preserving verification methods.
According to Iona Silverman, a Partner at Freeths, another UK law firm, the Online Safety Act has significant potential to combat harmful online content.
Silverman expressed her support for the British government’s stance, stating, “I agree that the Online Safety Act is focused on tackling criminality, rather than censoring debate.”
However, she emphasized that for the Act to be effective, Ofcom will need to adopt a robust approach to ensure that service providers, particularly the largest ones, make concrete commitments to remove harmful content.
Silverman noted that some of the biggest platforms covered by the Act have recently shown signs of potential non-compliance.
She cited Meta’s decision to discontinue its third-party fact-checking program in January, opting instead for a community-driven model.
Furthermore, Mark Zuckerberg openly acknowledged that changes to Meta’s content filtering system would likely result in the platform failing to detect and remove harmful content.