The Online Safety Act: Good Intentions, Dangerous Overreach
Nobody disagrees that children should be safer online. Nobody thinks platforms should be free to host illegal content without consequence. The political consensus around the Online Safety Act leaned on these truths to push through one of the most far-reaching pieces of internet regulation in the democratic world — and most people barely noticed.
What the Act Actually Does
The Online Safety Act 2023 gives Ofcom sweeping powers to regulate user-to-user services and search engines. On the surface, it imposes duties of care: platforms must take action against illegal content, protect children from harmful material, and give adult users more control over what they see.
Dig beneath the headlines and the scope is staggering. The Act covers virtually every website, app, or service where users can post content or interact with each other. That's not just social media giants — it's forums, community sites, open source project pages, small business review sections, and anything in between.
The Encryption Problem
Perhaps the most alarming provision is the power to require platforms to use "accredited technology" to scan content, even in end-to-end encrypted messaging. The government insists this doesn't amount to breaking encryption. Cryptographers overwhelmingly disagree.
Client-side scanning — the only technical approach that could comply — means scanning messages on your device before they're encrypted. That's a backdoor by another name. Signal and WhatsApp have both indicated they'd rather leave the UK market than compromise their encryption. Apple abandoned its own client-side scanning plans in 2022 after security researchers demonstrated how easily such systems could be abused.
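To make the objection concrete, here is a deliberately simplified sketch of the client-side scanning pattern in Python. Everything in it is hypothetical: real proposals use perceptual hashing rather than exact SHA-256 matches, and the encryption and transport here are stubs. The structure is the point: the scan sees plaintext on your device, before encryption ever happens.

```python
import hashlib

# Hypothetical blocklist of prohibited-content hashes. Real proposals use
# perceptual hashes so that near-duplicates match; an exact SHA-256 lookup
# stands in for that here. Note that nothing in the design constrains what
# goes on this list.
BLOCKLIST = {hashlib.sha256(b"prohibited example").hexdigest()}

def send_message(plaintext: bytes, encrypt, transmit, report_match):
    """Scan on the user's device, before encryption, then send as normal."""
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST:
        # Flagged while still in the clear: the end-to-end encryption
        # applied below no longer protects this message.
        report_match(digest)
    transmit(encrypt(plaintext))

# Toy usage; the "encryption" and transport are stubs for illustration.
send_message(
    b"prohibited example",
    encrypt=lambda m: m[::-1],
    transmit=lambda c: None,
    report_match=lambda d: print("flagged:", d),
)
```

Whoever controls the blocklist controls what gets flagged, and the scanning machinery is identical whatever the list contains.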
"You cannot build a system that scans for one type of content without building the infrastructure to scan for anything."
Ofcom has since signalled that it won't enforce these powers until scanning technology that meets the required standards actually exists. But the legal authority remains on the statute book, ready to be activated by any future government.
Small Platforms, Impossible Burden
As someone who runs a hosting platform, this is where the Act hits closest to home. The compliance requirements don't scale down for small operators. Risk assessments, content moderation systems, age verification mechanisms, reporting obligations — these all require significant resources.
A local scout group's forum, a small community wiki, a niche hobby site: these are all technically in scope. The Act creates a regulatory environment that favours large platforms with dedicated legal and compliance teams, and punishes or deters small, independent operators. It's the opposite of an open web. Consider what compliance actually demands:
- Risk assessments must be documented and regularly updated
- Illegal content must be proactively identified and removed
- Age assurance measures are expected for content harmful to children
- Ofcom can issue fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater (illustrated in the sketch after this list)
- Senior managers face potential criminal liability
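For a sense of scale on that penalty ceiling, a minimal sketch; the "whichever is greater" rule comes from the Act, while the revenue figure is invented for illustration:

```python
# The "greater of £18 million or 10% of qualifying worldwide revenue" rule
# comes from the Act; the revenue figure below is invented for illustration.
def maximum_fine(qualifying_worldwide_revenue_gbp: float) -> float:
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue_gbp)

# A platform turning over £500m faces a ceiling of £50m, not £18m.
print(f"£{maximum_fine(500_000_000):,.0f}")  # £50,000,000
```

For a hobbyist operator, of course, even a fraction of the £18 million floor is existential.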
Age Verification: A Privacy Nightmare
The Act effectively mandates age verification for large swathes of the internet. The methods available — ID uploads, facial age estimation, credit card checks — all create new privacy risks and centralised databases of browsing habits. We're being asked to prove our age to read the internet, and the infrastructure required to do so is a surveillance system in all but name.
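A hypothetical sketch of the simplest flow, an ID-backed age check, shows why. Every name in it is invented, but the side effect is structural: the verifier necessarily learns who you are, which site you were visiting, and when.

```python
from datetime import date, datetime, timezone

# In a real deployment this would be a centralised database, and that is
# the problem: every entry links an identity to a site visit.
verification_log = []

def verify_age(user_id: str, date_of_birth: date, requesting_site: str) -> bool:
    """Hypothetical ID-backed age check. Note what it records as a side effect."""
    verification_log.append({
        "who": user_id,                # identity
        "where": requesting_site,      # browsing habit
        "when": datetime.now(timezone.utc).isoformat(),
    })
    today = date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= 18

# One check, one permanent identity-linked browsing record.
verify_age("passport:GB1234567", date(1990, 5, 1), "example-forum.co.uk")
print(verification_log)
```

Schemes that avoid keeping such a log exist in the research literature, but they are not what ID uploads, facial estimation, or credit card checks deliver.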
France, Germany, and Australia have all grappled with similar proposals, and each has struggled to make them work without unacceptable privacy trade-offs. The UK has pressed ahead regardless.
The Chilling Effect
Beyond the technical provisions, the Act introduces the concept of content that is "legal but harmful". While the final version pulled back from outright regulation of legal speech for adults, the framework is in place: the largest platforms must offer users tools to filter categories of legal content, which effectively requires them to classify and label speech at scale.
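A minimal sketch of what that duty presupposes, with hypothetical categories and a toy classifier standing in for the machine-learning systems a platform would actually need:

```python
# Hypothetical categories and a toy keyword "classifier" standing in for
# the real classification systems a platform would need for every post.
USER_PREFERENCES = {"hide": {"violence", "self-harm"}}  # per-user toggles

def classify(post: str) -> set[str]:
    labels = set()
    if "fight" in post.lower():
        labels.add("violence")  # crude on purpose; see note below
    return labels

def visible(post: str, prefs: dict) -> bool:
    # A post is shown only if none of its labels are in the user's hide set.
    return not (classify(post) & prefs["hide"])

# A football match report gets filtered alongside actual violent content.
print(visible("Match report: a fight broke out in the stands", USER_PREFERENCES))
```

The deliberately crude keyword check makes the structural point: filtering by category only works if every post is classified first, and misclassification at scale is precisely the over-moderation pressure described below.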
The broader message is clear: if you run a platform in the UK, you are responsible for everything your users say and do. That doesn't encourage open discourse. It encourages over-moderation, risk aversion, and ultimately, fewer spaces for people to speak freely.
A Global Precedent
The UK doesn't exist in a vacuum. Authoritarian governments around the world watch how democracies regulate the internet. If the UK successfully mandates backdoors into encryption, it provides cover for every regime that wants to do the same. If age verification becomes normalised, the infrastructure can be repurposed for identity-linked browsing in less democratic contexts.
We should be setting an example of how to protect people online without dismantling the architecture of privacy and free expression. Instead, we're handing a playbook to governments with far worse intentions.
What Should Have Happened
The problems the Online Safety Act claims to address are real. Children are exposed to genuinely harmful content. Illegal material circulates on major platforms. But the answer isn't a sweeping regulatory framework that treats every website operator as a potential criminal and every encrypted message as a potential threat.
A better answer would have combined:

- Targeted enforcement against platforms that knowingly host illegal content
- Properly funded law enforcement with the technical capability to investigate online crime
- Digital literacy education in schools
- Better tools for parents, rather than deputising every website as a de facto regulator
The Online Safety Act is a sledgehammer where a scalpel was needed. It won't make children meaningfully safer — the platforms most likely to harm children are the least likely to comply. What it will do is make the open web smaller, encryption weaker, and the UK a harder place to build and run independent online services.
That's not safety. That's control.