I am so sick of this.

I am sick of being told – implicitly and explicitly – that I’m overreacting. I am sick of watching people older than Millennials shrug their shoulders as the internet is slowly reshaped into something recognisably authoritarian, as long as it’s done in the name of "protecting the children."

I am sick of being told to "go with the flow", to "stop worrying", to "just go offline", as if the internet were a novelty rather than the backbone of modern life.

Because what we are living through right now is not accidental. It is not neutral. And it is not harmless.

It is a deliberate shift in how power is exercised online, and it is being normalised in real time.

“Protect the Children” as a Political Weapon

Let's start with the phrase doing most of the work here: protect the children.

On the surface, it sounds very reasonable. Who could possibly object to protecting children online? That's the point. It’s a rhetorical shield. Once those words are invoked, normal scrutiny evaporates. Any objection can be framed as suspicious, selfish, or immoral. The Overton window shifts just enough that measures which would otherwise be considered invasive suddenly become "reasonable."

This is not new. Moral panics have always been used to justify expansions of control. What's different now is the scale and permanence of the infrastructure being built.

Age-gating. Identity verification. Mandatory filtering. Surveillance-friendly moderation systems. Weakened encryption. All of it is justified with the same argument: this is necessary for safety. And yet, somehow, the actual harms being addressed remain vague, while the harms being introduced are concrete, structural, and long-lasting.

The Apple Encryption Case Was a Line in the Sand

When the UK government demanded that Apple weaken its end-to-end encryption, this should have been a watershed moment. Not because Apple is perfect or benevolent, but because the technical implications were unambiguous.

Apple’s response – to remove Advanced Data Protection entirely rather than comply – is quietly damning. It says: what you are asking for cannot coexist with real security. And they're right.

You cannot have end-to-end encryption with a backdoor. That is not a matter of opinion. That is how cryptography works.

The idea that you can create a "lawful access" mechanism that only the "good guys" can use is fantasy. Keys leak. Implementations are reverse-engineered. Vulnerabilities are discovered. Insider threats exist. Future governments exist. You don’t get to design systems around today's intentions and ignore tomorrow’s incentives.

If you deliberately weaken encryption, you are not balancing privacy and safety. You are choosing surveillance and hoping nothing goes wrong.
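To make the structural problem concrete, here is a toy sketch in Python. It is deliberately not real cryptography (the cipher is a throwaway XOR pad, and the "escrow key" arrangement is hypothetical), but it illustrates the shape of every "lawful access" proposal: a backdoored system must produce a second copy of each message readable with a key the sender does not control, so a single leak of that key retroactively exposes everything.

```python
import secrets

def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy one-time-pad-style cipher purely for illustration; not real crypto.
    return bytes(k ^ p for k, p in zip(key, plaintext))

xor_decrypt = xor_encrypt  # XOR is its own inverse

plaintext = b"private conversation"
recipient_key = secrets.token_bytes(len(plaintext))
escrow_key = secrets.token_bytes(len(plaintext))  # the mandated "lawful access" key

# A backdoored system must emit a second ciphertext readable by the escrow holder.
message = {
    "for_recipient": xor_encrypt(recipient_key, plaintext),
    "for_escrow": xor_encrypt(escrow_key, plaintext),
}

# The recipient reads their copy, as intended.
assert xor_decrypt(recipient_key, message["for_recipient"]) == plaintext

# But if the escrow key ever leaks -- breach, insider, future government --
# every message ever sent carries a copy readable with that one key.
leaked_key = escrow_key
assert xor_decrypt(leaked_key, message["for_escrow"]) == plaintext
```

The point is architectural, not cryptographic: no amount of policy around `escrow_key` changes the fact that it exists, and systems are compromised through what exists, not through what is permitted.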

For anyone who wants the specifics, the reporting on this was unambiguous. That alone should tell you how incompatible the demand was with basic security principles.

"Nothing to Hide" Is Not the Point – and Never Was

When I pushed back on this, I got the most infuriating response imaginable: "I don't have anything to hide."

That sentence needs to die.

Privacy is not about hiding wrongdoing. It is about autonomy. It is about power. It is about who gets to collect, analyse, store, and interpret information about you. Saying "I have nothing to hide" assumes a static world with a permanently benevolent authority that will always act proportionately, competently, and in good faith.

That world has never existed.

From a technical perspective, the argument is even worse. Security systems do not enforce morality. They enforce capability. Once the capability to access private data exists, it will be abused eventually. Not because everyone is evil, but because complex systems fail in predictable ways.

Weakening encryption expands the attack surface. It introduces new threat vectors. It guarantees future compromise. This is not paranoia – it is risk analysis.

Privacy Law for Everyone Except the Government

What makes all of this even more insulting is the simultaneous insistence that the UK takes privacy seriously.

GDPR. The Data Protection Act. Data minimisation. Purpose limitation. Confidentiality. These are held up as proof that we care about digital rights.

Until the government wants access.

Then, suddenly, privacy becomes conditional. Important, but not that important. Worth defending in principle, but dispensable in practice.

If a private company built a system that allowed third-party access to private communications "just in case", it would rightly be considered a massive violation of privacy law. When the government demands effectively the same thing, it's reframed as necessary oversight.

That isn’t a legal distinction. It's a power distinction.

"Just Go Offline" Is an Admission of Failure

At some point, the conversation always ends with the same suggestion: just go offline.

As if that's an option.

I rely on the internet for education, development work, communication, collaboration, and basic participation in society. I grew up online. My generation didn't adopt the internet – it was already there. Telling someone like me to go offline to avoid surveillance is like telling them to stop existing in public.

It's not advice. It’s an admission that the system is broken and no one wants to fix it.

Good security enables participation without disproportionate risk. It does not demand withdrawal.

The Internet I Actually Use

What’s especially frustrating is how disconnected these policies are from how the modern internet actually functions.

I use – and develop with – the AT Protocol. It's a decentralised, cryptographically verifiable system designed specifically to avoid single points of control. Identity, content, and trust are not owned by one company or one state. They're distributed, auditable, and portable. For context, this blog itself runs on the protocol.

In other words, it's an attempt to build the internet better.

And yet, while developers are working to reduce centralisation and abuse, governments are pushing policy in the opposite direction: towards mandatory identity, centralised databases, and architectures that assume surveillance as a default.

The contrast could not be clearer.

Age-Gating and the Surveillance Pipeline

Age verification is the perfect example of how this goes wrong.

In theory, it’s about protecting minors. In practice, it almost always means collecting identity data, storing it centrally, and linking it to online behaviour. From a security perspective, this is catastrophic risk concentration. You are creating honeypots of highly sensitive data and pretending that won't end badly.

We already know how this story ends. Databases leak. Regularly. Spectacularly.

And the hypocrisy doesn't stop there. The moment someone turns 18, they are suddenly bombarded with gambling, alcohol, cryptocurrency, and adult advertising. So clearly, this isn’t really about harm reduction. It’s about categorisation and monetisation.

"Protect the children" becomes the justification for building infrastructure that surveils everyone.

And while we're at it: why now? Why wait thirty-odd years to take such drastic action? I was already being slapped with adverts, and no one cared – but now, suddenly, they care? What a load of shite.

Orwell Was Not a Manual

Anyone who has read Nineteen Eighty-Four should find this trajectory deeply uncomfortable.

Not because we've suddenly arrived at totalitarianism overnight, but because the mechanisms are familiar. Constant justification. Moral framing. Gradual erosion of private space. Surveillance reframed as care.

The most chilling part of Orwell's world wasn't the technology. It was the normalisation. People adapted. They rationalised. They stopped questioning.

That is the real danger here.

Not a single law. Not a single demand. But the slow acceptance of the idea that privacy is suspicious, anonymity is dangerous, and surveillance is the default state of existence.

This Is Not About Hiding

So no. This is not about having something to hide.

It's about understanding how systems fail. It's about recognising that weakening security controls harms everyone, not just the people you're afraid of. It's about refusing to accept that the only way to be safe online is to be watched, identified, and categorised at all times.

I am not going to "go with the flow." And I am not going to "go offline."

I am going to keep building, keep learning, keep using systems that respect user autonomy, and keep calling this what it is: a deeply unserious approach to security, dressed up as morality.

The world decided to become much more privacy-hostile when it was my turn to be an adult. Utter bollocks.