Laws that undermine security

The Draft Investigatory Powers Bill consolidates several previous laws and makes more explicit what the UK Government’s security services can do. That’s a good thing.

However, there are two areas where I think the bill will make life worse for us (UK citizens):

  1. Undermining our security, for everyday people and UK businesses. It is not entirely clear from (trying to) read the bill what exactly is required, but expert witnesses believe it would require companies to create back-doors for the Government.
  2. Mass surveillance: the Government wants to keep a record of every website you’ve visited. Mass surveillance has a chilling effect on us all, especially on vulnerable groups. I’m not going to cover that here, but there are good reasons why ordinary people would not want their browsing data retained.

I wanted a roundup of the security issues that my friends and family could understand, so I’ve tried to write one.

Undermining our security

Encryption is binary: on or off. A system is either secure (to the best of anyone’s knowledge) or it is broken and has to be fixed. Getting security right is hard (see any of the major hacks of the last few years); it is an ongoing process of closing holes in your armour, and any single hole can have devastating consequences for people and businesses.

The plain truth is that online security relies on cryptography, and cryptography depends on a set of mathematical relationships that cannot be subverted selectively.

If you put in a backdoor for one person, it is there for anyone.
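
To make that concrete, here is a minimal sketch of what a key-escrow style backdoor amounts to (in Python, using the cryptography library; the scheme and names are illustrative assumptions, not anything the bill actually specifies). The “government key” is just another key, and anyone who obtains it can decrypt everything:

    # Illustrative key-escrow scheme: the data key is wrapped twice, once for
    # the recipient and once for the mandated "escrow" key. Anyone holding the
    # escrow key (government, insider, or thief) can read every message.
    from cryptography.fernet import Fernet

    recipient_key = Fernet.generate_key()   # the intended recipient's key
    escrow_key = Fernet.generate_key()      # the "government access" key

    def encrypt_with_escrow(message: bytes) -> dict:
        data_key = Fernet.generate_key()    # fresh per-message key
        return {
            "ciphertext": Fernet(data_key).encrypt(message),
            "wrapped_for_recipient": Fernet(recipient_key).encrypt(data_key),
            "wrapped_for_escrow": Fernet(escrow_key).encrypt(data_key),
        }

    def decrypt(envelope: dict, key: bytes, wrapped_field: str) -> bytes:
        data_key = Fernet(key).decrypt(envelope[wrapped_field])
        return Fernet(data_key).decrypt(envelope["ciphertext"])

    envelope = encrypt_with_escrow(b"meet at noon")
    print(decrypt(envelope, recipient_key, "wrapped_for_recipient"))  # the intended reader
    print(decrypt(envelope, escrow_key, "wrapped_for_escrow"))        # anyone with the escrow key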

Some of the top experts in the security world wrote a paper called “Mandating insecurity by requiring government access to all data and communications”. The NSA tried to create a general back-door 20 years ago. It failed then, and it would be monumentally harder to do now.

A former director of the NSA and others have said “We believe that the greater public good is a secure communications infrastructure protected by ubiquitous encryption at the device, server and enterprise level without building in means for government monitoring.” I agree.

This isn’t just theoretical: at work we recently had to upgrade our main firewall because a ‘state level’ hack had put a backdoor in it (the Juniper hack). Whoever put it in, the US and UK security services knew of it (or another like it) and did nothing except take advantage of it. That’s three years our firewall wasn’t doing its job properly. Once the backdoor was known, it took researchers just six hours to reverse engineer it and recover the password. Criminals would be able to do the same thing.
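
As a rough illustration of why six hours is plausible: a first step when hunting for a hardcoded credential is often just scanning the firmware image for printable strings and flagging anything that looks authentication-related. A minimal sketch (the filename and keyword list are assumptions for illustration):

    # Minimal sketch: pull printable ASCII runs out of a firmware image
    # (roughly what the `strings` tool does) and flag credential-looking ones.
    # "firmware.bin" is a placeholder path; the keywords are illustrative.
    import re

    SUSPICIOUS = ("passw", "auth", "admin", "secret", "login")

    with open("firmware.bin", "rb") as f:
        blob = f.read()

    for match in re.finditer(rb"[\x20-\x7e]{6,}", blob):
        text = match.group().decode("ascii")
        if any(word in text.lower() for word in SUSPICIOUS):
            print(f"offset {match.start():#x}: {text}")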

The pernicious aspect of the current and proposed laws is that it is in the security services’ interest to keep using these vulnerabilities, even at the cost of businesses and other parts of the Government being hacked.

There are hackers using malicious software to disrupt power supplies and cause other real-world damage, possibly gaining access via Office software. We need to improve the general level of security for everyone, as any back door (intentional or accidental) is a method of access.

It is also worth noting that the bill makes investigating security issues much harder: if you find a vulnerability in some software, you might go to jail for responsibly disclosing it, should it turn out to have been created by the Government. This is madness. Security researchers have a hard enough time as it is; we need to be helping them, not threatening them with prison.

If the law undermines security, it is doing us a disservice.

“But we need to catch terrorists” is generally the reply, but that argument is so flawed as to be laughable to anyone with a clue about how this stuff works.

Ineffective

Assume the bill passes (as a similar French law did months before the Paris attacks) and the Government has the ability to require technology companies in the UK to provide access, via back doors or other means.

If you are plotting something bad you would:

  • Stop using UK-based services. It is the internet; you have your choice of providers from all over the world.
  • Use an app that provides end-to-end encryption (not from the big names like Facebook/Google/Apple), where you share encryption keys directly with your fellow plotters (see the sketch after this list).
  • Build your own app, for example “Al Qaeda has used its own encryption programs since 2007.”
  • If you are UK-based, use a VPN (widely used by businesses to allow employees to access corporate networks) so that your ISP can’t tell what sites you visit.
  • Hide in plain sight, such as when “al-Qaeda hid secret documents in a porn video”.
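
For the end-to-end encryption point above, here is a minimal sketch using the PyNaCl library: the two parties exchange public keys directly, and whoever relays the ciphertext (an app provider, an ISP, a mandated intercept) never holds anything that can decrypt it. The names are illustrative.

    # Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
    # Only public keys are ever shared; the relay in the middle sees ciphertext.
    from nacl.public import PrivateKey, Box

    alice_secret = PrivateKey.generate()
    bob_secret = PrivateKey.generate()

    # Each side shares only its *public* key with the other.
    alice_box = Box(alice_secret, bob_secret.public_key)
    bob_box = Box(bob_secret, alice_secret.public_key)

    ciphertext = alice_box.encrypt(b"meet at the usual place")
    print(bob_box.decrypt(ciphertext))  # b'meet at the usual place'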

Encryption cannot be the problem.

The Government’s reply to a recent petition on encryption indicated it will not seek to ban encryption, but will ask mainstream services to enable access to their customers’ communications. That effectively bans mainstream providers from using full end-to-end encryption, and therefore means “privacy for the criminals, but not for ordinary, law abiding citizens”, as criminals can simply avoid those providers.

Terrorists have known about encryption since the ’90s; the Paris attacks didn’t even use encryption, and still weren’t detected. In general, jihadists are making their plans in public, so encryption really can’t be the problem.

All of the recent terrorist attacks have been perpetrated by people already known to the intelligence agencies, but there wasn’t the manpower to keep tabs on these targets.

The former technical director of the NSA’s Analytic Services Office, William Binney said “Bulk data over-collection from internet and telephony networks undermines security and has consistently resulted in loss of life in my country and elsewhere, from the 9/11 attacks to date… because it inundates analysts with too much data. It is 99 per cent useless. Who wants to know everyone who has ever looked at Google or the BBC? We have known for decades that that swamps analysts.”

The evidence given by the Home Secretary fleshes out some of the scenarios that bulk collection has helped with, but when you also read the full evidence from Binney, the two are orthogonal. It appears (though it’s hard to tell from the top-line nature of the evidence) that targeted collection would also have fulfilled the scenarios outlined, without overwhelming the investigations.

Another concern is that mass surveillance data would be used to identify terrorists algorithmically. You might not think that’s a problem, but it would inevitably suffer from the False Positive Paradox. When you have a very small proportion of actual targets (e.g. 2,000 terrorists in a population of 66 million, or 0.003%), you are far more likely to flag non-targets than real targets.
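
A quick worked version of the paradox, using the numbers above (the 99% accuracy figure is an assumption for illustration, and is far more generous than any real classifier would manage):

    # False Positive Paradox, worked through with the numbers above.
    # The accuracy and false-positive rates are illustrative assumptions.
    population = 66_000_000
    terrorists = 2_000

    true_positive_rate = 0.99    # catches 99% of real targets
    false_positive_rate = 0.01   # wrongly flags 1% of everyone else

    flagged_real = terrorists * true_positive_rate                       # ~1,980
    flagged_innocent = (population - terrorists) * false_positive_rate   # ~660,000

    precision = flagged_real / (flagged_real + flagged_innocent)
    print(f"People flagged: {flagged_real + flagged_innocent:,.0f}")
    print(f"Chance a flagged person is a real target: {precision:.1%}")  # ~0.3%

Even with that implausibly good classifier, roughly 330 innocent people are flagged for every real target.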

Thousands of people have been killed by drone strikes in Pakistan since 2004, and the top target identified by the NSA’s algorithm was a reporter who travelled to the region to interview people (see The NSA’s SKYNET program may be killing thousands of innocent people).

This bill will cause millions (or perhaps billions) of pounds to be spent adding more hay to the stack, rather than on magnets for the needles.

What to do

To be clear, I am all for targeted surveillance and the hacking of specific people’s devices and software. It just shouldn’t harm everyone else in the process.

Targeted surveillance: the recommendation from William Binney (formerly of the NSA) is to target surveillance as the data is captured: “use social networks as defined by metadata relationships and some additional rules to smartly select data from the tens of terabytes flowing by… This focused data collected around known targets plus potential developmental targets and represented a much smaller set of content for analysts to look through. This makes the content problem more manageable and optimizes the probability of analysts succeeding.”
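
A minimal sketch of the kind of selection Binney describes: start from known targets and keep only the communications of people within a couple of metadata hops of them, rather than retaining everything. The contact graph and the two-hop limit are assumptions for illustration.

    # Minimal sketch: select only the communications of people within two
    # metadata "hops" of known targets, instead of retaining everything.
    from collections import deque

    contacts = {                      # who communicates with whom (metadata only)
        "target_a": {"x", "y"},
        "x": {"target_a", "z"},
        "y": {"target_a"},
        "z": {"x", "someone_unrelated"},
        "someone_unrelated": {"z"},
    }
    known_targets = {"target_a"}
    MAX_HOPS = 2

    def selected_for_collection(graph, seeds, max_hops):
        """Breadth-first walk outwards from the seeds, up to max_hops."""
        selected, frontier = set(seeds), deque((s, 0) for s in seeds)
        while frontier:
            person, hops = frontier.popleft()
            if hops == max_hops:
                continue
            for contact in graph.get(person, ()):
                if contact not in selected:
                    selected.add(contact)
                    frontier.append((contact, hops + 1))
        return selected

    print(selected_for_collection(contacts, known_targets, MAX_HOPS))
    # Everyone else's traffic is never collected in the first place.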

Change the general dynamic from secretive offence to open defence.

So what are all these vulnerabilities doing in a secret stash of NSA code that was stolen in 2013? Assuming the Russians were the ones who did the stealing, how many US companies did they hack with these vulnerabilities?

Rather than undermining our security, the Government should be doing everything it can to make citizens, businesses, infrastructure and the Government itself as secure as possible. The US and UK services should stop hoarding security vulnerabilities that affect us all and should disclose them to the vendors so they get fixed.

The CIA was recently found to be hoarding vulnerabilities too; it even had a programme to collect known vulnerabilities that other states had used, in order to misdirect blame. So it knew these issues were in active use by other people.

We know the same security holes get found by different people:

“rediscovery happens far more often than previously estimated. Between 15% and 20% of all vulnerabilities in browsers have at least one duplicate. For data available on Android between 2015 and 2016, 22% of vulnerabilities are rediscovered at least once an average of 2 months after their original disclosure.”

Security research should be encouraged and businesses should be educated. This is a good start in the US, and the NCSC is a good start in the UK. However, the US’s Vulnerabilities Equities Process should be more transparent and more engaged than it appears to be, and the UK should have an equivalent process.

Overall, the best security possible should be built into all our products and services by default, as a matter of policy. The web standards organisation, the W3C, supports the pervasive use of strong end-to-end encryption for web communications. Perhaps bad guys will be able to communicate in secret (as they do now, with and without encryption), but at least criminals from around the world won’t be able to snoop on and steal from us.

One approach to encouraging security research would be to create a specific legal exception for it (like this request, triggered by copyright issues impinging on research).

I’d even go as far as creating a national (or international) research institute for solving security issues:

“I think we need to have a larger-scale response to the problems of the Internet. It has been a tremendous boon to our society… but it’s got some problems that we’re not just going to guilt people out of… Get that data out there and try to respond to it. This is not the first time we’ve had problems in an important tech, and it won’t be the last time, but let’s actually work on it.”

Enable better international sharing between national security services (and not just the US/UK/Canada/Australia/New Zealand). The internet is borderless and terrorists and criminals can move around, so why should intelligence about them be marooned in particular countries? That means “making it easier for foreign governments to get data when that access is justified and harder when it is not.”

Above all, don’t let terrorists win by creating a culture of fear.


Scrutinizing the draft Investigatory Powers Bill goes into a lot of detail on the issues raised by the committees that examined the evidence put forward; it is a good next step for a deep dive.

Update: the UK Government has started testing the surveillance systems.

Other Governments

I’m going to compile any references I see to other governments discussing this issue here: