The Computer Misuse Act (CMA) turns 30 today. Critics say it has far outlived its purpose, with section 1 blanket-criminalising security researchers and undermining the ability of security teams to conduct threat scanning. That, in turn, is putting businesses at greater risk of attack, they warn.
Now, an eclectic coalition drawn from across the UK’s multi-billion-pound tech sector, including businesses, think tanks and industry consortia, has written to Prime Minister Boris Johnson urging him to reform the legislation – warning that it is no longer fit for purpose in today’s world.
Signatories to the letter include industry group techUK, security firms F-Secure, NCC Group and Digital Shadows, international accreditation body CREST, the think tank Demos, and several prominent lawyers. (Their letter today builds on a report urging CMA reform that they published in January 2020.)
Computer Misuse Act at 30: Old before its time?
The Computer Misuse Act (1990) was written to “prevent computer hacking before the concept of cyber security existed”, they say (just 0.5% of the population used the Internet when the Act was given Royal Assent).
The campaigners warn that restrictions in the legislation deter “a large proportion of the research [needed to] assess and defend against emerging threats posed by organised criminals and geo-political actors”.
The 1990 legislation opens with a blanket prohibition:

(1) A person is guilty of an offence if –
(a) he causes a computer to perform any function with intent to secure access to any program or data held in any computer;
(b) the access he intends to secure is unauthorised.
Yet in today’s heavily networked world, security researchers frequently probe the computers and networks of third parties to assess their vulnerability: there is a whole cottage industry of bug bounty hunters, and a grey market of so-called 0day (previously unseen vulnerability) sellers, dedicated to precisely this.
As Ollie Whitehouse, Global CTO, NCC Group, tells Computer Business Review: “[The CMA] criminalises any access to a computer system without permission of the system owner. Threat intelligence and security researchers, by the very nature of the work they are undertaking, are often unable to obtain that permission: a threat intelligence researcher investigating a cybercriminal’s attack infrastructure will be hard pressed to obtain that criminal’s consent to try and catch them. [The law] completely ignores the fact that there are ethical researchers undertaking research activities in good faith.”
That’s just section 1. Section 3A, meanwhile, targets anyone who “makes, adapts, supplies or offers to supply any article intending it to be used to commit, or to assist in the commission of, an offence under section 1”.
As the January 2020 report urging CMA reform notes: “The aim of section 3A was to find an additional means of punishing hostile attackers by looking at the tools that they use. The main problem in drafting the legislation was that code and tools used by hackers are either identical to or very similar to code and tools used legitimately by computer and network systems administrators and by penetration testers.”
NCC Group’s Whitehouse adds: “The law needs to be changed to allow for actors’ motivations to be taken into account when judging their actions. The way to do this, we believe, is to include statutory defences in a reformed CMA that legitimise activities otherwise illegal under section 1 where they happen in order to detect and prevent (cyber) crime.
“There are legal precedents, including in the Data Protection Act 2018, so this isn’t a novel concept. But it would extend legal certainties and protections guaranteed to others to the UK’s cyber defenders.”
The campaign aims to build on earlier work by the Criminal Law Reform Now Network (CLRNN) on the same subject. The CLRNN report, published on 22 January 2020, notes that it is strikingly difficult to get precise figures on CMA prosecutions, but puts the total at approximately 500 since 1990. Campaigners say that despite the comparatively low prosecution figures, the deterrent effect of the legislation – which is well known in the security community – remains deeply damaging.
They noted in the January report that, under current law, “law enforcement and the NCSC (National Cyber Security Centre), which is part of GCHQ and inherits its powers under section 10 of the CMA 1990, Part 5 of the Investigatory Powers Act 2016 and section 3 [of the] Intelligence Services Act 1994, appear to be the only UK bodies that can carry out threat intelligence beyond a corporate boundary”.
Ed Parsons, MD at F-Secure Consulting, says: “We also need to protect security professionals involved in research on common technologies targeted by cybercriminals looking to launch indiscriminate attacks at scale.”
He adds: “The CMA in its current form doesn’t provide an effective defence for cybersecurity professionals acting in good faith, whether involved in technical research, incident response or threat intelligence. It limits what the UK computing industry can do compared with foreign competitors, including our ability to provide support to national security and law enforcement authorities through proportionate investigation of attacker infrastructure.”
The ‘CyberUp’ campaign, which has published today’s letter, is proposing a raft of changes to the legislation, including “exploring options to create a regime of approval and accreditation of eligible providers, signing of an individually applicable strict ethics code of conduct, a commitment to maintain and share auditable logs of all activities, and an obligation to pass on all intelligence and information to the appropriate authorities.”
Meanwhile, the campaign warns, the kind of research that could result in more robust security across organisations is being stymied. The online world has changed immeasurably since 1990: it is time the legislation caught up.