Computer Misuse Act Conundrums

Written by Dan Raywood

The Computer Misuse Act turns 30 next year, and some in the industry are calling for revisions to bring it up to speed with modern security research practices. However, as Dan Raywood finds out, the idea has not been embraced by all.

It was back in the mid-1980s that Robert Schifreen and Steve Gold, Infosecurity’s former contributing editor, managed to access BT’s Prestel interactive viewdata service. After Schifreen shoulder-surfed a simple username and password combination at a trade show, the pair accessed the personal message box of the Duke of Edinburgh, and they were charged under Section One of the Forgery and Counterfeiting Act 1981, accused of “defrauding BT by manufacturing a false instrument.”

The conviction was appealed on the grounds that the Forgery and Counterfeiting Act did not apply to the actions of Schifreen and Gold, and the resulting acquittal was upheld by the House of Lords, where Lord Brandon argued that the language of the Act was “not designed to fit” the facts and had “produced grave difficulties for both judge and jury which we would not wish to see repeated.”

Lord Brandon said that the access amounted to “dishonestly gaining access to the relevant Prestel data bank by a trick,” which was not a criminal offence. Two years after that ruling, the Computer Misuse Act 1990 (CMA) was introduced, described as “an Act to make provision for securing computer material against unauthorized access or modification; and for connected purposes.” It set out that a person is guilty of an offence if: they cause a computer to perform any function with intent to secure access to any program or data held in any computer; the access they intend to secure is unauthorized; and they know, at the time of causing the computer to perform the function, that that is the case.

The Act, as since amended, also states that a person is guilty of an offence if they knowingly do any unauthorized act in relation to a computer which causes, or creates a significant risk of, serious damage of a material kind, where they either intend to cause such damage or are reckless as to whether it is caused.

"The wider industry would be able to investigate further and gain more intelligence from interacting with compromised systems"

Time to Revise?
After revisions to the Act in 2006 and 2015, one company has set itself the task of pushing for further major changes to the CMA: NCC Group, where Katharina Sommer is head of public affairs and Ollie Whitehouse is CTO. NCC Group, admits Sommer, is struggling with the CMA and wants to make a change “so as to make vital threat intelligence commercially and ethically easier.”

Sommer says that a lot of the work NCC does “is hampered by the CMA” and that with revisions, “the wider industry would be able to investigate further and gain more intelligence from interacting with compromised systems.” She adds that while the legal advice NCC Group has received suggests the risk of prosecution is small, as a listed company NCC is obliged not to break the law.

She says that Section One of the CMA covers unauthorized access to computers, and that is what NCC is struggling with, because if you do not have “permission from the person who owns the thing [you are looking into] then you’re breaking the law.” This leads to problems with investigating botnets and criminal activity, Sommer explains, because “you’re not going to call cyber-criminals to get them to let you probe their infrastructure.

“The issue for security and vulnerability research within the CMA is about probing systems to find vulnerabilities, and the threat intelligence issue which is about exploiting vulnerabilities or weak configurations to gather intelligence and information about attackers and their targets.” 

Are You Keeping Up?
NCC’s argument is that the CMA has provided a workable legal framework since 1990, but that it is now time to revise it as “it has not kept up” with changes in cybersecurity.

Security practitioner Daniel Cuthbert was found guilty under the CMA after gaining unauthorized access to a charity website in 2004. He says he feels it is time for the CMA to evolve, as “it was badly written at the start.”

He agrees that society needs people to find bugs without the worry of prosecution, and 15 years after his case, “we still have a lot of public services that are vulnerable and a lot of people go after researchers when they find these bugs.”

Cuthbert also says there is a problem with how the term “intent” is defined within the CMA, arguing that there needs to be safe passage for researchers who are not interested in destroying people’s lives, but in making them better.

However, he does cast some doubt on how the CMA could be changed, as “it is all well and good to say we want to change it, but that is not how countries and laws work, as we need to get in front of a judge or the Home Office.” He says that the CMA was created as a “knee jerk reaction to what Gold and Schifreen had done, and no-one had considered how it would look from a legal perspective in the modern cyber-age. When Schifreen did his thing, there was no World Wide Web. When I did my thing, there was no Twitter – now the internet is such a critical part of our lives, and we need to have laws that reflect that.” 

"It is time for the CMA to evolve, as it was badly written at the start”

Case For the Opposition?
The proposal to review the CMA seems pretty solid, or so one may think. Speaking to Infosecurity, Schifreen says he initially had some contact with NCC Group about its revision strategy, and while he thought it was a good idea at first, once he looked at what NCC Group was suggesting he decided he “didn’t want anything to do with it,” as he did not believe the problems with the CMA were severe enough.

He argues that, while the CMA is not perfect, the Act allows anyone to carry out penetration testing on his or her own network, “and I don’t recall anything in the NCC’s long list being a problem in my opinion.”

Jen Ellis, vice-president of community and public affairs at Rapid7, tells Infosecurity that NCC Group is effectively asking for four things, and that some of them have issues. The first is authorization for port scanning.

“The CMA doesn’t define what consent means when you’re talking about technical systems,” Ellis says. “So with regards to port scanning, if a system is left exposed to the internet, does that indicate implied consent? So clarifying this situation is non-controversial, as is allowing the scanning, provided it does not cause harm.”
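To ground what is actually at stake in that question, the sketch below shows what a basic port scan involves at a technical level: attempting a TCP connection to each port and noting whether anything answers. It is a minimal, hypothetical illustration, not anything from NCC Group’s proposal or tooling, and the hostname and port list are placeholders; under the current CMA, even this kind of unsolicited connection attempt to a system you do not own sits in the grey area around consent that Ellis describes.

```python
# Illustrative sketch only: a minimal TCP "connect" scan of the kind under
# discussion. The target host and port list are hypothetical placeholders;
# scanning hosts you do not own or have permission to test may contravene the CMA.
import socket

TARGET = "scanme.example.org"  # hypothetical host, standing in for any internet-exposed system
PORTS = [22, 80, 443, 8080]    # a handful of common service ports

def tcp_connect_scan(host: str, ports: list[int], timeout: float = 2.0) -> dict[int, bool]:
    """Attempt a full TCP connection to each port and record whether it accepted."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            try:
                sock.connect((host, port))  # handshake completes only if the port is open
                results[port] = True
            except OSError:  # refused, timed out, or the name did not resolve
                results[port] = False
    return results

if __name__ == "__main__":
    for port, is_open in tcp_connect_scan(TARGET, PORTS).items():
        print(f"port {port}: {'open' if is_open else 'closed/filtered'}")
```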

The second thing that NCC Group is seeking is the ability to do a form of active defense by gaining permission to interrogate third-party assets implicated in security investigations. “It’s not entirely clear what is meant by ‘interrogate,’” Ellis claims. “If it’s beaconing, for example, the issue would be more about potential privacy violations than actual harm.” However, more intrusive forms of interrogation could potentially cause harm. “The bottom line here is if my mum’s computer is part of a botnet that is used in an attack, why does the investigating security company have a right to re-victimize my mum?”

Ellis argues that this seems to be the greyest area of NCC Group’s proposal, with the “least clear feeling on whether to support it.”

The third issue is permission for “light-touch interaction” with third-party assets, in effect a friendlier term for ‘hacking back.’ “It is extremely challenging to come up with good legal parameters to make this practical and avoid harm,” says Ellis.

The fourth thing that NCC Group is proposing is that security practitioners should be approved or certified to do this type of work. “The challenge is,” Ellis continues, “in all of the time I’ve heard hack back conversations, nobody has come up with a realistic framework for how this could work on an operational or legal level – who owns the certification process? How do you ensure skills and ethical standards are met and maintained? How does this impact legal liability should something go wrong?”

Ellis adds: “Today, if law enforcement wants to go and investigate or interrogate something/someone, there is a whole legal framework on how they can do that, including having to get warrants and meet oversight requirements. What the proposal is saying is ‘give us authorization to do this type of work and we will just figure it out’. The reality is that is not OK.”

Ellis argues that, overall, the CMA is not a terrible thing, as it is relatively clear and focused on the right things, and not overly heavy-handed. The main challenges are the lack of clarity around what consent means on the internet, and the lack of acknowledgement of the importance of security research. “However, there is a line between supporting good faith research, and authorizing hacking back. We should not cross that line.”

In response, Sommer says that the revision of the CMA is intended to clarify the grey area around port scanning, which “would help researchers to scan, probe and enumerate internet-exposed hosts and services in a way that allows issues to quickly be resolved.”
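For a sense of what “scan, probe and enumerate” can mean in practice, the hedged sketch below grabs the banner that many internet-exposed services volunteer on connection, which is often enough to identify a vulnerable software version without any exploitation. The hostname is a hypothetical placeholder, and the example is an illustration of the general technique rather than anything taken from NCC Group’s proposal.

```python
# Illustrative sketch only: "enumerating" an internet-exposed service often means
# reading whatever banner it announces (e.g. an SSH or SMTP version string) so that
# vulnerable versions can be identified and reported. The host below is a hypothetical
# placeholder; probing systems without authorization may still fall foul of the CMA.
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to a service and return the first bytes it sends unprompted, if any."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(256).decode(errors="replace").strip()
        except socket.timeout:
            return ""  # the service accepted the connection but sent nothing unprompted

if __name__ == "__main__":
    # SSH servers, for example, announce their version string immediately after the handshake.
    print(grab_banner("ssh.example.org", 22))  # hypothetical host
```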

As for seeking permission to scan third-party assets that have been implicated in security investigations, such as PCs tied up in botnets, Sommer believes that, following extensive consultation with industry and legal experts, the main challenge associated with seeking additional authorization to scan third-party assets “is the administrative burden and associated delay in response and action.

“This severely limits the ability of specialists across the industry to provide high value services to clients. Therefore, giving the industry permission to scan ancillary networks, where they interface with in-scope systems for which authorization exists, would be beneficial,” she says.

As for the claims regarding NCC Group wanting permission to do “light touch interrogation” of third-party assets caught in security incidents, Sommer retorts that “vital investigations and threat intelligence work are often halted by the risk of contravening the CMA, which is why NCC Group would welcome revisions that allow researchers to analyze compromised systems.” She insists that this would be about allowing researchers to obtain information through known paths, rather than hacking back.

These suggestions are by no means definitive though, she admits. “These have stemmed from industry advice and recommendations, and offer new ways of tackling cybercrime. That’s why we’re working with academics and legal experts to form a suitable blueprint to make the CMA fit for purpose now and in the future.”

"There is a line between supporting good faith research, and authorizing hacking back"

The next stage for the CMA would be a legal and political review, although Sommer concedes that, because of the political situation over Brexit, the issue has been pushed down the priority list, and so she is not expecting any progress for the next year or so.

However, in a world that has accepted the seriousness of cybersecurity issues, and with politicians and regulators now taking notice of security research, perhaps moving away from charging researchers with crimes under legislation written almost 30 years ago would represent a step forward.
