In Search of an Ethical Code for Cybersecurity

Written by Danny Bradbury

Ethics plays a big part in many lines of work, but what about information security? Danny Bradbury explores whether those working in cybersecurity should have a code of conduct too.

Ethics plays a big part in many lines of work. Doctors can be struck off for violating codes of conduct, and lawyers can be disbarred. Journalism, too, has its own overlapping ethical codes.

Shouldn’t those working in cybersecurity have a code of conduct too? The stakes are rising in this field, as attackers and defenders alike gain increasing power over our systems and data. Forty years ago, most work was still done manually and desktop computers were a hobbyist pursuit; cybersecurity mattered far less. Now, attackers can electronically gain control of everything from our homes to our pacemakers.

Cybersecurity practitioners and researchers are working at the cutting edge of a highly adversarial industry, and dealing with a range of thorny problems.

Is it OK to disclose a vulnerability before a vendor has had a chance to patch it? Is it permissible for a botnet researcher to take control of a compromised machine for the greater good, or to disrupt its command and control structure? Should we buy exploits from cyber-criminals and make them available so the public can protect itself? What about striking back against attackers who are stealing our data and wrecking our systems?

Each of these questions is complex, with a lot to unpack, and some of them come up regularly in practice. Google drew flak from Microsoft last October for publicizing a local privilege escalation bug in Microsoft Windows. Redmond said that the disclosure endangered customers. Google had angered Microsoft before, and has since published information about more zero-day vulnerabilities in its rival’s systems.

The guidelines for disclosure are either unilateral (Google has its own) or de facto. The pseudonymous hacker Rain Forest Puppy published a responsible disclosure policy, but it has never been officially ratified.

"People are doing some things that are taking too much risk without really being able to justify it"

As to buying exploits from criminals, Matthew Hickey has lots to say. The co-founder and director of cybersecurity services and training company Hacker House recently set up a crowdfunding campaign to purchase vulnerabilities from anonymous hacking group the Shadow Brokers. He promptly closed the campaign before making any purchases, citing legal concerns.

Although legal worries stopped his initiative, Hickey still stands behind the general concept. “What is better: a tool only your adversary knows about, or the one everyone including your adversary knows about?” he asks. “If we do not have access to data that criminals have access to, then we are put at a significant disadvantage when trying to defend networks.”

There are several best practice documents in specific areas of cybersecurity, says Hickey. He gives an example of the Hippocratic Oath for Connected Devices, created by I Am the Cavalry, a grassroots organization focused on issues where computer security intersects public safety and human life. The Oath is a set of cybersecurity guidelines for security and technology professionals in the medical device world.

What about an even broader set of guidelines covering cybersecurity practice and research?

“I would be a supporter of a code of conduct if it was not used to enact further legislation. If anything, technology needs fewer restrictions instead of more cumbersome laws which hinder innovation and prevent growth in the marketplace,” Hickey argues.

“As someone who develops exploits and attack tools for lawful use in engagements, it’s important to understand my responsibilities and options as well as what best can be done to ensure those tools don’t cause further harm”, he adds.

Ethics Versus the Law

When it comes to cybersecurity, Sandra Braman, Abbott professor of liberal arts and professor of communication at Texas A&M University, draws a distinction between ethics and the law.

“Codes of ethics are associated with professionalism; responsibilities of people who are members of specific professions and for whom accreditation by that profession requires signing on to, and adhering to, codes of ethics,” she explains, adding that the law will take cybersecurity practitioners further than ethics. “In the cybersecurity domain, such professions don’t exist in the sense that such a wide range of types of jobs, skills and people are involved.”

Some members of Congress hope that the law can address one of the stickier problems in cybersecurity: hacking back.

Today, US law simply forbids private organizations from tampering with each other’s networks – even if one of them is attacking the other. Companies are getting frustrated about that, though. Attackers are hitting companies harder than ever before, stealing their customer records, disrupting their operations and blackmailing them by holding their data to ransom or threatening to publish it.

Tom Graves, a Republican congressman from Georgia, has proposed a bill that would let companies take limited actions against attackers in cyberspace. The Active Cyber Defense Certainty Act would give private companies legal immunity in some cases, should they choose to strike back against online attackers in certain ways.

The bill would amend Title 18 of the United States Code (the main federal criminal code), making it possible to use ‘attributional technology’ – code that spies on attackers. It would also let victims access an attacker’s network to prove their criminal activity, monitor their behavior and disrupt unauthorized activity against the victim’s own network.

Lawmakers have proposed similar things in the past, says Dave Dittrich, cybersecurity researcher at the University of Washington Tacoma’s Center for Data Science, but “this is the closest it’s been so far to an actual bill being submitted to Congress.”

Dittrich believes that companies may already need to step outside the existing law to protect themselves in cyberspace. He points to the idea of prosecutorial discretion – where prosecutors can decide to adjust charges and sentencing, or decline to prosecute at all, based on the circumstances.

“I believe there are times when people need to do things that violate the law or may be in a grey area, but when doing so they should know what they’re doing”, Dittrich explains. What would they be doing? He describes active defense as a continuum, starting with the least harmful actions (passively gathering information). From there, it escalates through infiltration, manipulation, takeover, takedown and complete eradication.

“They should be able to justify it, stand up in a court of law and say ‘this is what I did, this is how I made sure that I was doing the best thing possible with the least amount of harm.’” This is where ethics – principles of conduct governing an individual or group – does come into play. Yet there’s no code covering the ethics of hacking back in private organizations.

The lack of an ethical framework, combined with companies going outside the law, leads to some dangerous situations. “There’s no standard mechanism for evaluating what you’re going to do,” he says. “In my view, people are doing some things that are taking too much risk without really being able to justify it, or knowing that there are other options that could achieve the same goal with less risk.”

One example of dangerous activity would be altering files on a computer that was used to launch an attack but is owned by an innocent third party. Imagine if the alteration inadvertently crashed a computer responsible for keeping a medical patient alive. At the very least, private companies considering moving along the active defense spectrum should report the attacks against them to the authorities before taking action, Dittrich says.

The ACDC bill requires victims to notify law enforcement prior to taking action. The notification requirements are extensive. Victims must describe the breach they suffered, the intended target of their countermeasures, how they will preserve evidence of the attacker’s intrusion and the steps they will take to prevent damage to intermediary computers.

"Technology needs fewer restrictions instead of more cumbersome laws which hinder innovation and prevent growth in the marketplace"

From Civilian to Military

Whereas legal rules prohibit private sector companies from manipulating anyone’s network without authorization, things are more permissive in military circles. Braman draws on the idea of proportional response, which is well-established in that world.

“If a botnet is used by a foreign party to, say, disrupt a nation’s financial system, under international law it is permissible to engage in a proportional counterattack – if you can figure out who is responsible”, she says.

The bible for rules of engagement in cyber-warfare is the Tallinn Manual (recently published in its second edition), a summation of international law as applied to cyberspace.

Not everyone believes that summarizing and interpreting international law is enough, though. George Lucas, the Stockdale chair in ethics at the Naval War College, says that it’s difficult to get buy-in from all stakeholders for such efforts, especially when cyber-threats and attacks are evolving so quickly. In the past, Chinese academics participating in the Tallinn 1.0 process criticized it for being too western-centric.

The Tallinn Manual is a personal interpretation of the law by legal experts rather than a binding document representing the views of nation states. Beyond it, Lucas calls for a more basic consensus on no-go areas for inter-state cyber-attacks.

“Before you write any legislation or treaties, you’re trying to work out what these new norms of responsible behavior would be. Mostly that’s voluntary compliance, or best practices,” he says. “Even if it sounds weak and mushy, it’s much easier to get buy-in on those things and build a scaffold for a regulatory environment than it is to start with trying to write a lot of regulations.”

He’s talking, in short, about what lawyers call ‘soft law’ – or ethics.

The picture around ethics in cybersecurity shifts depending on who’s involved, and what they’re doing. The civilian and military domains each have separate frameworks into which these discussions must fit. Similarly, things morph when focusing on practice versus research.

The primary reason that there is no single code of conduct to bind them all is that cybersecurity is a vast, constantly evolving discipline with many moving parts. The industry is tackling these problems on a piecemeal basis – and that will probably continue for years to come.
