Rely on Continuous Improvement, Not Biases, to Apply Cybersecurity Best Practices

Even with GDPR in place, high-profile breaches keep occurring. In the UK, the average cost of a data breach has grown to nearly £2.7 million, according to research by IBM. A prime example is the British Airways breach, in which half a million customer records were exposed when bad actors diverted traffic to a replica site and stole the data entered there.

When bad things happen to good companies, it’s a normal human impulse to look for one person to blame. This oversimplification creates a false sense of security and completely misses the issues that allowed the breach to succeed in the first place.
 
Three biases against human error 
In his book, The Field Guide to Understanding “Human Error,” Sidney Dekker contrasts two views of human error: the old view, in which human error is the cause of an incident, and the new view, in which it is a symptom of deeper trouble in the system. Human error is often targeted as the cause of incidents, but by chalking an incident up solely to human error, we stop investigating the other factors surrounding the person at the center of it: their environment, their employer’s policies, employee training, security controls, social requirements, political and workplace pressures, budget and resourcing decisions, and so on. These are the systemic factors.

In addition, people bring three prejudices or biases into any examination of errors and mistakes. The first is hindsight bias: our exaggerated, after-the-fact confidence that we could have predicted and prevented the unwanted outcome. The second is outcome bias: the worse the outcome, the more harshly we judge the people involved, regardless of how reasonable their decisions looked at the time.
  
Humans tend to focus on the last event in a series of events – that’s our third bias. How many times have we yelled at our TV when the striker misses from the penalty spot and our club loses the match? We blame the last event and forget all about the other missed passes, shots and giveaways. We forget about the team (systemic issues) and focus on the last event (the human error leading to the unwanted outcome).

Circling back to cybersecurity 
We bring these biases into cybersecurity investigations, too. In rare cases, the guilty party is caught and brought to justice. Sometimes it’s a careless or rogue employee who gets fired. Either way, the company blames the hacker or the employee – the last actor – implicitly exonerating its own actions and systems.

Should they be exonerated? What controls were put in place? How did the rogue employee conduct their nefarious business without detection? That’s where the focus needs to be.
 
Getting rid of the employee or bad actor will probably swing the organization’s priorities toward self-defense. Similarly, terminating the contract that bound a vendor to its client will likely shift the vendor’s focus to avoiding legal liability, making it less open and cooperative during the investigation.

In fact, in a study of 650 security and IT professionals, 44% said their companies had experienced a material breach caused by a vendor, and 49% of those terminated the vendor’s contract. Organizations should worry less about getting back at the culprit and more about protecting the business and ensuring the breach isn’t repeated.
 
Addressing the real issues
Now comes the true and difficult business question: how do we get past these innate biases and get to the root causes? In other words, how do we learn from our mistakes instead of continually repeating them?

The airline industry provides us with a great example. The UK’s Civil Aviation Authority (CAA) and its counterparts in other EU member states act as national agents for the European Aviation Safety Agency (EASA). EASA sets policy and regulation, while the national authorities investigate incidents and make recommendations, creating an ever-improving cycle of air travel safety.

When will we do the same for the ubiquitous and destructive financial and personal consequences of data breaches? This isn’t about blame. This is about continuous improvement and protecting consumers’ rights and privacy.

Conducting public inquiries into major security events has its own challenges, and there are plenty of barriers to overcome. No company wants its shortcomings spilled out on the floor of a public investigation, and its lawyers will fight to keep such liability-creating issues out of the light of day. It will take government legislation to mandate such investigations beyond what the Competition and Markets Authority does today.

The US Office of Civil Rights’ findings on the Equifax breach set out specific measures that the US government mandates the company adopt. What they don’t do is look at trends and correlate issues to develop simple-to-understand guidelines for companies to follow and for consumers to use in understanding their rights.

That would be a whole lot more useful, but change often comes slowly. In the UK, the Information Commissioner’s Office tends to take an outcomes- and compliance-based approach to implementing regulation rather than a rules-based one. It remains to be seen what will come of the ICO’s announced intention to fine British Airways for the 2018 cyber incident that compromised the data of some 500,000 customers.

Year after year of breaches involving millions of records – not to mention billions of pounds in losses – hasn’t prompted any massive transformation, so it’s hard to imagine what would. Changing the biases that make us short-sighted, focused on the perpetrator rather than our own cybersecurity shortcomings, is a place to start. A breach is never about just one person; there’s a whole system that needs to be examined.
