Why is ‘Attribution’ Still the Focus Following Cyber Attacks?


WannaCry, last year’s cyber-attack that caused significant disruption for the NHS and numerous other organizations worldwide, dominated global headlines as everyone tried to figure out who was behind it.

Following the outcry, Britain's National Cyber Security Centre (NCSC), which led the international investigation, stated that the WannaCry malware was the work of Lazarus – a group widely believed to have links to North Korea, and which many believe was responsible for the 2014 cyber-attack on Sony Pictures.

Following every large-scale cyber-attack and data breach, the immediate focus is generally on attribution. Yet why is there such widespread interest (not just within businesses) in talking about the ‘who’ when we should really be focused on fixing the problem?

Truly accurate attribution is, more often than not, an almost impossible and risky task, which is why it’s important for businesses to realize that the question of “who did it?” shouldn’t be at the top of their post-attack analysis and response list. Rather than getting hung up on attribution, network defenders examining an attack need to concentrate on what will actually help them defend against it in the future – namely, the capabilities and technologies behind it.

Malware reverse engineers can often identify indications of code origin through comparison of code segments. Lazarus Group and WannaCry are a good example of this: the Destover wiper component was identified as part of the WannaCry ransomware package – a very specific module that had been observed in other malware attacks also attributed to the Lazarus Group.
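To illustrate the general idea of code-segment comparison, here is a minimal Python sketch that scores the overlap between two byte sequences using n-gram Jaccard similarity. The byte strings and threshold are hypothetical placeholders; real analysts compare disassembled functions with far more sophisticated tooling, but the underlying principle of measuring shared code is the same.

```python
# Minimal sketch of code-segment similarity scoring, the general
# technique behind linking samples such as WannaCry and earlier
# Lazarus Group malware. All inputs below are hypothetical.

def ngrams(data: bytes, n: int = 4) -> set:
    """Split a byte sequence into a set of overlapping n-grams."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(seg_a: bytes, seg_b: bytes, n: int = 4) -> float:
    """Jaccard similarity between the n-gram sets of two code segments."""
    a, b = ngrams(seg_a, n), ngrams(seg_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical extracted segments from two samples under analysis
sample_new = bytes.fromhex("558bec83ec20538b5d08568b750c57")
sample_known = bytes.fromhex("558bec83ec20538b5d08568b750c56")

score = similarity(sample_new, sample_known)
print(f"Shared-code score: {score:.2f}")
if score > 0.8:  # threshold chosen purely for illustration
    print("Segments overlap strongly; worth deeper manual comparison")
```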

Yet, while such reverse engineers can achieve a high level of accuracy, there is always a necessary trust barrier to attribution, with numerous competing agendas among government departments, security vendors and independent malware researchers alike. Add to this the possibility of ‘false flags’ or copycat attacks, and it becomes clear why most good analysts won’t trust attribution claims when they are first announced.

This happened with TV5Monde and the Cyber Caliphate attack back in 2015. The attack was initially claimed in the name of ISIS, yet subsequent investigations pointed elsewhere, and Russia was later alleged to be the source of the attack.

Far more important than the search for attribution is gaining a better understanding of attackers' capabilities and the technologies behind a specific attack.

The reality is that there are usually far more pressing concerns to deal with before we get fixated on the “who”. There are still too many organizations and businesses out there that don’t have substantial patch coverage on their externally facing infrastructure.

Poor patching regimes make it reasonably easy for cyber threat actors to launch a low-equity attack against these organizations with a high probability of success, which is why such attacks still need to be on every business’s radar. At the same time, larger organizations with significant budgets must continue to innovate on behalf of the whole community to detect, prevent and share the details of more sophisticated attack vectors.
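As a concrete example of keeping low-equity attacks on the radar, the sketch below wraps nmap’s smb-vuln-ms17-010 script to check externally facing hosts for the unpatched SMB flaw that WannaCry exploited. The host list is hypothetical, and a real audit would be far broader; this is an illustration of the habit, not a complete solution. Only scan infrastructure you are authorized to test.

```python
# Minimal sketch: audit a list of externally facing hosts for the
# MS17-010 SMB vulnerability exploited by WannaCry, using nmap's
# smb-vuln-ms17-010 script. Host addresses are placeholders (RFC 5737).
import subprocess

HOSTS = ["203.0.113.10", "203.0.113.11"]  # hypothetical external hosts

for host in HOSTS:
    result = subprocess.run(
        ["nmap", "-p445", "--script", "smb-vuln-ms17-010", host],
        capture_output=True, text=True,
    )
    # The nmap script reports "State: VULNERABLE" for affected hosts
    if "VULNERABLE" in result.stdout:
        print(f"{host}: missing MS17-010 patch, exposed to low-equity attack")
    else:
        print(f"{host}: no MS17-010 finding")
```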

North Korea is a good case in point here. On multiple occasions, our obsession with “who” was behind an attack, rather than “how” the attackers carried it out, has left North Korea with a sandbox in which to deploy, test and refine the sophistication of its attack capabilities.

For businesses that have experienced an attack, it’s vital to focus on the capabilities and technologies involved, as that is really the only way we can learn how to defend against similar attacks in the future.
