How Much Visibility Do We Really Need?


Visibility into the IT environment is one of the most critical elements for maintaining a healthy and secure network. Whether it’s IT operations teams who want to streamline performance and decongest traffic, or security professionals who want to check for malware travelling into their networks and critical, sensitive data flowing out of them—the network holds the key.

Furthermore, the ability of cyber-criminals to sign their malware with legitimate certificates and hide it within encrypted traffic is a real, and frequent, headache for anyone involved in protecting an organization from hackers.

According to SDxCentral’s 2016 Next-gen Infrastructure Security report, lack of visibility is the largest security challenge for many, with 49% of respondents saying as much. The need for this kind of broad visibility becomes more pressing with every mega breach that might have been avoided if only the victim organization had been able to see more clearly into its network traffic.

With new breaches reported daily, those fears are legitimate, and the obvious answer to confronting them head on would be to gain as much visibility into traffic as possible. Yet that obvious answer might not be the best one, nor one we will necessarily be able to choose in the future.

In some ways, we are heading away from a visibility-first landscape toward a more privacy-focused one. Consider the advent of Perfect Forward Secrecy (PFS) and the potential fading of the time-tested RSA key exchange.

PFS handshakes prove troublesome for network monitoring, especially for analysis of payloads and transaction-level details. That takes a great deal of power away from IT teams who want to streamline performance and, quite understandably, protect their networks from data breaches.

The RSA cryptosystem, like all asymmetric key systems, requires that private keys be kept secret. If one of those private keys were somehow to slip away from its holder, an attacker who had recorded traffic could decrypt all prior communications protected by it.
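To make that risk concrete, here is a minimal sketch of a classic, non-PFS RSA key exchange in Python, using the third-party cryptography package; the key size and variable names are illustrative, not drawn from any particular deployment. Every session key is wrapped with the same long-lived public key, so whoever later obtains the matching private key can unwrap every session key they have recorded.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Long-lived server key pair: the single secret everything depends on.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Classic RSA key exchange: the client wraps a random session key with the
# server's public key and sends the wrapped blob over the wire.
session_key = os.urandom(32)
wrapped_on_the_wire = server_key.public_key().encrypt(session_key, oaep)

# Anyone who later obtains server_key can unwrap every recorded session key,
# and with it decrypt the corresponding traffic.
recovered = server_key.decrypt(wrapped_on_the_wire, oaep)
assert recovered == session_key
```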

PFS takes the capacity for error largely out of humanity’s clumsy hands. A unique session key is generated for each session, so even if an attacker were to get hold of a private key, they wouldn’t be able to decrypt any prior sessions, each of which was protected by its own ephemeral key.
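To contrast with the sketch above, here is a minimal sketch of the ephemeral elliptic-curve Diffie-Hellman exchange behind most PFS cipher suites, again using Python's cryptography package; the curve choice and the "session key" label are illustrative assumptions. Nothing long-lived goes into the session key, so there is no single secret whose later theft unlocks old sessions.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def new_session_key():
    # Both sides generate fresh ephemeral key pairs for this session only.
    client_eph = ec.generate_private_key(ec.SECP256R1())
    server_eph = ec.generate_private_key(ec.SECP256R1())

    # Each side combines its own private value with the peer's public value;
    # the shared secret itself never travels over the wire.
    shared = client_eph.exchange(ec.ECDH(), server_eph.public_key())

    # Derive the symmetric session key, then let the ephemeral keys be discarded.
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"illustrative session key").derive(shared)

# Two sessions, two unrelated keys: compromising one reveals nothing about the other.
assert new_session_key() != new_session_key()
```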

This is where data protectors and IT operations teams have different views. Attackers will not be able to decrypt your data, but then again neither will your own IT team. That proves problematic when you want to detect and identify anomalies: if your job is to ensure the smooth running of your network, not knowing what is contained in passing traffic is a real problem.

The more security-minded should think about whose visibility they’re really enabling here. The Heartbleed vulnerability ranks high as one of the most malignant bugs of recent memory. It was used to attack organizations around the world and across industries, from the Canada Revenue Agency to Tumblr.

Heartbleed exposed a variety of critical information, including username and password data, from vulnerable servers, but perhaps most importantly it exposed private keys. What’s more, an attacker who had been recording traffic to a particular target could decrypt all of it with the leaked private key they had just obtained.

PFS can help solve this problem. Because fresh keys are negotiated for each session and the ephemeral secrets never travel over the wire, there simply isn’t much for an attacker to steal.

But whether we like it or not, we’re still going to have to deal with it. PFS is increasingly being adopted by the tech industry’s larger players. Google, Twitter, WhatsApp, Facebook Messenger and the Wikimedia Foundation have been offering PFS for several years now, and as of the beginning of this year, Apple has required App Store apps to use PFS-supporting protocols. We expect others to follow; in fact, the big movers and norms within the tech industry are already changing.

In 2014, the Internet Engineering Task Force, working on TLS 1.3, decided to drop the static RSA key exchange and maintained that only PFS-supporting key exchanges would be allowed in the protocol’s new iteration.
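As a rough illustration of where that leaves a client today, the standard-library snippet below forces a TLS 1.3 handshake (the host name is just an example, assuming outbound access); by design, whatever cipher suite is negotiated under TLS 1.3 will have used an ephemeral, forward-secret key exchange.

```python
import socket
import ssl

HOST = "www.wikipedia.org"  # example host; any modern TLS 1.3 server will do

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything older

with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        name, version, bits = tls.cipher()
        # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256): key exchange was ephemeral
        print(version, name, bits)
```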

Even within visibility, we could soon be able to cut out much of the noise IT ops teams see. With the acceleration of machine learning, much of that inspection can be shifted to automated processes that model normal behavior and flag the anomalies. Decrypting the flagged traffic and inspecting it personally then becomes a case of gaining visibility if and when required.
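One hedged sketch of what that might look like, using scikit-learn's IsolationForest on hypothetical per-flow metadata (byte counts, duration, packet count); the feature values here are invented for illustration, not taken from real captures. The point is that the model only ever sees metadata, so nothing needs to be decrypted until something is flagged.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-flow metadata: [bytes out, bytes in, duration (s), packets].
# Synthetic "normal" traffic, generated purely for illustration.
rng = np.random.default_rng(0)
normal_flows = rng.normal(loc=[5_000, 50_000, 2.0, 60],
                          scale=[1_000, 10_000, 0.5, 15],
                          size=(1_000, 4))

# Learn what normal looks like from metadata alone; no payloads, no decryption.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# A flow sending far more than it receives stands out from the learned baseline.
suspect_flow = np.array([[900_000, 2_000, 30.0, 1_200]])
print(model.predict(suspect_flow))  # [-1] means: flag it, decrypt and inspect this one
```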

Who knows? In the near future, that trade-off between glaring visibility and encrypted opacity may not require choosing one over the other. 
