The Next Big Lesson for Security: Context is King


Imagine if every time someone saw a person carrying packages out of a house, they assumed it was a robbery and called the police. The police department would be overwhelmed by false alarms that demand a response. It would be completely unsustainable. So why is it acceptable that the security world is full of low-confidence reporting? 

According to EMA, 92% of organizations receive up to 500 events per day. This is low-confidence reporting at its apex. Considering the average analyst can only investigate about ten of these per day (on the high end), this rate is clearly unmanageable. 

More recently, organizations have been working to filter these alerts, targeting only the most critical warnings they receive each day. There are a few problems with this approach. First, in the same report, EMA noted that 88% of respondents still said they were receiving up to 500 severe/critical alerts per day. The alerts may have been categorized differently, but the volume was still unsustainable.

Second, narrowing your field of vision to immediate threats doesn't address mid- and lower-tier threats, which may not be as urgent but can still easily cause a security incident. Finally, and perhaps most importantly, prioritizing threats doesn't get at the underlying problem: our urgent need for better context.

Today we can see a more complete picture than ever before of the network activity, devices and people that make up our environments. Yet we still spew half-baked, low-confidence alerts in the name of speed. Ironically, the result is the opposite: faster alerts pile up on the desks of human analysts, who can't match the speed of their automated counterparts.

What we need is higher-fidelity alerts, contextualized across multiple facets of a system. Alerts that look at an event (whether on the network, on an endpoint or in a user interaction) in isolation lack context and are therefore far less reliable.

For example, if you were alerted every time a new device entered the network, you'd be pinged every time a partner or customer visited the office with a computer or a phone, every time an employee brought their iPad from home, and every time someone logged off and back on with a new IP address. The volume would be immense.

If you were alerted every time someone sent data out of the network, that volume would be unmanageable as well. However, if a new device enters the network and starts transferring large amounts of data to China, that's something worth a second look. In isolation, these events aren't particularly interesting, but when you start making those connections, the larger picture comes into focus.
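As a rough illustration of this kind of correlation, here is a minimal sketch in Python. The event fields, thresholds and the idea of joining a "new device" feed with an outbound-transfer feed are assumptions made for the example, not a reference to any particular product's data model:

```python
from datetime import datetime, timedelta

# Hypothetical event records; in practice these would come from separate
# sources, e.g. a NAC/DHCP log and a firewall or netflow feed.
new_device_events = [
    {"device_id": "dev-4821", "seen_at": datetime(2019, 5, 6, 9, 14)},
]
outbound_transfers = [
    {"device_id": "dev-4821", "dest_country": "CN",
     "bytes_out": 4_300_000_000, "seen_at": datetime(2019, 5, 6, 9, 52)},
]

WINDOW = timedelta(hours=2)        # how long a device counts as "new"
VOLUME_THRESHOLD = 1_000_000_000   # ~1 GB of outbound data

def correlate(new_devices, transfers):
    """Flag only the combination: a newly seen device moving a large
    volume of data off the network shortly after it appears."""
    alerts = []
    first_seen = {e["device_id"]: e["seen_at"] for e in new_devices}
    for t in transfers:
        seen = first_seen.get(t["device_id"])
        if seen is None:
            continue  # device is not new; low-value signal on its own
        if t["seen_at"] - seen <= WINDOW and t["bytes_out"] >= VOLUME_THRESHOLD:
            alerts.append({
                "severity": "high",
                "device_id": t["device_id"],
                "reason": "new device sending a large volume of data to "
                          + t["dest_country"],
            })
    return alerts

print(correlate(new_device_events, outbound_transfers))
```

Neither feed produces an alert on its own; only the combination, scoped to a single device and a time window, does. That is the difference between 500 pings a day and one alert worth investigating.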

In a way, context is the holy grail of the next generation of automated security solutions. So what’s holding us back? The biggest challenge is finding ways to identify these behaviors as related, even though they may be spread across different platforms, data sources and devices.

Ideally, we'd also be able to access temporal and/or content-based context about each isolated behavior to help fill in the gaps and identify trends. The path to unparalleled alert fidelity is connecting the dots between indicators of interesting activity across different aspects of an environment: external, intra-network and device. Interoperability between products is essential to reaching this next level of security capability.
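To make "temporal context" concrete, here is a small, hypothetical sketch (the history, field names and numbers are invented for illustration) of enriching a single event with how unusual it is against that device's own recent baseline:

```python
from statistics import mean, pstdev

# Hypothetical history: daily outbound byte counts for one device over
# the previous two weeks, pulled from whatever telemetry store is in use.
history_bytes_per_day = [2.1e8, 1.8e8, 2.4e8, 1.9e8, 2.2e8, 2.0e8, 1.7e8,
                         2.3e8, 2.1e8, 1.9e8, 2.5e8, 2.0e8, 1.8e8, 2.2e8]
todays_bytes = 4.3e9  # today's observed outbound volume

def temporal_context(history, today):
    """Return a simple z-score-style measure of how far today's behavior
    deviates from this device's own baseline."""
    baseline = mean(history)
    spread = pstdev(history) or 1.0  # avoid division by zero on flat history
    deviation = (today - baseline) / spread
    return {"baseline_bytes": baseline,
            "deviation_sigmas": round(deviation, 1)}

context = temporal_context(history_bytes_per_day, todays_bytes)
# An alert enriched with this context ("roughly 20x this device's usual
# daily volume") is far easier to triage than "large transfer detected".
print(context)
```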

While police departments and others in the physical world have figured out how to improve context through layers of human filtering, we need to figure out how to do the same through technology and automation in order to improve security. As our technology and algorithms get better, expect the security teams that nail cross-technology integration, contextualization and interrogation to win the day.
