Intelligent Design: The Evolution of Security Technology

Patricia Titus’ advice is simple: start with protecting the data that is critical, and patch areas of known, unavoidable vulnerability

“We need a new model of cybersecurity”, RSA’s executive chairman, Art Coviello, said in opening last autumn’s RSA Europe conference in London. “We need a collaborative understanding of the threat we are facing and the enemies we are fighting.” His answer: intelligence.

A few minutes later, RSA president Tom Heiser followed up: “We are moving to an era where intelligence-based security is no longer an option – it’s a requirement.”

Statements like these rouse the inner skeptic. Another new model of cybersecurity? What happened to the old one that was going to solve all our problems? Is it no good anymore?

Solution Redux

“The cynic in me”, says Mike Small, an analyst with KuppingerCole and a member of the London Chapter of ISACA’s Security Advisory Group, “says that these are the same people who told me all I needed was anti-virus, then a firewall, then intrusion detection and prevention, then security information and event management, and now it’s intelligence.” And yet, there’s no doubt that attacks are increasing in complexity.

“There’s a sort of arms race going on”, Small acknowledges. “If you look at recent high-profile events like the RSA breach, what we’re seeing is sophistication on the part of the hackers where they are now focusing on a particular organization with particular assets. They are prepared to spend time figuring out how to get through into the organization and cover their tracks afterwards. That makes things very difficult.”

Similarly, Andy Kellett, a principal analyst at the research firm Ovum, cites the Verizon Data Breach Investigations Report released in 2012, which “suggested that we’re actually getting worse at spotting data breaches and that they’re taking longer to identify and resolve”. That being the case, “the fact that we’re not doing very well is driving the industry to look at the use of analytics, intelligent solutions, and making better use of the data. The question is, do we trust them to do that very well?”, Kellett ponders.

Pinning down exactly what is meant by ‘intelligence’ isn’t all that easy either, as Sal Stolfo, a professor of computer science at Columbia University, points out. “It is very hard to know precisely who is doing what and how without a court order under discovery”, he says. Stolfo observes that anomaly detection, plus reputation-based detection and gathering information across many customers to share with them all for mutual benefit are common techniques. Legacy flagship products will persist, he notes, if only because they provide compliance with legal and “best practice” requirements, and also because “they filter the background radiation on the internet”.
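
To make the first of those techniques concrete, here is a minimal sketch of rate-based anomaly detection in Python. The choice of feature (events per hour, per host) and the three-sigma threshold are illustrative assumptions, not any vendor’s actual model:

# Minimal sketch: flag a host whose current event rate deviates
# sharply from its own historical baseline.
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Return True if `current` lies more than `threshold` standard
    deviations from the mean of `history`."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# A host that normally logs ~50 events per hour suddenly logs 400:
baseline = [48, 52, 50, 47, 53, 49, 51, 50]
print(is_anomalous(baseline, 400))  # True: worth a human look
print(is_anomalous(baseline, 55))   # False: normal variation

Production systems use far richer features and models, but the principle is the same: learn a baseline, then flag sharp deviations for inspection.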

Saying Goodbye to an Old Friend

Anti-virus software, the line of defense most people started with, has gotten a bad rap lately. A 2010 Cyveillance study found that vendors detect less than 19% of malware attacks on the first day they appear in the wild; after 30 days, detection rises to just 61.7%, which means more than a third of these threats are still missed.

Even anti-virus vendors themselves admit to limitations: in June, Mikko Hypponen, the chief research officer of F-Secure, described the failure to flag and study the Flame virus as “a spectacular failure for our company, and for the anti-virus industry in general”. When they looked, he said in a widely cited Wired article, his company and others found they had received samples as early as 2010, but their systems never flagged them as something to look at closely. Similarly, he noted, Stuxnet went undetected for more than a year, even though one of the zero-day attacks it employed had been used before (but not noticed at the time).

“The volume of threats and daily changes happening have forced a new model to be put in place”, says Gerhard Eschelbeck, the chief technology officer for Sophos. “The new model means security systems continuously learning in an automated fashion about how the threat is changing.”

A key element for Sophos, he says, is the adoption of a centralized infrastructure that can scale and be accessible to all of the company’s customers. What started as an end point solution that just downloaded updated virus definitions is now “a distributed system where we have software on the end point that looks at the behavior of software as it’s executing, both good and bad, and then a bigger portion of intelligence in the back end”, Eschelbeck relays. Communicating in real time with so many customers, he says, “gives us a tremendous amount of visibility and an aggregate view. We can make very good decisions – and can even predict some of the things that will happen next based on events in the past.”
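
Eschelbeck’s description implies a split along these lines: the endpoint computes a cheap local fingerprint and asks a central service, which aggregates observations across all customers, for a verdict. The sketch below is a loose illustration in Python; the in-memory dictionary stands in for the back-end service, and the verdict categories are assumed for the example:

import hashlib

# Stand-in for the vendor's cloud reputation database (hypothetical);
# in a real deployment this would be a network call to the back end.
KNOWN_VERDICTS: dict[str, str] = {}

def file_sha256(path: str) -> str:
    """Endpoint side: fingerprint a file without uploading it."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def lookup_reputation(digest: str) -> str:
    """Back-end side: return 'clean', 'malicious', or 'unknown'."""
    return KNOWN_VERDICTS.get(digest, "unknown")

digest = hashlib.sha256(b"example file contents").hexdigest()
if lookup_reputation(digest) == "unknown":
    # Unknown files fall back to local behavioral monitoring, and the
    # observation is reported back to enrich the central view.
    print("no verdict yet; watch behavior locally")

The point Eschelbeck emphasizes is the feedback loop: every endpoint both consumes and contributes to the aggregate view.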

Thinking Ahead

Eschelbeck and representatives of other vendors are, of course, not wrong to say that the threat is evolving. In particular, as attackers’ motivations have shifted, “The threat has changed from very loud, visible, and directly recognizable to stealthy and sophisticated”, he asserts.

At the same time, organizational networks have become more heterogeneous, and the perimeter is vanishing as users bring their own devices – all of them storing confidential corporate data and all of them, eventually, likely to be attacked, even the relatively well-managed and confined iPhone. “Every successful platform in history has come under attack, and the iPhone is not going to be different”, he predicts.

Patricia Titus, the chief information security officer for Symantec, says the company is its own first and biggest customer. Titus believes that too much consideration in the past has gone to solving the latest urgent problem without thinking through the consequences for the future. As an example, she cites the US Department of Veterans Affairs: a lost laptop sent the US government through a whirlwind of adopting and deploying hard disk encryption. In the rush, productivity was forgotten: speakers found themselves without the decryption keys needed to unlock the presentations on their USB sticks.

“Today we have to be smarter”, she admits. “We’re all dealing with less resources, less budget. We have to do more with less, and we have to look at the end-to-end lifecycle of what we deploy in a company.” Above all, “We have to understand business better than we have before”.

The point Titus makes about intelligence is that better analytics should identify the elements that really need human attention, using pattern recognition to find the serious problems as they develop so that resources can be focused on them rather than on repetitive functions, such as patching software. Similarly, she wants to see people protect the critical data, not the “costume jewelry”.
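
As a rough illustration of that triage, alerts might be ranked by the criticality of the asset involved multiplied by the alert’s severity, so that the critical data surfaces first. The asset names, tiers, and weights below are invented for the example:

from dataclasses import dataclass

# Hypothetical asset tiers: the crown jewels score high,
# the "costume jewelry" scores low.
ASSET_CRITICALITY = {"customer-db": 10, "payroll": 8, "intranet-wiki": 2}

@dataclass
class Alert:
    asset: str
    severity: int  # 1 (low) to 5 (high)

def triage(alerts: list[Alert], top_n: int = 3) -> list[Alert]:
    """Return the alerts most deserving of human attention."""
    def score(a: Alert) -> int:
        return ASSET_CRITICALITY.get(a.asset, 1) * a.severity
    return sorted(alerts, key=score, reverse=True)[:top_n]

alerts = [Alert("intranet-wiki", 5), Alert("customer-db", 3), Alert("payroll", 2)]
for a in triage(alerts):
    print(a)  # customer-db ranks first, despite its lower raw severity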

Titus concludes, “A company is like Smaug”, referring to the dragon in The Hobbit whose armor seemed complete to the untrained eye but had a bare patch at his left breast where a well-placed arrow could penetrate and kill him. The attention, therefore, should be targeted on those patches of unavoidable vulnerability.

‘Ordinary’ Intelligence

Although the most complex attacks are indeed highly multi-layered affairs – a spear phishing attack here, the results used to open a hole over there sometime later, the hole’s exploitation being stealthily mounted much later still – the Verizon DBIR also plainly states that a high percentage of breaches continue to be the result of ordinary, basic things.

In October 2012, the Greater Manchester Police in the UK were fined £120,000 (~$192,000) after losing a USB stick containing unencrypted data on more than 1000 people; in 2010, the same police division was hit by the Conficker worm when someone plugged in an infected USB stick.

Yes, the Verizon DBIR talks about the rising threat of hacktivists, who were responsible for 58% of all data thefts, but the big takeaway was this: 96% of attacks “were not highly difficult”. The report puts it bluntly: “Most victims fell prey because they were found to possess an (often easily) exploitable weakness rather than because they were pre-identified for attack.” Where intelligence arguably might help is in detecting the attacks, since the report also notes that in 92% of cases, organizations were unaware they’d been attacked until a third party discovered and reported it.

The problem, as always, will be implementation. “Most medium-sized organizations don’t have the capacity to do what is being talked about – they don’t have the skill to poke around inside their systems with some of these tools to decide whether they’re being hacked, so it’s a bit of a difficult message”, says Mike Small at KuppingerCole. “The people who will benefit are large organizations who have the capacity to use the information.”

Yet, a bigger problem may be this: experience has shown that software that tries to be ‘smart’ is also typically less predictable.

“Cleverness is a risk”, comments security consultant Alec Muffett, citing as an example past occasions when adaptive security tried to do ‘intelligent’ things like block off connections from IP addresses that seemed to be the source of attacks. Immediately, he says, such systems were gamed so the site being disconnected was that of a vital business partner.
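
A naive version of that ‘intelligence’ shows why. In the sketch below (the threshold and addresses are illustrative), any source that crosses a failure threshold is blacklisted automatically; an attacker who can spoof or relay through a partner’s address turns the defense into a denial-of-service weapon against that partner:

from collections import Counter

FAILURE_THRESHOLD = 20  # illustrative; real systems tune this
failed_attempts: Counter[str] = Counter()
blocked: set[str] = set()

def record_failure(source_ip: str) -> None:
    """Auto-block any source that fails too often: no human in the loop."""
    failed_attempts[source_ip] += 1
    if failed_attempts[source_ip] >= FAILURE_THRESHOLD:
        blocked.add(source_ip)

# Forge the failures so they appear to come from a business partner:
for _ in range(FAILURE_THRESHOLD):
    record_failure("203.0.113.7")  # partner's address (documentation range)
print("203.0.113.7" in blocked)  # True: the partner is now locked out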

For reasons like this, John Walker, chair of the London Chapter of the ISACA Security Advisory Group and CTO of Secure-Bastion, believes that today’s claims of intelligence are more marketing than reality. “They are trying”, he says of the larger legacy vendors, “but as time has proven, they are not agile, are tied to quarterly reporting, and at times only seem to be in it for the immediate return.”
