Money continues to be spent on security solutions and services, but is there a return on investment? Wendy M. Grossman looks at the case for security economics and whether spending to defend really adds up
Estimating the cost of cybercrime is always tricky. In 2014, the Center for Strategic and International Studies put it at $445 billion globally and called it a "growth industry". In June 2016, the Ponemon Institute found that the average cost of a data breach for the 363 companies it surveyed was $4 million, a 29% rise since 2013. Ponemon also estimated the chance of a data breach involving 10,000 or more records at 26%, a likelihood that declines sharply as the size of the breach increases.
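To see how such figures translate into a budget argument, here is a back-of-the-envelope expected-loss calculation using the Ponemon numbers quoted above. The two-year horizon attached to the 26% probability is our assumption for illustration; the report's actual methodology is more involved.

```python
# Rough annualized loss expectancy (ALE) from the Ponemon figures above.
# The two-year horizon is an illustrative assumption, not the report's method.

breach_probability = 0.26        # chance of a breach involving 10,000+ records
average_breach_cost = 4_000_000  # average total cost of a data breach, in USD
horizon_years = 2                # assumed period the probability covers

expected_loss = breach_probability * average_breach_cost
annualized_loss = expected_loss / horizon_years

print(f"Expected loss over the horizon: ${expected_loss:,.0f}")   # $1,040,000
print(f"Annualized loss expectancy:     ${annualized_loss:,.0f}") # $520,000
```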
Ponemon notes that in the years it has conducted this type of research, the costs of data breaches have not changed significantly; the report therefore concludes that they are a permanent cost of doing business.
Estimates of how much we spend on security are more clear-cut. Cybersecurity Ventures expects worldwide spending to reach $1 trillion over the five years from 2017 to 2021, though even that number doesn't include consumer costs such as post-breach recovery and personal identity theft protection services.
It doesn't help the case for security spending to see that many companies – LinkedIn, Sony, Target – that have been the targets of large, highly publicized breaches have survived reasonably well. Eldar Tuvey, whose company, Wandera, uses the cloud to block attacks on mobile devices in real time, has been observing the impact of breaches on companies for more than 15 years.
Despite those survival stories, he says: "The secondary costs of a breach – reputational cost, credibility, brand – are sometimes existential for a corporate." According to Tuvey, a key problem for most businesses is the increasing complexity of networks and supply chains: "I don't think any one player can be an expert in all these areas."
Complexity is also the biggest issue for Ottavio Camponeschi, VP for EMEA at the security vendor FireMon. "The environment right now is so complex it's almost impossible to manage," he says. He believes it's essential to simplify and smooth workflows to make it easier to correlate and analyze different data streams.
"Every time customers add something – an application, a company – they're building workflows that are carrying security holes," he says. "Firewalls are carrying rules and policies that are ten years old. How effective can those be?" Sometimes, he adds, "They're built for a specific application which is no longer used inside the infrastructure – and the people that built it have maybe left the company."
"A lot of the people holding the reins don't always know what the shrewd investments are," says Trustwave's EMEA director, Lawrence Munro. "Technology is only as good as the people who operate it, and tuning is very important." If, he adds, the technology is sending out thousands of alerts, it will get turned off very quickly. This is one area where researchers such as Miranda Mowbray at HP Labs in Bristol hope that machine learning can play a part by vastly cutting down the numbers of false positives.
Munro offers practical advice: fix things as early as possible and embed security as early in development as you can; spend the money you need on the right people for the job; and evangelize security at all levels of the business. "Security is everyone's responsibility," he says.
All of these approaches tackle the practical aspects of how you allocate your available resources so they're not wasted. Business managers and security practitioners wrangle over this every day: what money needs to be spent on which technologies and practices, to defend against what threats?
Columbia University professor Steven Bellovin argues that what's crucial is understanding who might be targeting you and why. The more capable the attacker – a nation-state, say – "the more you're going to spend and the less you're going to get for the money". If you really are such a target, he suggests strategies such as pulling a machine at random and taking it apart down to the bits to see what you find.
The deeper aspects of what Bellovin is saying, however, are more theoretical: applying the discipline of economics to understand how misplaced incentives make security fail in unexpected ways. After all, if spending money on security doesn't make you safe, why do it? How do you make the case if you never really know what your money bought you?
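One common, if imperfect, way practitioners try to make that case is a return-on-security-investment (ROSI) calculation of the kind popularized by ENISA: estimate the annual loss a control should prevent and compare it with what the control costs. The inputs below are invented for illustration; the difficulty in practice, as Bellovin's point implies, is that none of them can be known precisely.

```python
def rosi(annual_loss_expectancy, mitigation_ratio, annual_cost):
    """ROSI = (loss avoided - cost of the control) / cost of the control."""
    loss_avoided = annual_loss_expectancy * mitigation_ratio
    return (loss_avoided - annual_cost) / annual_cost

# Invented inputs: $520k annualized loss, a control expected to stop 60%
# of it, costing $200k a year to buy and run.
print(f"ROSI: {rosi(520_000, 0.60, 200_000):.0%}")  # ROSI: 56%
```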
"You can point to specific attacks and specific defenses and say ‘this defense will stop that attack’, but attackers are adaptive, and if they want to get you [specifically] they will move on to the next attack," Says Bellovin.
However, it's also easy to err in deciding whether or not you're a target. Bellovin's example: a threat intelligence company determined that computers in a small Wisconsin welding shop had been penetrated by Chinese hackers and used as a stepping stone. There are three likely scenarios. First, the attacker chose this specific company because of its relationship with a certain, larger target. Second, the attacker thought the company might have interesting customers, and then chose one that looked worthwhile. Third, the attacker operated randomly, and then explored the possibilities.
Determining which scenario applies to your specific case requires engagement among security, technology, and business people to assess the industry and the competition, as well as the technology landscape.
The theoretical aspect of security economics has been growing quietly in the research community ever since Cambridge University professor Ross Anderson and Google's chief economist, Hal Varian, co-chaired the first Workshop on the Economics of Information Security (WEIS) in 2002.
"As techies, we were trying to figure out why the stuff we were doing wasn't working the way we thought," says Bruce Schneier, a WEIS co-founder. "It turned out there were economic reasons." The reasons why money is misspent varies, but "There are a bunch of examples of security failures which are not technology failures but economic failures." Incentives may be in the wrong place, or network externalities mean that the people who shoulder the costs are not the ones who suffer when security fails.
As an example, Schneier cites the length of time we had to wait for viable solutions to spam email. Although it was a persistently growing problem for both ISPs and individual users, workable solutions that could have been installed by the backbone carriers were never adopted: "They don't have any economic interest in seeing that you don't get a virus." It wasn't until Gmail and Hotmail aggregated large numbers of users that such solutions were deployed at scale and users' inboxes became manageable again.
There are many examples like this. One reason – to answer the question we began with – why it's important for everyone to pay attention to security is that with today's complex, interconnected partners and supply chains, anyone may provide the vulnerability that makes someone else suffer. The 2013 Target hack is a perfect example: the attackers' entry point was credentials stolen from a heating and air conditioning contractor.
The earliest work in this field is usually dated to Angela Sasse's 1999 paper "Users Are Not the Enemy". BT had asked Sasse to study the question of why its staff were so incapable of remembering their passwords. Sasse's resulting study became the first research to consider the role of usability in effective security – people couldn't remember their passwords because there were too many, they were too complicated, and they had to change them too often, all problems that persist today because "best practice" has not changed.
Economics featured here, too: Sasse's commission was a response to pressure from the accounting department to do something about the cost of the help desk, which was tripling every year with no end in sight.
"They said, 'Figure out what's going on – and stop it'," she says.
More recently, Sasse has headed the Research Institute in Science of Cyber Security (RISCS), a collaboration among five universities with the goal of putting a solid scientific evidence base under information security. Her particular project, Productive Security, sought to establish how to devise security practices and policies that make it easier, not harder, for users to do their real jobs.
Most of the papers presented at WEIS every year are too descriptive to provide practical advice to practitioners in the field, but their influence is spreading into projects – such as RISCS – that do produce practical advice and usable tools.
The usefulness of economics in understanding and predicting security behavior stretches beyond simple cost-benefit analysis, though that's important, too. Based on her years of research inside companies, Sasse observes that many security people spend their time in their own silo: "They feel like because security is important they don't have to think about the costs."
In the end, Bellovin says, we keep spending because, "It does provide benefit – not as much as we'd like, but outrunning the other guy is still a good thing."