Fixing Risk Management

I am not satisfied with the way we (as an industry) do risk management. In my early days, before I entered the security space, I did project management and, as part of it, risk management. The way we did it was fairly simple (as probably most of you do it): we rated the impact as high/medium/low and assigned a probability. We were fairly sophisticated – the probability was a percentage. I often said that we did not know whether a 50% probability really was 50%, but we were fairly confident that 50% was more than 40%. It seemed to work reasonably well, yet it was never really satisfactory – I just did not have anything better.

Then I started working in security and came across models called "Return On Security Investment" – ROSI. They ranged from fairly simple ones (cost = $ impact * probability) to very sophisticated and complex models. I never liked them and was fairly vocal about it. The reason was simple: garbage in, garbage out – or, to use a different equation I read recently:

garbage × garbage = garbage²

As we do not know the impact (what was the impact of Blaster on Microsoft and our reputation, in $?) and we do not really know the probability either, the formula above merely looks precise – you can calculate the garbage to two decimal places.
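
To make the criticism concrete, here is a minimal sketch of that naive point-estimate calculation; the numbers are made up for illustration, not taken from any real case. It happily prints a figure to the cent, no matter how shaky the inputs are.

```python
# Naive ROSI-style point estimate: one guessed impact times one guessed probability.
impact_usd = 250_000  # guessed financial impact of one outbreak (illustrative)
probability = 0.37    # guessed annual probability of that outbreak (illustrative)

expected_annual_loss = impact_usd * probability
print(f"Expected annual loss: ${expected_annual_loss:,.2f}")
# -> Expected annual loss: $92,500.00  (precise to the cent, built on guesses)
```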

When I tell customers that they should do more risk management, they sometimes ask me a simple question: how? And I fall short of a really good answer.

However, I am now closer to an approach. I recently read a book called The Failure of Risk Management: Why It's Broken and How to Fix It, which changed the way I look at things. Actually, it had already changed earlier, when I read How to Measure Anything: Finding the Value of Intangibles in Business by the same author, Douglas W. Hubbard. The basic idea is to look at what you measure (e.g. the risks) from a statistician's perspective. Being an engineer, I hated statistics at university, but I should have paid much more attention to it. In my opinion, he makes a few fundamental claims:

  • We do not work with precise figures (e.g. 40%) but with ranges for which we estimate a 90% probability that the real figure lies within them (a 90% confidence interval). If I asked you how likely a virus outbreak in your network is, you would not be able to tell me 30%, but you might be able to tell me that the probability is between 20% and 40%. The same holds for the impact: you might be 90% confident that the financial impact of an outbreak lies between $x and $y. If you are an expert and have done some calibration training, this is feasible (a small sketch of how such a range can be encoded follows after this list).
  • As soon as we look for data to support our estimate, the goal is not to find an exact number but to reduce the size of the interval (the uncertainty).
  • You should focus on the most important ranges, not on what is easiest to manage. He shows a way to actually measure the value of information.
  • Focus on the values in your model with the highest uncertainty – where you have the least data.
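
As a rough illustration of the first point, the sketch below encodes a 90% confidence interval for the impact as a lognormal distribution – a common choice for loss amounts, but an assumption on my part, as are the bounds used.

```python
import math

# "I am 90% confident the financial impact of an outbreak is between $50k and $500k."
low, high = 50_000, 500_000  # hypothetical bounds

# Fit a lognormal distribution so that 5% of the mass lies below `low` and
# 5% above `high`; 1.645 is the z-value for a 90% confidence interval.
mu = (math.log(low) + math.log(high)) / 2
sigma = (math.log(high) - math.log(low)) / (2 * 1.645)

print(f"lognormal parameters: mu={mu:.3f}, sigma={sigma:.3f}")
```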

Once you have built a model and defined these ranges, what do you do then? Well, there is a technique in statistics called Monte Carlo simulation. Based on the ranges, this method lets us calculate a distribution of the outcome. It is even possible to model complex systems, including systems where different events are correlated.
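
Here is a minimal Monte Carlo sketch, assuming the hypothetical ranges from above (a probability range of 20–40% and an impact range of $50k–$500k) and at most one event per year; this is not a definitive model, just an illustration of how ranges turn into an outcome distribution.

```python
import math
import random

random.seed(42)

TRIALS = 100_000
Z90 = 1.645  # number of standard deviations spanning half of a 90% interval

# Hypothetical 90% confidence intervals (illustrative numbers only).
prob_low, prob_high = 0.20, 0.40           # probability of an outbreak per year
impact_low, impact_high = 50_000, 500_000  # financial impact in $ if it happens

# Translate the impact range into lognormal parameters (same idea as above).
mu = (math.log(impact_low) + math.log(impact_high)) / 2
sigma = (math.log(impact_high) - math.log(impact_low)) / (2 * Z90)

losses = []
for _ in range(TRIALS):
    # Draw this year's outbreak probability from a normal distribution
    # fitted to its 90% interval, clamped to [0, 1] to keep it valid.
    p = random.gauss((prob_low + prob_high) / 2, (prob_high - prob_low) / (2 * Z90))
    p = min(max(p, 0.0), 1.0)
    # Simulate whether the outbreak happens and, if so, how much it costs.
    loss = random.lognormvariate(mu, sigma) if random.random() < p else 0.0
    losses.append(loss)

losses.sort()
mean_loss = sum(losses) / TRIALS
p95_loss = losses[int(0.95 * TRIALS)]
print(f"mean annual loss:     ${mean_loss:,.0f}")
print(f"95th percentile loss: ${p95_loss:,.0f}")
```

The output is a distribution of annual losses rather than a single number: you can read off the mean, the 95th percentile or any other quantile, and see how wide the uncertainty really is.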

Using mathematical methods – as we do to model other systems – might (or, I would even say, will) be the right path forward. We have to move from art to science.

Roger
