When Analyzing Software Risk, Don't Color By One Number

When prioritizing which software security bugs to fix first, many companies rely on a single industry-standard metric. As those bugs continue to pile up, it’s time to take a more nuanced look at them and move beyond the numbers alone.

When someone reports a vulnerability, the standard practice is to request a CVE number. Launched in 1999, the Common Vulnerabilities and Exposures system is a government-funded framework for reporting and cataloguing software vulnerabilities.

CVE vulnerabilities have soared in recent years. Back in 1999, there were just 894 of them. By 2016, that number had risen to 6,447. Then, things went wild. In 2017, the number of CVEs jumped to 14,714, and to 16,556 the following year. Last year it fell back to 12,174, but that still represents a monumental increase over time.

With the number of CVEs ballooning, a smarter approach to triaging them is becoming more important than ever. Companies simply can’t deal with them all straight away.

When patching system vulnerabilities, many security teams turn to a well-understood metric: the Common Vulnerability Scoring System (CVSS) score. It summarizes several factors, including how easily a vulnerability can be exploited and the impact of a successful exploit. However, not everyone is happy with it.

For example, Carnegie Mellon University’s Software Engineering Institute published a white paper in late 2018 arguing that the scoring system measures bug severity rather than real security risk. The scores don’t account for the context in which a vulnerability might be exploited, it warned: the real cybersecurity risk from a vulnerability has a lot to do with whether it’s part of a chained attack with others, for example. The paper’s authors also criticized the system for not examining a vulnerability’s material consequences.

It also warned of inadequacies in the scoring algorithm as the CVSS evolved (version 3.0 was published in 2015). “Severity scores inflate over time, unrelated to community valuation of severity, both between versions 1 and 2 and versions 2 and 3,” it warned. So CVSS 3.0 is likely to rate severity higher than its predecessors.

This change in the vulnerability ratings helped to address some imbalances, Dave Dugal, co-chair of the CVSS-SIG explained. “Cross-site scripting vulnerabilities were woefully underscored in CVSS 2.0,” he said. He also warned that some of this might be down to vendor disclosure bias “where a CVE ID or external advisory is not pursued for vulnerabilities on the lower end of the CVSS spectrum. This could give the impression that most [public] vulnerabilities have relatively high scores along the intended CVSS normal curve,” he added.

CVSS scores are useful when it comes to understanding how nasty an individual bug is, but there’s a danger in relying only on a single number when analyzing risk. You must also understand factors such as the kind of data that the vulnerable system has access to and the impact that a successful attack on that system might produce.

This requires a more mature approach to vulnerability management. It takes a bigger investment in time and tools to help you understand not only the criticality of the systems in your infrastructure and the data that flows through them, but also the real-world implications of all those high-scoring software and firmware bugs.
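As an illustration of that more mature approach, a team might weight each finding’s CVSS score by business context before sorting the backlog. The weighting scheme, field names, and numbers below are illustrative assumptions, not an industry standard:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    vuln_id: str
    cvss: float                # CVSS base score, 0.0-10.0
    asset_criticality: float   # 0.0-1.0: how vital the affected system is
    internet_exposed: bool
    handles_sensitive_data: bool

def risk_score(f: Finding) -> float:
    """Blend severity with business context instead of sorting by CVSS alone."""
    score = f.cvss * f.asset_criticality
    if f.internet_exposed:
        score *= 1.5   # reachable attack surface (assumed weight)
    if f.handles_sensitive_data:
        score *= 1.3   # worse material consequences (assumed weight)
    return score

findings = [
    Finding("VULN-1", 9.8, 0.2, False, False),  # critical bug, isolated test box
    Finding("VULN-2", 6.5, 0.9, True, True),    # medium bug, exposed crown jewel
]
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.vuln_id, round(risk_score(f), 2))
```

With these weights, the medium-severity bug on an exposed, data-handling system outranks the critical bug on an isolated test box, which is exactly the kind of reordering a raw CVSS sort would never produce.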
