How Machine Learning Can Improve Risk Management

Written by Ed Bellis

If an organization has any IT infrastructure at all, it faces far more vulnerabilities than its security team can address. That’s a basic fact of enterprise cybersecurity.

But there is reason for hope: only a tiny fraction of vulnerabilities pose a genuine risk to any given organization, and even among those, some are far riskier than others. In an environment where professionals cannot remediate everything, it is critical that cybersecurity executives identify the riskiest vulnerabilities first.

Machine learning is ideally suited to this challenge. Vulnerability scanners, asset management systems, SIEMs, intrusion detection systems, and other tools generate enormous volumes of data. Machine learning can analyze that data to model how these signals interact, and ground-truth data (records of which vulnerabilities were actually exploited) can reveal which features exert the greatest influence on the model. Operating at a scale no manual process can match, such models can prioritize the riskiest vulnerabilities, and even predict which newly discovered vulnerabilities are likely to be weaponized or exploited.
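
To make this concrete, here is a minimal sketch of the kind of model involved, using Python and scikit-learn on synthetic data. The feature names and the logistic rule that generates the "exploited" labels are illustrative assumptions for the example, not any vendor's actual pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical per-vulnerability signals gathered from scanners,
    # asset inventories, SIEMs, and intrusion detection systems.
    feature_names = ["cvss_score", "exploit_published", "internet_facing", "ids_alerts"]
    X = np.column_stack([
        rng.uniform(0, 10, n),    # CVSS base score
        rng.integers(0, 2, n),    # public exploit code exists?
        rng.integers(0, 2, n),    # asset reachable from the internet?
        rng.poisson(3, n),        # recent IDS alerts touching the asset
    ])

    # Synthetic ground truth: was the vulnerability exploited in the wild?
    logits = 0.4 * X[:, 0] + 2.0 * X[:, 1] + 1.5 * X[:, 2] - 4.0
    y = rng.random(n) < 1 / (1 + np.exp(-logits))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    print("holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
    for name, weight in zip(feature_names, model.feature_importances_):
        print(f"{name}: {weight:.2f}")  # which signals drive the model

A feature-importance readout like the one at the end is how ground-truth data reveals which signals actually carry predictive weight.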

Further, a multi-stage approach to training and fine-tuning models means obviously non-predictive variables can be eliminated quickly and easily. Companies are no longer evaluating vulnerabilities in a vacuum: security teams can weigh each vulnerability in the context of the asset it affects and the environment it sits in. That capability has spurred remarkable changes in how organizations address vulnerabilities, how they allocate resources for vulnerability management, and how they report risk to the business.
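
As one illustration of that multi-stage idea, the sketch below fits a quick first-pass model, discards the features it finds uninformative, and fine-tunes a larger model on what survives. It uses scikit-learn on synthetic data; the two-stage split and the median threshold are assumptions chosen for the example, not a prescribed workflow.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectFromModel

    # Synthetic data: 20 candidate signals, only 5 of which are informative.
    X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                               n_redundant=0, random_state=0)

    # Stage 1: fit a quick model and drop features below median importance.
    stage1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    selector = SelectFromModel(stage1, threshold="median", prefit=True)
    X_kept = selector.transform(X)

    # Stage 2: fine-tune a larger model on the surviving features only.
    stage2 = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_kept, y)
    print("kept", X_kept.shape[1], "of", X.shape[1], "candidate features")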

The situation without machine learning

Traditionally, enterprise vulnerability management programs sit somewhere on a spectrum from intuition to alchemy. At best, they set out to patch every vulnerability above a certain severity threshold; some organizations, for example, try to patch everything with a CVSS score of seven or above. At worst, IT professionals compile huge spreadsheets of the vulnerabilities their scanners have uncovered, then squabble over which ones to patch based on personal opinion and industry folklore. The CEO saw a vulnerability with a logo on the news, so that gets patched. The CFO is drawing up budgets, so that department’s security gaps are addressed.
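
Part of the appeal of a threshold policy is how easy it is to express in code, and the sketch below also shows why it is blunt: every finding above the line is treated as equally urgent. The identifiers and scores here are invented placeholders.

    # Invented example findings; identifiers and scores are placeholders.
    findings = [
        {"id": "vuln-a", "cvss": 9.8},
        {"id": "vuln-b", "cvss": 5.3},
        {"id": "vuln-c", "cvss": 7.2},
        {"id": "vuln-d", "cvss": 7.5},
    ]

    THRESHOLD = 7.0
    to_patch = [f for f in findings if f["cvss"] >= THRESHOLD]

    # Everything above the line is treated identically, regardless of
    # whether an exploit exists or what asset the vulnerability sits on.
    print([f["id"] for f in to_patch])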

Whatever the method for assessing risk, remediation capacity falls far short of the need: a typical organization, no matter how big or small, can patch only about one in ten of its vulnerabilities. That limited capacity, however, does not mean companies cannot make meaningful strides in reducing risk, provided they take a data-driven approach to remediation.
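
In concrete terms, a data-driven approach spends that one-in-ten patching capacity on the highest-risk decile rather than on whatever surfaces first. A minimal sketch, assuming each finding already carries a model-assigned risk score:

    import heapq
    import random

    random.seed(0)
    # Hypothetical findings, each with a model-assigned risk score in [0, 1].
    findings = [{"id": i, "risk": random.random()} for i in range(1000)]

    # Capacity for roughly one in ten: remediate the top decile by risk.
    capacity = len(findings) // 10
    worklist = heapq.nlargest(capacity, findings, key=lambda f: f["risk"])

    print(f"remediating {len(worklist)} of {len(findings)} findings")
    print(f"highest risk addressed: {worklist[0]['risk']:.3f}")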

In short, they need to examine the paths that hackers have taken previously.

Exploiting past patterns to improve security

Hackers follow well-worn paths, even if those paths are incredibly complicated. They tend to use existing exploits to probe organizations’ networks, and they tend to look for vulnerabilities in systems that are rarely patched.

All of this generates a lot of data. We know, for example, that of the ten largest software vendors, three are responsible for 70 percent of vulnerabilities. Yet only five percent of known vulnerabilities have published exploit code associated with them.

The data consumed by cybersecurity teams provides a roadmap for which vulnerabilities to prioritize for remediation. The same data can give organizations a measure of overall security risk: if you know which vulnerabilities are most likely to be exploited, and which would have the biggest impact if they were, you have a good idea of the risk posed by every finding your vulnerability scanners produce. From there, an organization can measure the impact of its vulnerability management program in a meaningful way, and can decide whether resources are allocated properly.
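
One simple way to combine those two quantities into a single measure, sketched below with invented numbers, is to score each finding as exploit likelihood times business impact and track the aggregate over time; a real program would calibrate both inputs rather than assign them by hand.

    # Invented findings: model-predicted exploit likelihood and an
    # analyst-assigned business impact on a 1-10 scale.
    findings = [
        {"id": "web-server", "likelihood": 0.90, "impact": 9},
        {"id": "db-cluster", "likelihood": 0.05, "impact": 8},
        {"id": "test-box",   "likelihood": 0.40, "impact": 2},
    ]
    for f in findings:
        f["risk"] = f["likelihood"] * f["impact"]

    total_before = sum(f["risk"] for f in findings)

    # Patching the single riskiest finding yields the largest reduction,
    # which is also how a program can report its progress over time.
    remaining = sorted(findings, key=lambda f: f["risk"])[:-1]
    total_after = sum(f["risk"] for f in remaining)
    print(f"aggregate risk: {total_before:.2f} -> {total_after:.2f}")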

Security teams face an impossible task when asked to manually interpret and prioritize every vulnerability across their infrastructure and applications. The sheer number of these security holes is simply too large. Computing power, automation, and machine learning are the best means of keeping up with the challenge.

Ed Bellis is a founder and the chief technology officer of Kenna Security, which offers a data-driven, risk-based approach to vulnerability management. An experienced cybersecurity professional, he previously served as CISO at Orbitz and as vice-president of information security at Bank of America.
