Schrödinger’s Vulnerability - Using Exploitability to Avoid Chasing Phantom Risk

I recently laid eyes on a pen test report that carried the gravest of warnings: ‘The host may be vulnerable to remote code execution’. Dear Lord, did they get system access on a host? Nope. Was there a public exploit available for that version of software that enabled remote code execution? No again. So why would someone make such a vague, alarmist recommendation?

When I queried this, their logic was that even though there was no public exploit available for that version of software, someone somewhere might have developed one and was keeping it secret. And since it’s a secret exploit that no-one knows about, it could just as well be remote code execution, because that’s the most common exploit, right?

This is a tongue-in-cheek analysis of something that has reached critical mass in the pen testing industry and is now dubbed ‘pen tester syndrome’: the act of making things appear worse than they are. Reports now arrive stuffed with junk risk and far-fetched, contrived scenarios that come without any kind of proof of concept, will never occur and have never befallen any company at all.

Among other things, this has led to the rise of crowdsourced security, with many of the world’s biggest brands ditching pen testing entirely, since crowdsourcing only delivers actionable vulnerabilities backed by a proof of concept thanks to the nature of its reward models (researchers are only paid if they can exploit a working vulnerability and deliver a proof of concept).

Back to the original issue: is out-of-date software automatically vulnerable? Hardly. Many software version upgrades stem from functionality changes, not security updates. Even the releases made for security reasons patch specific flaws in the code, often ones with a readily available public exploit.

When you trawl through an exploit database, the exploits often refer to very specific vectors that can only be delivered if the asset in question is configured in a particular way. Many of them already require some form of privileged access and, as I alluded to earlier, remote code execution is exceedingly rare.
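To make that concrete, here is a minimal sketch of what triaging by exploitability rather than by version number might look like. Everything in it is invented for illustration: the CVE identifiers, component names, asset profile and the simple "network-reachable, no privileges, public exploit, component actually exposed" filter are assumptions, not a prescribed method or any particular vendor’s API.

```python
# Illustrative triage of findings by exploitability, not by software version alone.
# The records and the asset profile below are invented for the example.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_vector: str          # e.g. "AV:N/AC:L/PR:N/UI:N" (prefix omitted for brevity)
    public_exploit: bool      # is a working public exploit known?
    affected_component: str   # which feature/module the flaw lives in

def parse_vector(vector: str) -> dict:
    """Turn a CVSS v3 vector string into a {metric: value} dict."""
    return dict(part.split(":") for part in vector.split("/"))

def is_actionable(finding: Finding, exposed_components: set[str]) -> bool:
    """A finding is worth chasing only if it is reachable as deployed:
    network attack vector, no privileges required, a public exploit exists,
    and the vulnerable component is actually enabled on the asset."""
    metrics = parse_vector(finding.cvss_vector)
    return (
        metrics.get("AV") == "N"
        and metrics.get("PR") == "N"
        and finding.public_exploit
        and finding.affected_component in exposed_components
    )

# Hypothetical scan output for one host.
findings = [
    Finding("CVE-0000-0001", "AV:N/AC:L/PR:N/UI:N", True,  "http_frontend"),
    Finding("CVE-0000-0002", "AV:L/AC:L/PR:H/UI:N", True,  "admin_console"),  # needs local, privileged access
    Finding("CVE-0000-0003", "AV:N/AC:L/PR:N/UI:N", False, "legacy_module"),  # no public exploit, module disabled
]

exposed = {"http_frontend"}  # what this asset actually serves

for f in findings:
    status = "actionable" if is_actionable(f, exposed) else "junk risk for this asset"
    print(f"{f.cve_id}: {status}")
```

Run against this made-up data, only the first finding survives the filter; the other two are exactly the kind of ‘may be vulnerable’ noise the rest of this article is about.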

This brings us to Schrödinger’s vulnerability, a play on the oft-used trope of Schrödinger’s cat, which, to paraphrase, holds that until you look in the box, the cat is both alive and dead. A more contemporary reference would be the response former Secretary of Defense Donald Rumsfeld once blurted out about ‘known knowns’, ‘known unknowns’ and ‘unknown unknowns’, with the latter being the riskiest.

Let’s map this to an information asset today and call back to the alarmist warning I started this article with. An asset that is out of date, but might have a vulnerability even though none are publicly known, sits in the territory of ‘unknown unknowns’. We know there are no publicly available vulnerabilities, but there may be one out there that we simply don’t know about.

So how probable is this? Fortunately, there’s no need to speculate, since there’s plenty of research to draw conclusions from. ‘Zero Days, Thousands of Nights: The Life and Times of Zero-Day Vulnerabilities and Their Exploits’ is a piece of research by Lillian Ablon and Andy Bogart that focuses on this very issue. They found that when a zero-day is hoarded by an entity and kept from public view, it stays that way for an average of around seven years.

What this means for us is that regardless of what version of software you are on, a zero-day may exist (however improbable), no-one will know about it, and it will stay that way for an average of seven years. Worse still, if you update your software to the latest version, that version may also contain the zero-day even though you are ‘fully patched’, simply because the code in the new release has not been refactored to account for a flaw that is, by definition, still unknown.

The research does draw a distinction for end-of-life software, which will never be patched again: if a new zero-day is discovered there, it effectively becomes ‘immortal’, because the vendor will never release a patch to cover it.

Using exploitability for defense
Combining a few approaches can stave off junk risk and keep you from chasing contrived scenarios that will never materialize:

  • Switch from pen testing to crowdsourced security for external assets: Pen testing methodology is starting to be considered a legacy approach to offensive security testing. It does not emulate a hacker in any way; it only gives you a frozen snapshot of security posture at a specific point in time, nothing more. Crucially, crowdsourced security also gives you actionable threats with proof of concept, and its methodology maps more realistically to how attackers behave (for example, no time limit on testing), while pen testing focuses on theoretical threats.
  • Having out-of-date software doesn’t mean you’re automatically vulnerable! While this may shock some individuals, if the specific threat vector that your version of software is vulnerable to isn’t exposed in its current configuration, then you are safe.
  • Practically all attacks focus on known vulnerabilities, so updating your software to the latest version to protect against ‘zero-day’ attacks is irrelevant. The new version is just as likely to be vulnerable, since no code has been refactored to account for the zero-day, hence its unknown status. Updating software is for known threats, not unknown ones.
  • Even if you are exposed to a vulnerability, what are the steps needed for it to materialize? The likelihood of many vulnerabilities drops to almost zero once you factor in the first two variables required: someone has to want to hurt you, and someone has to have the skill to exploit that vulnerability (see the sketch after this list). The former is far more common than the latter, as an offensive security skillset remains rare even among professionals who work in information security.
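As a back-of-the-envelope illustration of that chain of preconditions, the sketch below multiplies rough probabilities for intent, capability, exposure and exploit availability. Every number is an invented placeholder, not a measurement; the point is only to show how quickly a ‘may be vulnerable to remote code execution’ scenario collapses towards zero once each precondition is priced in.

```python
# Rough, illustrative likelihood chain for a single finding.
# All probabilities below are invented placeholders, not measured values.

def realistic_likelihood(p_targeted: float,
                         p_capable_attacker: float,
                         p_vector_exposed: float,
                         p_working_exploit: float) -> float:
    """Every precondition must hold for the scenario to materialize,
    so the combined likelihood is (roughly) the product of the parts."""
    return p_targeted * p_capable_attacker * p_vector_exposed * p_working_exploit

# 'The host may be vulnerable to remote code execution' with no public exploit
# and the vulnerable vector not exposed in the current configuration:
phantom = realistic_likelihood(
    p_targeted=0.3,           # someone actually wants to hurt you
    p_capable_attacker=0.05,  # and has the offensive skillset to do it
    p_vector_exposed=0.1,     # and the vulnerable vector is reachable as deployed
    p_working_exploit=0.01,   # and a working exploit exists outside public view
)

# A known vulnerability with a public proof of concept on an exposed service:
proven = realistic_likelihood(0.3, 0.5, 1.0, 1.0)

print(f"phantom RCE scenario : {phantom:.6f}")  # vanishingly small
print(f"proven, exposed flaw : {proven:.6f}")   # where the remediation effort belongs
```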
