Is $50,000 for a Vulnerability Too Much?

Zoom has recently increased its maximum payout for vulnerabilities to $50,000 USD as part of its crowdsourced security program. Such lofty figures make great headlines, attract new talent in search of the big bucks and raise the question – how much is a vulnerability worth?

I previously found several bugs in Zoom products, although these now date back several years to when their crowdsourced security program was a fledgling enterprise. Three of them had already been found by others before me – what we call a ‘duplicate’ in crowdsourced security – meaning you get no reward for your time or effort even though it’s a valid bug. The fourth vulnerability was actually quite interesting, since it re-appeared at the start of the pandemic when Zoom was under increased usage (and increased scrutiny). I labelled the vulnerability “Potentially unsafe URI's can cause Local file inclusion, command injection or remote connection”, which is exactly what it did. To summarise, you could send URLs that would appear as links to someone else you were chatting with, and these could do various things: open malicious websites, download files or even run commands on their system (bizarrely, it even worked with the gopher:// protocol). The vulnerability that re-appeared at the beginning of 2020 was identical in mechanism but focused on UNC paths, so that you could send NTLM credentials to an attacker’s domain.
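The standard defence against this class of bug is to allowlist link schemes rather than try to enumerate every dangerous one. As a minimal sketch (this is illustrative only – the schemes Zoom actually permits, and its real validation logic, are not public):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of schemes a chat client might render as clickable.
SAFE_SCHEMES = {"http", "https"}

def is_safe_link(url: str) -> bool:
    """Return True only if the URL uses an allowlisted scheme.

    An allowlist blocks file://, gopher://, smb:// and similar schemes
    by default instead of chasing each unsafe one individually.
    """
    # UNC paths (\\attacker\share) carry no scheme but can still trigger
    # NTLM authentication on Windows, so reject them explicitly.
    if url.startswith("\\\\") or url.startswith("//"):
        return False
    scheme = urlparse(url).scheme.lower()
    return scheme in SAFE_SCHEMES
```

With a check like this, a `gopher://` or `\\attacker\share` link simply never becomes clickable, which addresses the root cause rather than any single payload.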

For this vulnerability I was paid the princely sum of $50 USD – and not straight away, mind you – it took about six months for the report to work its way up to the powers that be. Two years later I received a message saying it had been fixed, asking whether I could spend my free time checking their fix (I didn’t). These kinds of figures are common in crowdsourced security, where lavish payouts distract from the real problem – should we really be paying out $50,000 USD for a vulnerability?

These kinds of sums are not new, of course. At the height of the pandemic, Zoom zero-days for the Windows app were reportedly being flogged for $500,000 USD, and companies like Zerodium frequently traffic in these kinds of vulnerabilities in the ‘grey market’ of vulnerability transactions.

Back to the question at hand: there are many downsides to ever-increasing payouts in crowdsourced security programs. While the main aim is to increase interest in the program (remember, crowdsourced programs rely on an Orwellian gig economy in which you work for free unless you find a valid bug), they also have the counter-productive effect of cannibalising talent away from legitimate security roles, salaried or otherwise. The cost-effectiveness of paying to fix vulnerabilities once they are live is also questionable: $50,000 USD could easily be spent fixing the root causes of vulnerabilities and ‘shifting left’, funding far more than a single fix. Some obvious examples of what that money could be used for:

  • A full time application security engineer
  • Anywhere between 10–20 pen tests or code reviews (depending on day rate)
  • A full suite of automated pen testing software
  • Full deployment and implementation of SAST software (code scanning/dependencies) across upwards of 10 million lines of code.
  • Secure coding training for hundreds of developers

Any of the above would spot the issues raised in crowdsourced programs long before they ever made it to a live environment, and at a far lower cost. While some argue that the bug reward offsets the monetary impact an eventual exploit would cause, the counter is that a shift-left approach multiplies that offset by a factor of ten. If your SAST tooling, your application security engineers or even your code reviews spot ten of these vulnerabilities before they go live, you have also avoided the additional cost of refactoring code and pushing out a build to fix each single vulnerability.

This is why crowdsourced security scales to chase the symptom, not the cause, and why offering ever-increasing rewards won’t achieve the structural goal these programs are ostensibly pursuing – good application security hygiene. While the same could be said for the alphabet soup of acronyms currently present in the cybersecurity technology space (think IAM, WAF, DAST, SIEM, etc.), many of these technologies are simply band-aids over problems that a comprehensive application security pipeline would resolve.

Paying five-figure rewards for single vulnerabilities won’t suddenly mean you have better security. When deciding how much to pay for a vulnerability, if the question becomes ‘is this too much for a vulnerability?’ then you should be asking yourself ‘am I shifting left enough?’
