Comment: Rewards for Hacking – Good, Bad or Ugly?

Haywood takes issue with companies that offer a reward, or ‘bounty’ program, to those who discover application security flaws

When you read about an organization offering a reward, or ‘bounty’ program, to hackers able to ‘break’ an application, what are your immediate thoughts? For some of you it might be ‘I’ll have a piece of that pie’. Others might think that it’s one way of making sure the application is secure. But how many of you think ‘that’s off the shortlist then’?

I strongly believe these sorts of schemes are nothing short of a publicity stunt and, in fact, can be potentially dangerous to end users’ security. Why do I say that? Well, let’s start at the beginning…

A number of organizations continue to offer bounty schemes for discovering and reporting bugs in applications. Mozilla currently pays out up to $3000 for critical bug identification, while Google rewards $1337 for flaws identified in its software.

A rather interesting alternative is Deutsche Post, which launched its own Security Cup. It sifted through applications from ‘ethical’ hackers to select teams to compete – worryingly, its website states that when bugs are found they must be reported, not if. The rules further clarify that the teams are not allowed to touch public data and that they must delay publicly disclosing any bugs they identify until after the contest has ended – I’m sure that’s very comforting to its current users!

Here Are the Results, In No Particular Order

While on the surface it may seem that these companies are being open and honest, what if a serious security flaw were identified? Can we trust them to raise the alarm and warn people? I personally think they’ll fix it quietly, release a patch whose urgency nobody realizes, and hope nobody hears about it.

Do we think the hacker would claim the reward, promise never to say a word, but then sell the details on the black market? My money says yes. Where does that leave the users? Vulnerable while the patch is being developed and, if they fail to install the update, with a great big security void in their defenses just waiting to be exploited.

Another concern is that, by inviting hackers to trawl all over a new application prior to its launch, these schemes just give them additional time to interrogate it and identify weaknesses. And what if a flaw would net a higher profit than the bounty? Then there’s nothing to stop hackers from keeping it to themselves. Once the first big customer win is announced, revealing where and when the product is to go live, the hacker bides their time and then slips in and out with the prized information.

To be honest, it doesn’t even need to be a flaw in the software to cause a problem. If a denial of service (DoS) attack is launched against the application, causing it to fail and reboot, it can be just as costly to an organization as if the application were breached and data stolen.

A final word of warning is that, even if the application is safe today, that doesn’t mean it won’t be breached tomorrow. Windows Vista is testament to that – Microsoft originally hailed it as the most secure operating system it had ever made, and we all know how that turned out.

Let’s Get Proactive

IT is never infallible, and while penetration testing is often heralded as the hero of the hour, traditional penetration testing techniques are often limited in their effectiveness. There are good reasons why I make this claim.

A traditional test is conducted from outside the network perimeter, with the tester looking for applications to attack. These assaults, however, typically come from a single, unchanging IP address. Within the first two or three attempts, intelligent security software will recognize the source address; all subsequent traffic from it is treated as malicious and blocked, making the network appear protected. A hacker doesn’t play by the rules, and will utilize hundreds, if not millions, of addresses for this very reason.
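To illustrate the point, here is a minimal sketch in Python of a naive per-source blocker of the kind described above; the threshold and logic are my own assumptions for illustration, not any vendor’s implementation.

```python
# Hypothetical sketch: a naive per-source-IP blocker that makes a
# single-address penetration test look deceptively effective.
from collections import defaultdict

SUSPICIOUS_THRESHOLD = 3          # assumed: probes tolerated before blocking

failed_probes = defaultdict(int)  # count of suspicious requests per source IP
blocklist = set()                 # addresses whose traffic is now dropped

def handle_request(src_ip: str, looks_suspicious: bool) -> str:
    """Return 'blocked' or 'allowed' for a request from src_ip."""
    if src_ip in blocklist:
        return "blocked"          # all subsequent traffic is dropped unseen
    if looks_suspicious:
        failed_probes[src_ip] += 1
        if failed_probes[src_ip] >= SUSPICIOUS_THRESHOLD:
            blocklist.add(src_ip) # two or three attempts and the source is out
    return "allowed"

# A tester probing from one fixed address trips the threshold almost at once,
# while an attacker rotating source addresses starts fresh with every request.
for attempt in range(4):
    print("fixed tester:", handle_request("198.51.100.7", True))
print("rotating attacker:", handle_request("203.0.113.9", True))
```

The asymmetry is the point: the blocker ‘passes’ a single-IP test while learning nothing about how it would fare against a distributed attacker.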

So, Let’s Do It Intelligently

If you were looking for the hallelujah moment, then I’m sorry. There isn’t one single piece of advice that is the answer to all your prayers. Instead there are two, and both need to be conducted simultaneously if your networks are to perform in perfect harmony: intrusion detection plus application testing.

Intrusion detection, capable of spotting zero-day exploits, must be deployed to audit and test the recognition and response capabilities of corporate security defenses. It will substantiate not only that the network security is deployed and configured correctly, but that it’s capable of protecting the application you’re about to make live, or have already launched, irrespective of the service it supports – be it email, a web service, anything.

The device looks for characteristics in behavior to determine if an incoming request to the product or service is likely to be good and valid, or if it’s indicative of malicious behavior. This provides not only reassurance, but all-important proof that the network security is capable of identifying and mitigating the latest threats and security evasion techniques.
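As a hedged illustration of that characteristic-based approach, a request scorer might look like the sketch below. Every signal, weight, and threshold here is invented for the example, not taken from any real detection engine.

```python
# Illustrative request scoring: the signals and weights are assumptions made
# for the example; real engines combine far richer behavioral evidence.
SCORE_THRESHOLD = 3  # assumed cut-off above which a request is flagged

def score_request(req: dict) -> int:
    """Return a suspicion score for an incoming request (higher is worse)."""
    score = 0
    payload = req.get("payload", "")
    if len(payload) > 4096:                       # unusually large payload
        score += 1
    if any(m in payload for m in ("../", "<script", "%00")):
        score += 2                                # classic injection/evasion markers
    if req.get("rate_per_minute", 0) > 100:       # bursty, machine-speed traffic
        score += 1
    if not req.get("user_agent"):                 # missing or empty user agent
        score += 1
    return score

request = {"payload": "GET /login?user=../../etc/passwd",
           "rate_per_minute": 240, "user_agent": ""}
label = "malicious" if score_request(request) >= SCORE_THRESHOLD else "valid"
print(label)  # malicious: traversal marker + burst rate + no user agent
```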

But you also need application testing because, if a public-facing application were compromised, the financial impact on the organization could be fatal. There are technologies available that can test devices or applications with a barrage of millions upon millions of iterations, using broken or mutated protocols and other techniques, in an effort to crash the system. If a hacker were to do this and caused the application to fall over or reboot, the resulting denial of service could be, at best, embarrassing and, at worst, highly detrimental to the organization.
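A minimal sketch of that barrage-style testing, assuming a hypothetical test service you own running on 127.0.0.1:8080 (the target, seed message, and iteration count are all invented for illustration), could look like this:

```python
# Mutation-fuzzing sketch against an assumed local test service. Only ever
# point tools like this at systems you own and are authorized to test.
import random
import socket

TARGET = ("127.0.0.1", 8080)                       # assumed test target
SEED = b"GET / HTTP/1.1\r\nHost: example\r\n\r\n"  # well-formed starting message

def mutate(data: bytes) -> bytes:
    """Corrupt random bytes, and sometimes truncate, to break the protocol."""
    out = bytearray(data)
    for _ in range(random.randint(1, 8)):
        out[random.randrange(len(out))] = random.randrange(256)
    if random.random() < 0.3:
        out = out[: random.randrange(1, len(out))]  # cut the message short
    return bytes(out)

for i in range(10_000):                             # real tools run millions
    try:
        with socket.create_connection(TARGET, timeout=2) as conn:
            conn.sendall(mutate(SEED))
            conn.recv(1024)
    except OSError:
        # A refusal or timeout after earlier successes suggests the service
        # has fallen over or is rebooting: exactly the DoS described above.
        print(f"iteration {i}: target unresponsive, possible crash")
        break
```

Dedicated test platforms run vastly smarter mutations at far greater scale, but even this crude loop shows how little sophistication a crash-inducing barrage requires.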

Come on people, as the old cliché goes: if you want a job done properly, you’re better off doing it yourself. While some will be waiting for news from Germany of who wins Deutsche Post’s Security Cup, we must not lose sight of our own challenges. If there are vulnerabilities in your applications, can you really afford to wait until someone else tells you about them? You must regularly inspect your defenses to make sure they’re standing strong, with no chinks, or go down in a blaze of bullets.


Anthony Haywood’s computing history began during the first computer revolution in the early 1980s, writing programs for the Sinclair ZX80 and the Texas Instruments TI99/4A. During the early 1990s, he worked for Microsoft in a cluster four team supporting its emerging products. Shortly after the release of Windows 95, Haywood was invited to join NetManage – a highly successful Silicon Valley-based company providing internet technologies. In 2002, Haywood founded his first network security company, Blade Software, pioneering the development of the ground-breaking “stack-less” network security assessment and auditing technology. In 2004, Haywood founded his second network security company, Karalon. It was during this time that Haywood developed a new network-based security auditing and assessment technology with the aim of providing a system and methodology for auditing the capabilities of network-based security devices, with the ability to apply “security rules” to fine-tune intrusion detection and prevention systems. The year 2009 saw Haywood join forces with Idappcom, where he is currently the company’s CTO.
