(Ping! Zine Issue 42) – In the last year a number of organizations have offered rewards, or ‘bounty’ programs, for discovering and reporting bugs in applications. Mozilla currently offers up to $3,000 for the identification of critical or high-severity bugs, Google pays out $1,337 for flaws in its software, and Deutsche Post is currently sifting through applications from ‘ethical’ hackers to approve the teams who will go head-to-head for its Security Cup in October. The winning team can hold aloft the trophy if it finds vulnerabilities in the company’s new online secure messaging service – which must be comforting to current users. So, are these incentives the best way to make sure your applications are secure?
I’d argue that these sorts of schemes are nothing short of publicity stunts and can, in fact, be dangerous to end users’ security.
One concern is that inviting hackers to trawl all over a new application prior to its launch simply grants them more time to interrogate it and identify weaknesses – weaknesses they may decide are more valuable if kept to themselves. Once the first big announcement is made detailing who has purchased the application, and where and when the product is to go live, the hacker can use this insight to breach the system and steal the corporate jewels.
A further worry is that, while on the surface these companies may seem to be open and honest, would they actually raise the alarm and warn people if a serious security flaw were identified? It’s my belief that they’d fix it quietly, release a patch and hope no one hears about it. Meanwhile the hacker could happily claim the reward, take a vow of silence and then ‘sell’ the details on the black market – leaving any user who is waiting for the patch, or who fails to install the update, with a great big security void in their defenses just waiting to be exploited.
Sometimes it’s not even a flaw in the software that causes problems. If an attack is launched against the application, causing it to fail and reboot, then this denial-of-service (DoS) attack can be just as costly to your organization as a breach in which data is stolen.
A final word of warning: even if the application isn’t hacked today, that doesn’t mean no one will be able to breach it tomorrow. Windows Vista is one such example. Microsoft originally hailed it as the most secure operating system it had ever made – and we all know what happened next.
A proactive approach to security
IT is never infallible, and for this reason penetration testing is often heralded as the hero of the hour. That said, technology has moved on and, while still valid in certain circumstances, traditional penetration testing techniques are often limited in their effectiveness. Let me explain: a traditional test is executed from outside the network perimeter, with the tester seeking applications to attack. However, because these assaults all come from a single, unchanging IP address, intelligent security software will recognize the behavior. Within the first two or three attempts the source address is blacklisted or firewalled, and all subsequent traffic is immaterial because everything from that address is seen and treated as malicious.
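To make the point concrete, here is a minimal sketch of the blacklisting behavior described above: after a handful of flagged requests from one source address, every subsequent packet from that address is dropped. The class name, the threshold of three attempts and the `looks_malicious` flag are all illustrative assumptions, not any real product’s API.

```python
from collections import defaultdict

SUSPICIOUS_THRESHOLD = 3  # blacklist after this many flagged attempts


class NaiveBlacklist:
    """Toy model of a firewall that blacklists a noisy source IP."""

    def __init__(self):
        self.flagged = defaultdict(int)  # source IP -> suspicious hits
        self.blacklist = set()

    def handle(self, src_ip, looks_malicious):
        """Return True if the request is allowed through."""
        if src_ip in self.blacklist:
            return False  # all further traffic from this IP is dropped
        if looks_malicious:
            self.flagged[src_ip] += 1
            if self.flagged[src_ip] >= SUSPICIOUS_THRESHOLD:
                self.blacklist.add(src_ip)
        return src_ip not in self.blacklist
```

Because a traditional penetration test never changes its source address, it trips this threshold within the first few probes, and everything it sends afterwards tells you nothing about how the defenses would cope with a distributed, real-world attack.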
An intelligent proactive approach to security
There isn’t one single piece of advice that is the answer to all your prayers. Instead you need two, and both need to be conducted simultaneously if your network is to perform in perfect harmony: application testing combined with intrusion detection.
The reason I advocate application testing is that, if a public-facing application were compromised, the financial impact on the organization could be fatal. Technologies are available that can test your device or application with a barrage of millions upon millions of iterations, using different broken or mutated protocols and techniques, in an effort to crash the system. If a hacker were to do this and caused the system to fall over or reboot, the resulting denial of service would be, at best, embarrassing.
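The “broken or mutated protocols” approach is essentially mutation-based fuzzing: take a valid message, corrupt it at random, and see whether the target survives. The sketch below illustrates the idea under stated assumptions – the mutation strategies, iteration count and the `target` callable are invented for illustration, not drawn from any particular testing product.

```python
import random


def mutate(payload: bytes, rng: random.Random) -> bytes:
    """Produce one corrupted variant of a valid protocol message."""
    data = bytearray(payload)
    choice = rng.randrange(3)
    if choice == 0 and data:            # flip a random byte
        i = rng.randrange(len(data))
        data[i] ^= rng.randrange(1, 256)
    elif choice == 1 and data:          # truncate the message
        data = data[: rng.randrange(len(data))]
    else:                               # duplicate a random tail slice
        i = rng.randrange(len(data) + 1)
        data += data[i:]
    return bytes(data)


def fuzz(target, seed_payload: bytes, iterations: int = 10_000) -> list:
    """Throw mutated inputs at target; return those that crashed it."""
    rng = random.Random(42)             # fixed seed for a reproducible run
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed_payload, rng)
        try:
            target(candidate)
        except Exception:               # a crash a hacker could trigger at will
            crashes.append(candidate)
    return crashes
```

A real tool does this at the network layer against the live device, at a scale of millions of iterations; the principle, though, is the same – any input that makes the parser fall over is a ready-made denial-of-service attack.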
Intrusion detection, capable of spotting zero-day exploits, must be deployed to audit and test the recognition and response capabilities of your corporate security defenses. It will substantiate not only that the network security is deployed and configured correctly, but that it’s capable of protecting the application you’re about to make live, or have already launched, irrespective of the service it supports – email, a web service, anything. The device looks for behavioral characteristics to determine whether an incoming request to the product or service is likely to be good and valid, or indicative of malicious behavior. This provides not only reassurance, but all-important proof, that the network security is capable of identifying and mitigating the latest threats and security evasion techniques.
While we wait with bated breath to see who will lift Deutsche Post’s Security Cup, we mustn’t lose sight of our own challenges. My best advice is that, instead of waiting for the outcome and relying on others to keep you informed of vulnerabilities in your applications, you should regularly inspect your defenses to make sure they’re standing strong with no chinks. If you don’t, the bounty may as well be on your head.
Writer’s Bio: Anthony Haywood is currently the Chief Technology Officer for Idappcom, guiding its future development of advanced network-based security auditing and testing technologies, as well as assisting organisations to achieve the highest levels of network threat detection and mitigation. For more information, visit www.idappcom.com