(Ping! Zine Issue 58) – If you’ve been following security news, you are aware of the alarming increase in DDoS attacks. No longer a theoretical threat, DDoS is rapidly becoming a very real concern for many website operators. Easily attainable DDoS tools and cheap rent-a-DDoS services have significantly lowered the barrier to entry, widening the pool of potential attackers. As a result, almost anyone can now become a target: government sites, large financial institutions, the SMB community, e-commerce websites and even personal blogs.
One telling example of just how arbitrary and damaging these attacks have become comes from this exchange in an IRC channel:
<zeiko> [03:06:45] what.cd is now being under DDoS attack until I get my invite.
<zeiko> [03:06:51] attack is set to 48hours, be back later I hope I’ll have my invite.
<zeiko> [03:07:15] yeah I want my account backj
<zeiko> [03:07:32] or what.cd will die
This may seem like an empty threat but in reality, after being denied access to an invite-only BitTorrent community, this one disgruntled user was able to singlehandedly bring down a number of websites, including What.cd, PassThePopcorn.me and Broadcasthe.net, along with several others that went down as collateral damage to the shared hosting servers.
Last year we saw how the growing frequency of such scenarios drove demand for DDoS mitigation solutions and pushed anti-DDoS vendors to the limits of their creativity – giving birth to several good ideas. On the other hand, this also created a lot of options, ranging from on-premise appliances to cloud-based mitigation services.
The differences between all those options are often unclear and can sometimes be downright confusing. Choosing wisely requires knowledge of a few evolving trends and an understanding of general industry best practices.
Evolving DDoS Trends – The shape of things to come
Many DDoS attacks rely on bots: automated visitors that overwhelm server resources, causing the server to crash or slow to a halt. To some extent these bots are designed to mimic human visitors – making it harder to weed them out without blocking legitimate traffic. Most anti-DDoS solutions deal with bots by presenting them with a set of challenges designed to reveal their true identity, but in this game of cat and mouse, neither side can hold the advantage for long.
Through reverse engineering, attackers learn the rules of the challenge and modify their bots to parse the challenge parameters accordingly. What’s interesting here is that such tactics are usually designed to bypass the bigger, more popular anti-DDoS providers, simply because there is more to gain by breaking through their defenses.
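To make the cat-and-mouse game concrete, here is a minimal sketch of one common style of bot challenge: the server hands the client a nonce, a small script served with the page computes an answer, and the answer is verified on the next request. Simple bots that never execute the script fail the check. All names and the HMAC-based scheme here are illustrative assumptions, not any particular vendor's implementation.

```python
import hashlib
import hmac
import os

# Per-deployment secret (hypothetical); real systems would rotate this.
SECRET = os.urandom(32)

def issue_challenge(client_ip: str) -> tuple[str, str]:
    """Return (nonce, expected_answer). A small script served to the
    visitor would compute the answer; most crude bots never run it."""
    nonce = os.urandom(8).hex()
    answer = hmac.new(SECRET, f"{client_ip}:{nonce}".encode(),
                      hashlib.sha256).hexdigest()
    return nonce, answer

def verify(client_ip: str, nonce: str, answer: str) -> bool:
    """Check the answer submitted with the follow-up request."""
    expected = hmac.new(SECRET, f"{client_ip}:{nonce}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, answer)
```

Once an attacker reverse-engineers the transformation, their bots can compute the answer too – which is exactly why vendors keep changing the challenge.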
Botnets are clusters of compromised “zombie” machines used for DDoS attacks by remote operators, i.e. “bot shepherds”.
Originally, botnets consisted of Trojan-infected PCs, but recent evidence shows a new trend: increased use of compromised web servers.
There are a few reasons why bot shepherds prefer servers. For one, servers are much more exposed, since practically every hosted website can be used as an entry point for a breach – a server is only as strong as its weakest link, which is usually not strong at all. Moreover, unlike home PCs, many servers do not benefit from routine antivirus checks, and the amount of data they hold makes after-the-fact detection that much harder.
The second reason is a server’s damage potential. Due to their relative proximity to the Internet backbone and their larger pipes, the number of requests a single server can produce is much higher than that of a single PC. This fact alone is enough to make servers a far more lucrative target.
A recent study published by security-in-the-cloud provider Incapsula showed how a single backdoor, placed on an under-protected general-interest website, turned a UK server into a DDoSing zombie machine used to send SYN flood attacks against US banks.
This is just one example of this phenomenon, and part of the reason why DDoS attacks are getting larger and demanding more mitigation resources. This is why, when evaluating anti-DDoS providers, one should always take into account factors like scalability and proximity to the backbone. Looking forward, DDoS attacks will only get larger, and you want to be sure that your provider is up for the challenge.
Basic Rules of DDoS Protection
Even when under attack, the anti-DDoS service should keep your website fully operational. Visitors should be able to access your site at all times, without delay, without being sent through holding areas or splash screens, and without receiving outdated cached content.
Visitors flocking to your site should never be shut out without the possibility of redemption. At the very least, users should be able to speak up (via a complaint area) or redeem themselves by completing a CAPTCHA.
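The redemption flow described above can be sketched in a few lines: a visitor flagged as a bot is offered a CAPTCHA instead of a hard block, and solving it grants temporary access. The function names, the in-memory whitelist, and the one-hour TTL are all illustrative assumptions.

```python
import time

WHITELIST_TTL = 3600  # seconds of access after solving; illustrative value
whitelist: dict[str, float] = {}  # client IP -> expiry timestamp

def is_whitelisted(ip: str) -> bool:
    return whitelist.get(ip, 0) > time.time()

def on_captcha_solved(ip: str) -> None:
    """Called after the visitor solves a CAPTCHA; grants temporary access."""
    whitelist[ip] = time.time() + WHITELIST_TTL

def gate(ip: str, flagged_as_bot: bool) -> str:
    """Decide how to handle a request: serve it, or offer redemption."""
    if not flagged_as_bot or is_whitelisted(ip):
        return "serve"
    return "captcha"  # never a dead end – the visitor can redeem themselves
```

The key design point is that a false positive costs the legitimate visitor one CAPTCHA rather than locking them out entirely.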
Strict Application Layer Protection
The application layer is your Achilles’ heel, and your anti-DDoS solution must block all application-layer bot requests. Most sites have very little headroom – even 50 excess page views per second can take down your site or slow it to a halt. Transparency, however, should not come at the expense of airtight protection.
Cost Effective Scalability
Attacks are getting bigger, so your service must be able to absorb arbitrarily massive amounts of traffic. Service providers do this by building hundreds of gigabits of aggregate capacity, where possible. Appliance vendors deal with it by stacking and cloudifying their appliances. Inquire about scalability and find the path that is most cost-effective for your needs.
Zero to None False Positives
An often-overlooked aspect of DDoS protection is that it actually involves two steps: detecting that you are under attack, and defending against it. Detection tends to be overlooked, yet it is the trickier task. Nobody wants to accidentally activate DDoS defenses when not under attack, but nobody wants to watch their site 24x7x365 either. Use services that do the monitoring for you.
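A very simple form of the detection step can be sketched as a rolling-baseline check: track normal requests per second and flag an attack only when traffic spikes well above that baseline, which keeps false positives down during ordinary fluctuations. The window size and the 5x factor here are illustrative assumptions; real monitoring services use far more sophisticated signals.

```python
from collections import deque

class AttackDetector:
    """Flags an attack when requests/sec exceed a multiple of the
    rolling baseline. Thresholds are illustrative, not recommended values."""
    def __init__(self, window: int = 60, factor: float = 5.0):
        self.history = deque(maxlen=window)  # recent normal rps samples
        self.factor = factor

    def observe(self, rps: float) -> bool:
        """Feed one requests-per-second sample; return True if it looks
        like an attack."""
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            attack = rps > baseline * self.factor
        else:
            attack = False  # not enough history to judge yet
        if not attack:
            # Only normal traffic updates the baseline, so an ongoing
            # attack cannot quietly raise it.
            self.history.append(rps)
        return attack
```

Even this toy version shows why detection is the trickier half: picking the window and threshold is a trade-off between reacting quickly and crying wolf.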