Defending Against Web Server Denial of Service Attacks

Published: 2013-07-27
Last Updated: 2013-07-28 02:44:48 UTC
by Scott Fendley (Version: 1)
5 comment(s)

Earlier this weekend, one of our readers reported an odd attack against an Apache web server that he supports.  The server was getting pounded with port 80 requests like the excerpt below.  The attack had been ramping up since the 21st of July, but the "owners" of the server only detected problems with website accessibility today.  They contacted the server support staff, who attempted to block the attack by scripting a search for the particular user agent string and then dropping the offending IP addresses into iptables rules.  One big problem, though: the attack was originating from upwards of 4 million IP addresses over the past several days, roughly 40k each hour.  That is far too many iptables rules to keep in a chain and is generally unmanageable.
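
As a rough illustration of what the support staff described (the log path, the use of the INPUT chain, and the overall approach are assumptions on my part), the blocking script might look something like this:

#!/bin/sh
# Hypothetical sketch only: pull source IPs that sent the offending
# User-Agent out of the access log and drop each one with iptables.
# With ~4 million sources this chain quickly becomes unmanageable.
LOG=/var/log/apache2/access.log
UA='Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)'

grep -F "$UA" "$LOG" | awk '{print $1}' | sort -u |
while read ip; do
    iptables -A INPUT -s "$ip" -p tcp --dport 80 -j DROP
done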

The last-ditch effort was to use mod_security to drop anyone presenting that user agent.  Unfortunately, a small percentage of legitimate customers may be blocked by this effort to contain the problem.  With this implemented, the server is usable again, at least until the attackers change their modus operandi.

It appears that the botnet of the day was targeting this domain for reasons we do not really understand.  Our reader wanted to share this information to help others defend against this type of activity.  It is quite likely that others out there are under attack right now, or will be in the future.

I would encourage our readers to think about how you would counteract an attack of this scale on your web servers.  This would be a good scenario to train and practice within your security organization and server support teams.  If you have other novel ideas for defending against this type of attack, please comment on this diary.


Sample of the DoS attack traffic (only 7 lines of the literally 4 million log lines from the past few days):
A.B.120.152 - - [21/Jul/2013:02:53:42 +0000] "POST /?CtrlFunc_DDDDDEEEEEEEFFFFFFFGGGGGGGHHHH HTTP/1.1" 404 9219 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
C.D.3.168 - - [21/Jul/2013:02:53:43 +0000] "POST /?CtrlFunc_yyyzzzzzzzzzz00000000001111111 HTTP/1.1" 404 9213 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
E.F.67.90 - - [21/Jul/2013:02:53:44 +0000] "POST /?CtrlFunc_FFFGGGGGGGGGGGGGGGGGGGGGGHHHHH HTTP/1.1" 404 9209 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
G.H.76.206 - - [21/Jul/2013:02:53:45 +0000] "POST /?CtrlFunc_iOeOOkzUEV8cUMTiqhZZCwwQBvH9Ot HTTP/1.0" 404 9136 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
I.J.21.174 - - [21/Jul/2013:02:53:45 +0000] "POST / HTTP/1.1" 200 34778 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
K.L.57.51 - - [21/Jul/2013:02:53:45 +0000] "POST / HTTP/1.1" 200 34796 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
M.N.29.143 - - [21/Jul/2013:02:53:46 +0000] "POST /?CtrlFunc_ooppppppppppqqqqqqqqqqrrrrrrrr HTTP/1.1" 404 9213 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"

mod_security rule:
SecRule REQUEST_HEADERS:User-Agent "^Mozilla/4.0 \(compatible; MSIE 6.0; Windows NT 5.1; SV1\)$" "log,drop,phase:1,msg:'Brute Force Attack Dropped'"
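
If you want to try something similar, a minimal sketch of how such a rule might be wired into the Apache configuration follows (it assumes mod_security2 is installed and loaded; the rule id is arbitrary):

# Minimal sketch, assuming mod_security2 is installed and loaded.
# Place in the main server config or an included conf file, then reload Apache.
<IfModule security2_module>
    SecRuleEngine On
    SecRule REQUEST_HEADERS:User-Agent "^Mozilla/4.0 \(compatible; MSIE 6.0; Windows NT 5.1; SV1\)$" \
        "log,drop,phase:1,id:1000001,msg:'Brute Force Attack Dropped'"
</IfModule>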


 


Comments

It seems the attack is a POST to "/" (possibly with a ?query string).
Maybe lines like
RewriteCond %{REQUEST_METHOD} ^POST
RewriteRule ^/$ - [F]
RewriteCond %{REQUEST_METHOD} ^POST
RewriteRule ^/\?.* - [F]
in the Apache configs (for example in sites-enabled/*) would help.
Great diary entry, as this type of attack is challenging to defend against. I am the ModSecurity project lead, so I wanted to give another rule option -

SecRule REQUEST_METHOD "@streq POST" "chain,log,drop,phase:1,msg:'Brute Force Attack Dropped"
SecRule REQUEST_URI "@beginsWith /?CtrlFunc_"

Additionally, the OWASP ModSecurity CRS project has a DoS protection ruleset based on request bursts -
https://github.com/SpiderLabs/owasp-modsecurity-crs/blob/master/experimental_rules/modsecurity_crs_11_dos_protection.conf
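
Purely as an illustration of the burst-counting idea (not the actual CRS rules - the ids, threshold and expiry values below are made up), per-IP request counting in ModSecurity looks roughly like this:

# Illustration only: count requests per client IP and drop a source
# once it exceeds an arbitrary burst threshold within the expiry window.
SecAction "phase:1,id:900101,nolog,pass,initcol:ip=%{REMOTE_ADDR}"
SecAction "phase:1,id:900102,nolog,pass,setvar:ip.request_burst=+1,expirevar:ip.request_burst=60"
SecRule IP:REQUEST_BURST "@gt 120" "phase:1,id:900103,log,drop,msg:'Request burst limit exceeded'"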

At SpiderLabs we have also seen where botnet clients (or opt-in DoS clients) are utilizing "Slow" connection consumption attacks -
http://blog.spiderlabs.com/2012/01/modsecurity-advanced-topic-of-the-week-mitigation-of-slow-read-denial-of-service-attack.html

Re: IPTables - although the sheer number of client IPs can become unmanageable, using something like the LaBrea tarpit patch for iptables is good because it will force infected botnet clients to stop sending layer 7 attack traffic. This will take load off the web server/ModSecurity from having to play whack-a-mole.
http://www.symantec.com/connect/articles/slow-down-internet-worms-tarpits
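
For example (illustrative only; this assumes the TARPIT target from xtables-addons is available, and 192.0.2.10 is a placeholder address):

# Illustrative only: hold a known-bad client in a TCP tarpit instead of
# dropping it, so the bot wastes time on a stalled connection.
iptables -A INPUT -s 192.0.2.10 -p tcp --dport 80 -j TARPIT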

Lastly - if the size of the botnet is large enough there may be nothing an SMB can do... This is why orgs should consider utilizing a CDN such as Akamai or CloudFlare.

<shameless_plug>
I discuss many of these issues in my new book -
http://www.wiley.com/WileyCDA/WileyTitle/productCd-1118362187.html
</shameless_plug>
Great advice, particularly about what to do when you can't handle it yourself. I always liked ModSecurity and have used it for years.

A friend's company was facing an imminent DDoS issue and we scrambled to get a DDoS mitigation provider lined up. The biggies had ridiculous lead times to get something in place, 30+ days, OR my friend could have paid many, many thousands of dollars to get expedited. I think Akamai is great for large companies but maybe not for SMBs. Their cost would have more than quadrupled what my friend was paying for web design and hosting. CloudFlare had perceived reputational issues (because of some of their highly publicized customers like WikiLeaks), so they were off the table as well.

They ended up picking Incapsula from Imperva because they were able to get the customer set up in less than twelve hours from the initial contact, had reasonable prices and, most importantly, provided a web app firewall capability 24x7. Since that setup they have had alerts for application resource exhaustion attacks and numerous remote file include and SQL injection attacks that were blocked by the WAF - incidents that proved they needed the new service. Their web server now only accepts connections from the Incapsula data centers, so their logs are a lot cleaner and no one can bypass the WAF. The WAF is pretty basic but has the advantage of Imperva's WAF reputation and the insight into what other Incapsula customers are seeing.

It's always best to have a service lined up ahead of the need. If you're looking at a pure DDoS mitigation provider, they may work by staying completely out of the way until an attack is detected; then they swing your traffic to their "cleansing centers" until it's over. Personally I find much more value in the "always on" in-the-cloud WAF providers.
I'm not sure if this is meant to be a DoS, or maybe a brute-force attack on the session data of some application.

The attackers have made it really easy to block this with a pattern match:

RewriteEngine On
RewriteCond %{QUERY_STRING} ^CtrlFunc
RewriteRule .* - [L,R=403]

For some URIs, a POST is invalid so you could return 405 Method Not Allowed:

RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^POST$
RewriteRule ^/$ - [L,R=405]

I see in the log excerpts these were resulting in a 404. I would consider ditching any complex, scripted 403/404/405 handlers in favour of a static page (otherwise the performance hit may be just as severe, or worse).

If the number/rate of connections is sufficiently high, no amount of filtering on the query is going to help, because Apache resources are exhausted just handling the request and processing htaccess and rewrite rules. At some point a reverse proxy setup is needed.

If your site has mostly static content, the most important thing may be that customers can still see the homepage or other areas of the site. If the reverse proxy is allowed to cache those pages, it may not matter much if Apache in the backend is overloaded sometimes.

Lightweight reverse proxies (like Nginx and many others) can apply filtering on request method, query strings or user agent more efficiently and at higher numbers of connections, before handing off genuine requests to the Apache backend where more resources will be consumed. These can even run on different (or multiple) servers if you're approaching hardware or bandwidth limits and need to scale up.

Finally, where query strings are known not to be needed for some URI, you could have the reverse proxy strip them out ('normalise' the URI), either serving a redirect or simply serving the plain, cached homepage. Nginx's Naxsi module may also be suitable - it is designed to automate this somewhat, learning which query strings are valid for your web applications and filtering anything else.
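
A very rough sketch of that kind of filtering Nginx front end (the backend port, cache zone name and paths are assumptions) might be:

# Hypothetical sketch: reject the attack pattern and serve cached content
# before anything reaches the Apache backend. Port and zone are assumptions.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m;

server {
    listen 80;

    location / {
        # Drop the botnet's characteristic query string outright
        if ($args ~* "^CtrlFunc_") {
            return 403;
        }
        proxy_cache static_cache;
        proxy_pass  http://127.0.0.1:8080;  # Apache backend (assumed port)
    }
}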
Hi Guys,

Great post - we're experiencing exactly the same problem. I've installed mod_security and I've tried adding the following rules:

SecRule REQUEST_METHOD "@streq POST" "chain,log,drop,phase:1,msg:'Brute Force Attack Dropped"
SecRule REQUEST_URI "@beginsWith /?CtrlFunc_"

But Apache doesn't seem to restart. I noticed a missing ' at the end of the first line, but that doesn't seem to be the problem. Any ideas on what is wrong with the above rules would be appreciated - it would be great if this fixed our problem.

Cheers
