Spamming and scanning botnets - is there something I can do to block them from my site?
This question keeps popping up on forums and other places popular with those beleaguered souls despairing of random spam and logs overflowing with scans. Although this isn't a Magic 8-Ball question, the answer does come out as: maybe, maybe not.
The reason behind the ambiguity is logical, to a degree: it's far easier to hinder, frustrate and reduce the effectiveness of automated botnet processes, like spam posting and scanning, than it is to stop them dead.
Why? Glad you asked.
Botnets tend to be made up of a number of systems scattered in random locations around the globe. You'll get some that are region-specific, but the majority we at the Internet Storm Center (ISC) see are global in distribution. So unless you can pick out the only six IP addresses you completely trust as good*, your site is accessible to every system on the planet that has an internet link.
First and foremost, you need to look at your logs and find those non-human attacks or blog spam posts. We keep saying here at the ISC that you need to understand your logs. If you don't understand what you're seeing, research it or write in to us. It doesn't take long to learn to tell a real human interaction from an automated, non-human one. Have a look at one of our recent posts [1] to see the types of patterns automated processes leave behind in logs.
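If you'd rather not eyeball thousands of lines, a few lines of script can make the first pass for you. Here's a minimal sketch, assuming Apache/nginx combined-format access logs; the file name, the suspicious-path list and the 100-request threshold are all illustrative, so tune them to your own site:

    import re
    from collections import Counter

    # Paths no human browsing your site would normally request (illustrative)
    SUSPICIOUS_PATHS = ("/HNAP1", "/wp-login.php", "/phpMyAdmin", "xmlrpc.php")
    # Combined log format: client IP, then the quoted "METHOD /path ..." request
    LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "\S+ (\S+)')

    hits = Counter()        # total requests per client IP
    suspicious = Counter()  # per-IP hits on paths humans rarely browse to

    with open("access.log") as log:
        for line in log:
            m = LINE_RE.match(line)
            if not m:
                continue
            ip, path = m.groups()
            hits[ip] += 1
            if any(p in path for p in SUSPICIOUS_PATHS):
                suspicious[ip] += 1

    for ip, count in hits.most_common(20):
        if count > 100 or suspicious[ip]:
            print("%-15s %5d hits, %d suspicious" % (ip, count, suspicious[ip]))

An IP that hammers paths your site doesn't even have is a safe bet for the non-human pile.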
Let's say you are now at one with your log files, so what next? From a random reader's submission of the bots they logged, I did a little Excel shuffling, then some IP geolocation, followed by more Excel-ing, finally breaking the IP addresses down by the country they came from. The results were interesting: Spain had the highest share of bad IPs (13%), followed by Argentina (9%), Italy (8%), Colombia (5%), United States (5%), United Kingdom (4%), Mexico (4%), Romania (4%) and Germany (4%).
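For the curious, that breakdown doesn't need Excel at all. Here's a rough sketch of the same count, assuming you have MaxMind's free GeoLite2 country database on disk and the geoip2 Python library installed (both are assumptions; any IP-to-country lookup will do), with one bad IP per line in bad_ips.txt:

    import geoip2.database
    import geoip2.errors
    from collections import Counter

    reader = geoip2.database.Reader("GeoLite2-Country.mmdb")
    countries = Counter()

    with open("bad_ips.txt") as f:
        for line in f:
            ip = line.strip()
            if not ip:
                continue
            try:
                countries[reader.country(ip).country.iso_code or "??"] += 1
            except geoip2.errors.AddressNotFoundError:
                countries["??"] += 1  # IP not in the database

    total = sum(countries.values())
    for code, count in countries.most_common():
        print("%s  %4.1f%%  (%d)" % (code, 100.0 * count / total, count))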
So what can we divine from these random statistics? First, we can acknowledge this botnet has a significant portion of its bots in Europe; second, the next biggest group is in South America, leaving the United States well out of the picture. Yeah, so what, I hear you yell. Now go back and look at the locations your human visitors come from. With a simple bit of review, you'll be able to work out that you never see visitors from, say, South American or New Zealand IP address ranges.
Now you can make the determination to blacklist (deny) netblocks in those countries from ever being able to access your web site or blog. On the flip side, you could block everything and whitelist (allow) certain countries. Or go crazy and play whack-a-mole by adding every single bad IP address to a block list. It's up to you.
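Whichever way you lean, country-level blocking is best enforced at the firewall or web server, but if you want to experiment first, here's a toy check using nothing but the Python standard library. It assumes blocked_netblocks.txt holds one CIDR per line, aggregated per country from whatever source you trust (the file name and sample addresses are illustrative):

    import ipaddress

    # blocked_netblocks.txt: one CIDR per line, e.g. 203.0.113.0/24
    with open("blocked_netblocks.txt") as f:
        BLOCKED = [ipaddress.ip_network(l.strip()) for l in f if l.strip()]

    def is_blocked(client_ip):
        # Linear scan is fine for a short list; use a radix tree at scale
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in BLOCKED)

    print(is_blocked("203.0.113.42"))   # True if 203.0.113.0/24 is listed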
The point of this piece is: look at your logs, understand your visitors, work out who actually needs to get to your site, and block out the rest if the now-constant automated scans annoy you.
Remember, DShield loves logs [2] and Handlers love weird logs and packets, so start off your New Year by looking at your logs and sending in anything wild, crazy, or just plain odd to us at the Storm Center [3]. You'll learn something new and might help someone who's been puzzling over the same questions you're looking at now.
[1] https://isc.sans.edu/diary/Massive+PHP+RFI+scans/17387
[2] https://isc.sans.edu/howto.html
[3] https://isc.sans.edu/contact.html#contact-form
* This kind of breaks the Internet model and takes us back to the good ol' days of having a hosts file to resolve stuff on the 'Net.
Chris Mohan --- Internet Storm Center Handler on Duty
Jan 14th 2014
One other way you can block forum/website spammers if you use PHP:
http://wiki.stopabuseonline.org//tiki-index.php?page=PHP_Block_TOR_POST
You can download known forum spammer lists from Stop Forum Spam instead of the TOR lists posted in the wiki article.
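The linked script is PHP, but the idea is language-agnostic: fetch the published ban list, then refuse form POSTs from listed IPs. A sketch of the same idea in Python, assuming you've already downloaded the Stop Forum Spam list to bannedips.txt (the file name and exact list format are assumptions, so adjust the parsing to whatever you actually fetch):

    # Stop Forum Spam lists come as one IP per line or comma-separated;
    # normalise both forms into a set for fast lookups.
    with open("bannedips.txt") as f:
        BANNED = set(f.read().replace(",", "\n").split())

    def allow_post(remote_addr):
        # Call with the client IP before accepting any form submission
        return remote_addr not in BANNED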
@Miss_Sudo | Jan 14th 2014
TCP scanning? Set up a TCP tarpit like Tom Liston's LaBrea. It can also be used to trap spammers.
It may not help you directly, but it will reduce the overall impact of the scanning. If enough people tarpitted vulnerable service ports that they do not publicly provide, the attraction and impact of scanning would be greatly reduced.
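For illustration only: LaBrea does its work at the packet level, answering for unused IP addresses and pinning scanners with tiny or zero TCP windows, which an ordinary socket program can't replicate. Still, the principle of making each connection expensive is easy to demo. This toy listens on a port you don't really serve and dribbles one byte every ten seconds so the scanner's connection never completes cleanly (the port number and timing are arbitrary):

    import socket
    import threading
    import time

    def tarpit(conn):
        # Dribble one byte at a time until the scanner gives up
        try:
            while True:
                time.sleep(10)
                conn.send(b"\x00")
        except OSError:
            conn.close()  # peer finally went away

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2223))   # a port you don't actually serve
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        threading.Thread(target=tarpit, args=(conn,), daemon=True).start()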
John Hardin | Jan 14th 2014
There are some good blacklists out there which are fairly well maintained and can be used to fend off this sort of thing. Emerging Threats (http://rules.emergingthreatspro.com/) has rulesets that cover botnet C&C hosts, known compromised hosts, etc. OpenBL (http://www.openbl.org/lists.html) covers SSH, FTP, and email scans. The Composite Blocking List (http://cbl.abuseat.org/) can be used for non-SMTP-reject mitigation so long as you abide by their rules, #7 in particular. And of course there are lists like Chris mentions, where you can blackhole entire countries.
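As a rough sketch of how you might consume one of these lists (the URL below is a placeholder; each list has its own download location and terms of use, so check before automating), pull the list and cross-reference it against IPs seen in your own logs:

    import urllib.request

    URL = "http://example.org/compromised-ips.txt"   # placeholder list URL

    # Most plain-text blocklists are one IP per line with '#' comments
    with urllib.request.urlopen(URL) as resp:
        lines = (l.strip() for l in resp.read().decode().splitlines())
        listed = {l for l in lines if l and not l.startswith("#")}

    with open("seen_ips.txt") as f:
        for ip in (l.strip() for l in f):
            if ip and ip in listed:
                print("blocklisted:", ip)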
Your ability to implement various levels of blocking based on these lists will naturally depend on your client/employer. A US-based real estate firm with 3,000 users may be a great site to employ all of these tactics, whereas a law firm with overseas ties may not be willing to risk lost contact from a potential visitor even if they're in Romania. Part of our responsibility is, of course, evaluating each situation and applying rules that help to secure our networks while making sure that business needs are still met.
parseword | Jan 14th 2014
If a Cisco ASA is present in front of the web server, some simple filters can be written to block the noisiest/dumbest bots. Note this doesn't work for HTTPS, but much bot traffic is HTTP. Here are rules that block a moronic scanner looking for residential router backdoors that was flooding the log here:

    regex uri01 "^/+HNAP1"
    regex meth01 "^[^A-Za-z]"
    class-map type regex match-any uri_reject_list
     match regex uri01
    class-map type inspect http match-any http_evil
     match request method regex meth01
     match request uri regex class uri_reject_list
    policy-map type inspect http http_policy
     parameters
      protocol-violation action drop-connection
     class http_evil
      drop-connection

The above can be expanded in numerous ways to filter a great deal of junk requests. One gotcha is that certain changes cause the ASA to shut down port 80 and reject all requests until the policy-map rule is removed and re-added.
Starlight | Jan 14th 2014