Are Your Hunting Rules Still Working?

Published: 2018-06-21
Last Updated: 2018-06-21 12:00:04 UTC
by Xavier Mertens (Version: 1)

You are working in an organization that has implemented good security practices: log events are collected and indexed by a nice, powerful tool. The next step is usually to enrich this (huge) amount of data with external sources. You collect IOCs, you get feeds from OSINT. Good! You start to create many reports and rules to be notified when something weird is happening. Everybody agrees that receiving too many alerts is bad and that people will stop paying attention to them if they are constantly flooded. So, you fine-tuned your rules to get a reasonable volume of alerts with a low (read: acceptable) rate of false positives. But are your rules still relevant and properly implemented? Is it normal to never get a notification?

In physical security, it is mandatory to test fire alarms in big buildings on a regular basis. Here in Belgium, the test is usually scheduled at noon on the first Thursday of every month. And what about infosec? I met a C-level who was always asking:

"Hey Guys, anything suspicious detected on our infrastructure?"
"Nothing special, we are fine!"
"Hmmm, maybe I should be scared of this answer!"

As our infrastructures change quickly, it could be a good idea to implement the same kind of regular check-up to trigger your hunting rules. Are you looking for suspicious DNS traffic? Then schedule a monthly DNS resolution of a pre-defined FQDN. It must trigger your rule. If it doesn't, you're in trouble and may be missing real suspicious traffic.
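A minimal sketch of such a canary check, scheduled from cron, could look like the Python script below. The FQDN is hypothetical: point it at a domain that your own hunting rule is guaranteed to flag (for example, one present in your IOC feed), then verify that the expected alert shows up in your SIEM.

    #!/usr/bin/env python3
    # Canary DNS lookup (run monthly from cron). The FQDN below is a
    # hypothetical placeholder: use an indicator your hunting rule watches.
    import socket
    import sys

    CANARY_FQDN = "canary.example-ioc.test"  # hypothetical test indicator

    try:
        socket.gethostbyname(CANARY_FQDN)
        print(f"Canary lookup for {CANARY_FQDN} sent; now verify that the alert fired.")
    except socket.gaierror:
        # NXDOMAIN is fine: the query still reaches the resolver and its logs.
        print(f"Canary lookup for {CANARY_FQDN} returned NXDOMAIN; the query was still logged.")
    sys.exit(0)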

We hate to be flooded with false positives, but never getting one is even more suspicious! And it keeps your security analysts awake!

Did you implement such controls? Feel free to share!

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key


Comments

I implemented a few Nagios checks to make sure the security infrastructure was at least seeing traffic: a Nagios check to watch the sniffer interfaces on a Snort sensor, for instance, but checking that the average bandwidth was AT LEAST some rate rather than throwing an alert if it exceeded some rate. :-)

As for checking the log servers, the dashboards and canned queries were set up to NOT filter out our vulnerability scanners. So any time I kicked one off to scan a new system being deployed to a DMZ segment, or to re-scan an internal server net (for instance), the intrusion detection systems would wave a red flag, the log servers would log it, and we'd see it. It was easy in Kibana to then exclude that scan. Crude, but at least it meant that we would periodically confirm that the intrusion sensors were actually detecting stuff and the log servers were dutifully logging it.
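A minimal sketch of such a minimum-traffic check, written as a Nagios-style plugin in Python, could look like this. The interface name and the threshold are assumptions, and it reads Linux's /proc/net/dev, so adjust both for your own sensor.

    #!/usr/bin/env python3
    # Nagios-style check: alert when a sniffer interface sees LESS than a
    # minimum traffic rate (a silent sensor is the red flag here).
    import sys
    import time

    IFACE = "eth1"            # hypothetical sniffing interface
    MIN_BYTES_PER_SEC = 1000  # hypothetical floor

    def rx_bytes(iface):
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    return int(line.split(":")[1].split()[0])
        raise RuntimeError(f"interface {iface} not found")

    try:
        first = rx_bytes(IFACE)
        time.sleep(10)
        rate = (rx_bytes(IFACE) - first) / 10.0
    except Exception as e:
        print(f"UNKNOWN - {e}")
        sys.exit(3)

    if rate < MIN_BYTES_PER_SEC:
        print(f"CRITICAL - {IFACE} only {rate:.0f} B/s (sensor may be blind)")
        sys.exit(2)
    print(f"OK - {IFACE} at {rate:.0f} B/s")
    sys.exit(0)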

I'd toyed with the idea of making a Nagios check that would occasionally wget a virus test file (or something that would predictably cause an alert to fire) and then check the logs for that event. Never got around to it, though (and we were recently acquired, so the stuff I set up is gradually being torn down). But that would be an easy Nagios plugin to make...
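A minimal sketch of that idea, assuming the EICAR test file is used as the trigger, could look like the Python snippet below. The URL is the one commonly published by EICAR and may change; confirming that the resulting alert actually appears in the log platform is left to the analyst or a follow-up check.

    #!/usr/bin/env python3
    # Fetch the EICAR test file so the proxy/IDS should raise an alert,
    # then verify in the logs that the event was seen.
    import sys
    import urllib.request

    EICAR_URL = "https://secure.eicar.org/eicar.com"  # commonly published URL, may change

    try:
        urllib.request.urlopen(EICAR_URL, timeout=10).read()
        print("OK - EICAR test file fetched; verify the corresponding IDS/proxy alert")
        sys.exit(0)
    except Exception as e:
        # A block by the proxy or AV is also a useful signal that a control reacted.
        print(f"WARNING - fetch failed ({e}); check whether a control blocked it")
        sys.exit(1)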
I will continue to solidly preach whitelisting and backups for days.

It's so easy to catch wonky traffic if you only whitelist...

It sucks for things such as Facebook and the other internet-heavy apps that change IP addresses, but still...
