Unauthorized Change Detected!

Published: 2016-10-08
Last Updated: 2016-10-08 13:33:23 UTC
by Russell Eubanks (Version: 1)

How do you detect what has changed in your environment? Can you think beyond the alerts you get from your tools and consider which changes you absolutely need to know about the moment they occur? When systems in your environment move from "normal" to "abnormal", would you even notice?
 
Occasionally I have a credit card transaction denied. The most common reason is that I am in a part of the country outside my normal travel and spending patterns. When that happens, the panic quickly subsides once I recognize that something in my baseline has changed.
 
How can pattern and trend analysis apply to monitoring and defending your networks? Consider developing a similar baseline to detect possible unauthorized changes. This practice may very well help you detect changes that do not follow the proper change control process, and it will give you deeper insight into the activity on your network. A practical step is to create a monthly calendar appointment named "What is missing from my baseline?" to remind yourself to answer this question on a recurring basis. It will also help you develop a more meaningful relationship with your system administrators and application developers by asking them questions and learning more about their systems - both of which are highly encouraged.
 
To detect patterns and trends, consider developing a rolling 30, 60, or 90 day history in a few critical areas to show not only the current status, but also how it compares to recent activity over time. This insight will help identify patterns that exist beyond the point-in-time alerts we regularly receive. Not every area requires this extended analysis, but in some cases a trend over time reveals patterns that would otherwise go unrecognized and unnoticed.
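
As a minimal illustration of what such a rolling comparison could look like (the window size, threshold, and sample counts below are made up for illustration, not taken from this diary), a trailing-window check in Python might be:

from datetime import date, timedelta
from statistics import mean, stdev

def flag_trend_outliers(daily_counts, window=30, threshold=2.0):
    # daily_counts: dict mapping a date to the number of events seen that day,
    # e.g. after-hours administrative logins. Flags any day whose count exceeds
    # the trailing-window mean by more than `threshold` standard deviations.
    outliers = []
    days = sorted(daily_counts)
    for i, day in enumerate(days):
        history = [daily_counts[d] for d in days[max(0, i - window):i]]
        if len(history) < window:
            continue  # not enough history to compare against yet
        mu, sigma = mean(history), stdev(history)
        if daily_counts[day] > mu + threshold * max(sigma, 1.0):
            outliers.append((day, daily_counts[day], mu))
    return outliers

# Made-up numbers: a quiet month of roughly one event per day, then a spike.
counts = {date(2016, 9, 1) + timedelta(days=i): 1 for i in range(40)}
counts[date(2016, 10, 8)] = 12
for day, count, mu in flag_trend_outliers(counts):
    print(f"{day}: {count} events vs. about {mu:.1f}/day over the prior 30 days")

Anything flagged this way is only a prompt to go ask questions, not proof of an unauthorized change.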
 
Consider the following for your baseline (a small sketch of the first check follows the list):
Administrative logins after normal business hours
Administrative logins outside of approved change windows
Badge access to your building after normal business hours
Systems that restart outside of approved change windows
Services that restart outside approved change windows
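
As a rough sketch of the first item, assuming a hypothetical CSV export of authentication events with timestamp, user, and is_admin columns and an assumed 07:00-19:00 weekday business window:

import csv
from datetime import datetime

BUSINESS_START = 7   # assumed start of business hours (07:00 local)
BUSINESS_END = 19    # assumed end of business hours (19:00 local)

def after_hours_admin_logins(path):
    # Expects a CSV with columns: timestamp (ISO 8601), user, is_admin (true/false).
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["is_admin"].strip().lower() != "true":
                continue
            ts = datetime.fromisoformat(row["timestamp"])
            weekend = ts.weekday() >= 5
            outside_hours = not (BUSINESS_START <= ts.hour < BUSINESS_END)
            if weekend or outside_hours:
                yield row["user"], ts

if __name__ == "__main__":
    for user, ts in after_hours_admin_logins("auth_events.csv"):
        print(f"After-hours admin login: {user} at {ts}")

The same loop extends to the other items by swapping in badge or restart events and a calendar of approved change windows.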
 
Please use the comments area to share what’s in your baseline!
 
Russell Eubanks

Comments

The reality is that the vast majority of businesses have no such thing as an "approved change window", or there are so many exceptions that the term is meaningless for anybody except auditors, who only look at paperwork and attestations by managers. "Approved change windows" generally do not apply to test or development systems, and there is almost certainly live data in test and dev "so we can test things appropriately".

Unless your business is highly regulated there probably isn't even a formal change control process except for certain Sarbanes-Oxley systems.

This "Administrative logins after normal business hours" is a clear auditor check-box. Most major breaches recently have occurred during normal operating hours so they can hide in the noise. Think of the Target breach where the malware turned itself on at 10 AM and off at 5 PM. Think of the Anthem breach where a DBA noticed queries running under their account.

When an auditor asks how you check for failed logins after hours, ask them why they care. Point out that a failed login has no access to anything, and that what they should really be concerned about are successful logins during business hours while the associated individual is out of the office. Since that would require the watchers to know when everyone is in or out of the office, the auditors will protest that "It's too hard!" True, but it is also a far more reliable indicator of nefarious activity than the ten-year-old items they are concerned about.
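
For what it's worth, that correlation is not necessarily hard. A rough sketch, assuming hypothetical CSV exports of successful logins and of out-of-office days (the file names and columns are invented for illustration):

import csv
from datetime import date, datetime

def load_out_of_office(path):
    # Expects a CSV with columns: user, date (YYYY-MM-DD), one row per out-of-office day.
    away = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            away.setdefault(row["user"], set()).add(date.fromisoformat(row["date"]))
    return away

def logins_while_away(logins_path, ooo_path):
    # Expects a CSV of successful logins with columns: user, timestamp (ISO 8601).
    away = load_out_of_office(ooo_path)
    with open(logins_path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            if ts.date() in away.get(row["user"], set()):
                yield row["user"], ts

for user, ts in logins_while_away("successful_logins.csv", "out_of_office.csv"):
    print(f"Successful login while out of office: {user} at {ts}")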
On one pen test I was involved with, we gained full admin access to the HR system and no one noticed. Why? Because three months earlier the HR system had been migrated to new servers and a new version. What we broke into was the old HR system as it existed three months ago, which was still running so HR could run comparison queries if needed. But the system was never needed for that. No patches and no monitoring for three months. We could have formatted it and no one would have noticed, and yes, all of the real data was still there.

How did we break in? Via a known vulnerability in the backup software agent. No administrative login needed, yet we had administrative rights via the backup software.

On another pen test we used SNMP with the default "public" community string to query every device for its uptime. We found one Windows system that had not been rebooted in over a year and a half. No reboots means no patches. Fortunately for us this was in 2008, so we used the ultra-reliable MS08-067 exploit to break in. It was a domain-joined system that had a service running under a domain admin account. Game Over for that domain.

No one who worked there even knew the system existed. They had frequent turnover, so no one would touch anything they didn't know about.
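
A sweep like that is easy to reproduce. Here is a rough sketch using pysnmp's classic synchronous high-level API; the "public" community string and the sysUpTime check come from the story above, while the target addresses and the 90-day threshold are assumptions:

from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

SYS_UPTIME_OID = "1.3.6.1.2.1.1.3.0"  # SNMPv2-MIB::sysUpTime.0, in hundredths of a second

def uptime_days(host, community="public"):
    # Returns the reported uptime in days, or None if the host did not answer.
    # Note: sysUpTime is a 32-bit counter and wraps after roughly 497 days.
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),            # SNMP v2c
        UdpTransportTarget((host, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(SYS_UPTIME_OID)),
    ))
    if error_indication or error_status:
        return None
    return int(var_binds[0][1]) / 100 / 86400

for host in ["192.0.2.10", "192.0.2.11"]:                # hypothetical targets
    days = uptime_days(host)
    if days is not None and days > 90:
        print(f"{host}: up about {days:.0f} days - check its patch history")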


If you're gathering endpoint logs, look for forgotten systems: the ones that have had no logins for a while.

Use your monitoring tools to look for systems, both desktops and servers, that have not been rebooted in over a month. They're either forgotten systems or their patching is being neglected or is malfunctioning.

And yes, even Linux systems can need rebooting for kernel updates although perhaps not every month for mature versions. One place where Linux patching fails is when admins blindly apply patches without reading the documentation.

A common omission is not noticing that a service restart is needed for a patch to take effect. Think about httpd patches. Change control monitoring systems that look at files on disk, like Tripwire, will happily report that the system is fully patched, but if the httpd service was not restarted then it is still vulnerable. From a security monitoring point of view, consider rebooting Linux systems each month as well. That will ensure that any missed service restarts get handled and will help you detect forgotten systems. Unless you do it via a cron job, of course - then it could still be a forgotten system.
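
One way to spot that situation without a reboot is to look for processes that still map shared libraries which have since been deleted or replaced on disk, which is roughly what tools such as needrestart and checkrestart do. A minimal, Linux-only sketch of the idea (run it as root for full coverage, and treat it as illustration rather than a replacement for those tools):

import glob

def processes_with_deleted_libraries():
    # Scan /proc/<pid>/maps for shared libraries that have been deleted or
    # replaced on disk but are still mapped by a running process - a sign
    # that the process was never restarted after the file was updated.
    flagged = {}
    for maps_path in glob.glob("/proc/[0-9]*/maps"):
        pid = maps_path.split("/")[2]
        try:
            with open(maps_path) as f:
                for line in f:
                    fields = line.split(None, 5)
                    if len(fields) < 6:
                        continue
                    path = fields[5].strip()
                    if path.endswith("(deleted)") and ".so" in path:
                        flagged.setdefault(pid, set()).add(path)
        except OSError:
            continue  # process exited, or insufficient privileges
    return flagged

for pid, libs in sorted(processes_with_deleted_libraries().items(), key=lambda x: int(x[0])):
    try:
        with open(f"/proc/{pid}/comm") as f:
            name = f.read().strip()
    except OSError:
        name = "?"
    print(f"PID {pid} ({name}) still maps deleted libraries:")
    for lib in sorted(libs):
        print(f"    {lib}")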

Thanks for your comments. I agree with you that without an intentional change control program, it is exceptionally difficult to detect and respond to changes that occur.

Thanks for supporting the ISC!
Russell

Several great examples of how neglecting the basics, such as an up-to-date inventory of systems, can lead to compromise. My favorite was "No one who worked there even knew the system existed". Sad, true, and perhaps a wake-up call to many of us who defend networks.

Thanks for supporting the ISC!
Russell

No problem. Trust me, you folks support me far more than I could ever support you. You're my home page at work.

In thinking about it, perhaps a good column would be to request "Lessons learned from pen tests" from either side, the testers or the tested. It's important that we learn from the mistakes of others before they happen to us.

Lessons from pen tests would be great!
