Last Updated: 2016-07-10 16:10:31 UTC
by Kevin Liston (Version: 1)
While at SANSFire a few weeks ago, I had the good fortune to sit in on Robert M. Lee as he taught ICS515: ICS Active Defense and Incident Response (https://www.sans.org/course/industrial-control-system-active-defense-and-incident-response). I'm not responsible for defending a power plant's network, nor do I have a manufacturing floor in my enterprise. I've also not worked with Modbus outside of CyberCity (https://www.sans.org/netwars/cybercity). However, like many of you, I have certain business-critical systems running on legacy hardware or requiring now-unsupported operating systems. These are the systems that you can't patch, and that, even when they're compromised, you can't immediately shut down. How do you secure networks under such constraints?
Architecture and Isolation
"Why are these even connected to the Internet to begin with?" many would ask (see the last entry, "Pentesters (and Attackers) Love Internet Connected Security Cameras!" (https://isc.sans.edu/forums/diary/Pentesters+and+Attackers+Love+Internet+Connected+Security+Cameras/21231/) for more examples of the problem.)
That is obviously step one: don't connect critical systems directly to the internet. But what about your internal, flat network? If you find yourself responsible for such a situation, how do you go about rearchitecting it? Start small by isolating the critical systems from the general network. That MRI machine that you're not allowed to patch probably shouldn't be visible from everywhere in your network; it also probably shouldn't be able to reach everywhere in your network, or perhaps even the internet (it's not like it's reaching out to get regular updates or anything.)
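The isolation idea can be expressed as a default-deny policy in data form. The sketch below is purely illustrative: the network ranges, the port, and the "radiology workstations" rule are hypothetical, and in practice you would enforce this on a firewall rather than in application code.

```python
# Hypothetical sketch: a default-deny reachability policy for a critical
# segment, expressed as data. All CIDRs and ports are illustrative only.
import ipaddress

CRITICAL_NET = ipaddress.ip_network("10.20.0.0/24")   # e.g. the MRI VLAN

# Only these (source network, destination port) pairs may reach the segment.
ALLOWED_INBOUND = [
    (ipaddress.ip_network("10.1.5.0/28"), 443),   # radiology workstations
]

def inbound_allowed(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    """Default deny: permit a flow into the critical segment only if it
    matches an explicit allow rule."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    if dst not in CRITICAL_NET:
        return True  # this policy only governs traffic into the critical segment
    return any(src in net and dst_port == port for net, port in ALLOWED_INBOUND)

print(inbound_allowed("10.1.5.3", "10.20.0.10", 443))   # matches the allow rule
print(inbound_allowed("10.99.0.7", "10.20.0.10", 443))  # denied: unknown source
```

Keeping the policy as a small, reviewable list also makes it obvious when someone asks for an exception, which is half the battle with legacy gear.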
If you have a poorly-architected network, you probably have a poorly instrumented one. Kill two birds with one box by dropping in a system between your general network and your critical systems; it will act as both firewall and sensor. Your critical systems won't be moving as much traffic as your perimeter and general network, so take the opportunity to collect full packet captures, or just run Bro to extract certain artifacts and keep NetFlow or IPFIX data. This will come in handy later.
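Once that choke point is collecting flow data, even a trivial summary is useful. A minimal sketch, assuming a simplified flow-record tuple (the field layout here is an assumption, not a real NetFlow/IPFIX schema):

```python
# Hypothetical sketch: summarizing flow records collected at the choke point
# between the general network and the critical segment. The record format
# and sample data are illustrative only.
from collections import Counter

flows = [  # (src, dst, dst_port, bytes)
    ("10.1.5.3",  "10.20.0.10", 443, 120_000),
    ("10.1.5.3",  "10.20.0.10", 443,  80_000),
    ("10.99.0.7", "10.20.0.10",  22,      60),
]

talkers = Counter()
for src, dst, port, nbytes in flows:
    talkers[(src, dst, port)] += nbytes

# Top talkers into the critical segment; a rarely-seen (src, dst, port)
# tuple here is worth a second look.
for (src, dst, port), nbytes in talkers.most_common():
    print(f"{src} -> {dst}:{port}  {nbytes} bytes")
```

Because the critical segment's traffic is low-volume and repetitive, a one-off SSH flow like the third record tends to stand out immediately in this kind of report.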
Critical Systems do Critical Things, and only Critical Things
Just as you limit access to and from the critical systems, lock down what they can be used for. These systems shouldn't have general internet access; if they require access to a few specific destinations, run a proxy for them to constrain that access. They also shouldn't be receiving email (although they might be sending it.) It's simply the principle of least privilege, re-tuned a little.
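The constrained-proxy idea boils down to an explicit destination allowlist. A minimal sketch, with placeholder hostnames; a real deployment would enforce this in the proxy's own ACLs rather than in application code:

```python
# Hypothetical sketch: default-deny destination allowlist for a critical
# system's outbound access. Hostnames are placeholders.
ALLOWED_DESTINATIONS = {
    "updates.vendor.example",   # the one vendor site the system may reach
    "license.vendor.example",
}

def proxy_permits(hostname: str) -> bool:
    """Default deny: only explicitly listed destinations pass the proxy."""
    return hostname.lower() in ALLOWED_DESTINATIONS

print(proxy_permits("updates.vendor.example"))  # permitted
print(proxy_permits("example.com"))             # denied
```

The deny log from such a proxy is itself a detection feed: a critical system asking for a destination outside its short list is exactly the kind of anomaly you want paged about.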
You've probably run into some resistance trying to get application whitelisting deployed out to your general users. These critical systems are perfect places to get started with the technique. If any systems in your environment have formal change-control policies, it's likely these: they update rarely, and you want to be alerted to any changes made to them as soon as possible.
These should probably be the most-instrumented systems on your network. New services shouldn't be appearing and disappearing, and the files on the system should remain relatively static, so something like Tripwire or samhain (http://www.la-samhna.de/samhain/) won't generate a lot of alerts; when it does, they're unlikely to be false alarms.
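The core of what those tools do is small enough to sketch: hash everything into a baseline, re-scan later, and report differences. This is a toy illustration of the idea, not a substitute for Tripwire or samhain (which also protect their own baselines against tampering):

```python
# Hypothetical sketch of file-integrity monitoring: snapshot a directory
# tree's SHA-256 digests, then diff two snapshots to find changes.
import hashlib
import os

def snapshot(root: str) -> dict:
    """Map each file path under root to its SHA-256 digest."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

def diff(baseline: dict, current: dict) -> dict:
    """Return added, removed, and modified paths between two snapshots."""
    return {
        "added":    sorted(set(current) - set(baseline)),
        "removed":  sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in baseline.keys() & current.keys()
                           if baseline[p] != current[p]),
    }
```

On a static critical system, a non-empty `diff()` between the approved baseline and today's scan is either a change-control event you already knew about or an incident.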
This is also a good place to start testing anomaly detection tools. These systems are tied closely to your business, so they'll likely mimic the activity cycles of your business. Changes to their regular cycles should be detected and scrutinized.
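A simple way to model those regular cycles is to compare each new observation against history for the same time slot. The sketch below uses a basic z-score test; the counts and the 3-sigma threshold are illustrative assumptions:

```python
# Hypothetical sketch: flag an hour whose activity deviates sharply from
# the historical baseline for that same time slot.
import statistics

# Baseline: connection counts observed in this hour over previous weeks.
baseline = [102, 98, 110, 105, 99, 101, 104]

def is_anomalous(observed: int, history: list, threshold: float = 3.0) -> bool:
    """Simple z-score test: is the observed count more than `threshold`
    standard deviations from the historical mean for this slot?"""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(observed - mean) > threshold * stdev

print(is_anomalous(107, baseline))  # within the normal weekly cycle
print(is_anomalous(450, baseline))  # worth scrutinizing
```

Real tools use far more robust statistics, but on a critical segment with steady, business-driven rhythms, even something this crude can surface the change worth investigating.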
Other Things I Learned...
There are going to be times when you just can't immediately follow your IR process: either the system is too unstable to support your forensic agents, or a business process will trump your urge to clean up a malware infection for a period of time. This class gave me a framework for handling those decisions, as well as more options than simply "nuke it from orbit."
It also gave me more context around how Indicators of Compromise (IoCs) are created and used in the ICS community. In my circles, an IoC like "DNS traffic to 184.108.40.206" is usually scoffed at. While that might be common behavior in a general user population, you probably shouldn't see it coming from your city's traffic-control network.
Why Should You Care?
On the surface, the ICS environment looks untenable: you can rarely patch, uptime is paramount, security is an afterthought in software development. However, it is defensible. If it can be done in this environment, it can be done in yours as well.
If you don't think you've got ICS equipment in your environment, ask yourself a couple of questions. "Do I have a building that's four or more stories tall?" If yes, you likely have some sort of building management solution in place. Look for BACnet traffic (UDP/47808.)
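Beyond passively watching for UDP/47808, you can actively look for BACnet gear with a Who-Is broadcast. The 12-byte frame below follows my reading of the BACnet/IP format (BVLC header, NPDU, Who-Is APDU); treat it as an assumption to verify against a packet capture, and only send it on networks you own — building automation gear can react badly to unexpected traffic.

```python
# Hypothetical sketch: broadcast a BACnet/IP Who-Is and collect responders.
# The frame layout is my reconstruction of the spec; verify before relying
# on it, and only run on networks you are authorized to probe.
import socket

BACNET_PORT = 47808  # 0xBAC0

WHO_IS = bytes([
    0x81, 0x0B, 0x00, 0x0C,              # BVLC: BACnet/IP, Original-Broadcast-NPDU, length 12
    0x01, 0x20, 0xFF, 0xFF, 0x00, 0xFF,  # NPDU: version 1, global broadcast, hop count 255
    0x10, 0x08,                          # APDU: Unconfirmed-Request, service choice Who-Is
])

def discover(timeout: float = 3.0) -> list:
    """Broadcast Who-Is and return the addresses that answer (I-Am)."""
    responders = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.settimeout(timeout)
        s.sendto(WHO_IS, ("255.255.255.255", BACNET_PORT))
        try:
            while True:
                _data, addr = s.recvfrom(1500)
                responders.append(addr)
        except socket.timeout:
            pass
    return responders

if __name__ == "__main__":
    for addr in discover():
        print("BACnet responder:", addr)
```

Anything that answers is a device someone in your organization owns, whether or not the security team knew about it.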
"Do I have a datacenter?" If yes, then you likely have industrial Uninterruptible Power Supply units, air handlers, and coolers. Say, who manages those for you? Do they VPN in to do that? Do the units have cellular cards in them for remote management?