Defensible network architecture

Published: 2015-01-05
Last Updated: 2015-01-05 02:52:09 UTC
by Rick Wanner (Version: 1)

For nearly 20 years, since Zwicky, Cooper and Chapman first wrote about firewalls, the firewall has been the primary defense mechanism of nearly every entity attached to the Internet. While perimeter protection is still important in the modern enterprise, the nature of Internet business has changed vastly, and the crunchy-perimeter-with-a-squishy-inside approach has long since become outdated. You can't deny what you must permit, and the primary attack vectors today appear to be email and browser exploits: two aspects of your business model that you cannot do without, and which can give the bad guys a foothold inside your perimeter protections.

As the Sony, Target, Home Depot, and many other breaches have shown, once the bad guys are in the network they are content to dig in, explore, and exfiltrate large amounts of data, often going undetected for months. What is needed is a security architecture that focuses on protecting data and detecting anomalies; in short, one that results in a network capable of defending itself from the bad guys.

Richard Bejtlich introduced the concept of a defensible network architecture over 10 years ago in his books. The concepts are even more important today and are gaining traction, but they have not yet reached widespread adoption in the security industry.

What does a defensible network architecture look like today? In my opinion, these are the minimum fundamentals to aim for in a modern defensible security design:

Segregation

The fact is that most enterprise networks are very flat and provide little resistance once the network perimeter is breached. Desktops are the most likely ingress vector for malware, yet most organizations' desktop LANs are equally flat, and desktops often have virtually unimpeded access to the entire network. Creating segregation between the desktop LAN and the critical data stored on servers is a huge step toward impeding a breach. Desktops do not normally need to communicate with other desktops, so the first step is to segregate desktops from each other to limit desktop reconnaissance and worm-style propagation. Second, the desktop LAN should be treated as a hostile network and should only be permitted access to the minimum data required to do business.
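
As a rough illustration of the "desktops don't talk to desktops" rule, here is a minimal Python sketch that emits iptables-style host-firewall rules. The addresses, ports, and server roles are made-up examples; in practice the same effect comes from private VLANs or a centrally managed host firewall.

    # Sketch: emit host-firewall rules isolating a desktop from its peers.
    # All addresses and ports below are hypothetical examples.
    DESKTOP_SUBNET = "10.10.0.0/16"       # the flat desktop LAN
    ALLOWED_SERVERS = [
        ("10.20.1.5", 443),    # intranet web server
        ("10.20.1.10", 445),   # file server
    ]

    rules = [f"-A OUTPUT -d {host} -p tcp --dport {port} -j ACCEPT"
             for host, port in ALLOWED_SERVERS]
    # Default-deny everything aimed at, or coming from, fellow desktops.
    rules.append(f"-A OUTPUT -d {DESKTOP_SUBNET} -j DROP")
    rules.append(f"-A INPUT -s {DESKTOP_SUBNET} -j DROP")
    print("\n".join(rules))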

Servers should be segregated from each other, and from the desktop LAN, with firewalls. Access to the servers must be limited to communication on the ports required to deliver each server's business functions. This applies to desktops as well. Only the chosen few who require administrative access to perform their responsibilities should have it. Why a firewall for segregation, rather than VLANs or some other method? The firewall gives you detailed logging, which can be used as an audit trail for incident response purposes.
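
That audit trail is only useful if someone looks at it. The sketch below, which assumes a simplified, hypothetical deny-log format, shows the kind of summarization an incident responder might start with: repeated denies from one desktop toward server ports are a classic sign of reconnaissance.

    import re
    from collections import Counter

    # Hypothetical simplified log line: "DENY tcp 10.10.3.7:51234 -> 10.20.1.10:3389"
    LINE = re.compile(r"DENY (\w+) ([\d.]+):\d+ -> ([\d.]+):(\d+)")

    def top_denied(log_lines, n=10):
        """Count denied (src, dst, dport) tuples; repeat offenders float to the top."""
        hits = Counter()
        for line in log_lines:
            m = LINE.search(line)
            if m:
                _proto, src, dst, dport = m.groups()
                hits[(src, dst, dport)] += 1
        return hits.most_common(n)

    sample = ["DENY tcp 10.10.3.7:51234 -> 10.20.1.10:3389",
              "DENY tcp 10.10.3.7:51240 -> 10.20.1.10:22"]
    for (src, dst, dport), count in top_denied(sample):
        print(f"{src} -> {dst}:{dport} blocked {count}x")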

Instrumentation

Dr. Eric Cole has always evangelized that "Prevention is ideal, detection is a must". Prevention is a laudable goal, but the fact is that prevention, in most cases, is hard. Aim for prevention where you can, but assume that whatever preventive controls you deploy are going to be defeated or fail. When the controls do fail, would you notice? Sony failed to detect terabytes of data leaving their network. Would you notice it leaving yours? It is essential to instrument your network so that detection is possible. There are two essential elements to minimal detection. The first is properly installed and managed intrusion detection, preferably at both the network and the host level. But even more important is network instrumentation that will permit you to detect network anomalies. The goal is to notice deviations from the norm, and to notice those deviations it is important to understand what the network baseline is. NetFlow data is sufficient here, but there are many network products that will provide instrumentation that can be used for alerting and monitoring of the network, and that will be critical in the investigation of a breach.
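
As a toy illustration of baselining with flow data: the sketch below learns each host's typical daily outbound volume and flags days that deviate wildly, the kind of signal that terabytes of exfiltration should trip. The records and thresholds are invented for the example; a real deployment would feed this from a NetFlow collector.

    from statistics import mean, stdev

    # host -> list of daily outbound byte counts (hypothetical baseline data)
    history = {"10.20.1.10": [2.1e9, 1.9e9, 2.3e9, 2.0e9]}

    def is_anomalous(host, today_bytes, min_days=3, sigmas=3):
        """Flag a host whose outbound volume deviates far from its own baseline."""
        past = history.get(host, [])
        if len(past) < min_days:
            return False                # not enough history to baseline yet
        mu, sd = mean(past), stdev(past)
        return today_bytes > mu + sigmas * sd

    print(is_anomalous("10.20.1.10", 9.5e11))   # ~1 TB outbound in a day: True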

Application whitelisting

Let's get it out of the way: signature-based anti-virus may not be dead, but it is on life support. The model of protecting hosts based on known malware threats is badly broken in this era of ever-changing malware. Proactive patching is definitely a step in the right direction as far as plugging known vulnerabilities, but user behaviour is still the weak link in the malware chain, and removing users from the picture is not a practical solution. The best approach is to apply the same philosophy to the host as to network access: restrict access to the minimum required to do business. Deny all behaviours on the host except those required by the applications to do business. This is a huge shift in host architecture that will probably be met with a lot of resistance from sysadmins and application owners, but it is one of the few approaches that offers any practical likelihood of successful host defense. Application whitelisting has been available since early last decade, but it is only in the last few years that these products have matured to the point of being manageable.
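
At its core, whitelisting inverts the anti-virus model: instead of enumerating known-bad, only known-good is allowed to execute. The stripped-down, hash-based sketch below conveys the idea; the paths are examples, and real products add code signing, update handling, and kernel-level enforcement.

    import hashlib
    from pathlib import Path

    def sha256(path):
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def build_whitelist(approved_dir):
        """Hash every executable in an admin-controlled directory tree."""
        return {sha256(p) for p in Path(approved_dir).rglob("*.exe")}

    def may_run(path, whitelist):
        return sha256(path) in whitelist

    # whitelist = build_whitelist(r"C:\Program Files")
    # may_run(r"C:\Users\bob\Downloads\invoice.pdf.exe", whitelist)  # False -> blocked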

This is my approach.  What suggestions do you have for creating a defensible network architecture?

-- Rick Wanner MSISE - rwanner at isc dot sans dot edu - http://namedeplume.blogspot.com/ - Twitter:namedeplume (Protected)


Comments

Stop treating patching and patch administration as a junior-level task. As ARAMCO proved, your patch deployment system, such as SCCM, can be used against you to deploy malicious software in an approved manner.
Many thanks for your thoughts.
How do you plan to segregate your desktops when IPv6 basically connects everything to everything? I work mainly with small networks, so one subnet; with IPv4 we could give everything a fixed IP address and lock things down pretty well. But with IPv6 link-local I am at a loss as to how to segregate.
IPv6 isn't really that different. You do have a couple of options.

First of all, use DHCPv6, and don't use router advertisements to hand out addresses. With that, you will still have link-local addresses that are MAC-address derived, but they can only be used internally. Your external addresses will be assigned via DHCPv6, and you have all the options you had in IPv4. For example, you can limit the address range to a small "pool", or assign static addresses (just remember that you need to use the DUID, not the MAC address, to identify hosts).

Secondly, on your firewall, only allow packets to exit the network if the source IP is within the DHCP pool's range.

For a more advanced setup, don't use global IPv6 addresses at all in your network. Instead, use Unique Local Addresses (fd00::/8). They essentially work like RFC 1918 addresses in IPv4. Now you can use proxies to connect to the outside, or, dare I say it, NAT. IPv6 routers are starting to implement NAT, sometimes disguised as "prefix translation". But be aware that it works a bit differently than what you are likely doing in IPv4, and is more like a 1-to-1 NAT setup than the 1-to-many setup you are likely using in IPv4. So firewall rules are still important.
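
The egress check described above is straightforward to express. This Python fragment, using placeholder prefixes (the 2001:db8::/32 documentation range stands in for a real DHCPv6 pool), shows the membership test such a firewall rule is effectively doing:

    import ipaddress

    DHCP_POOL = ipaddress.ip_network("2001:db8:1::/64")  # placeholder DHCPv6 pool
    ULA = ipaddress.ip_network("fd00::/8")               # unique local addresses

    def may_exit(src):
        """Permit egress only for DHCPv6-assigned addresses; ULAs stay inside."""
        addr = ipaddress.ip_address(src)
        return addr in DHCP_POOL and addr not in ULA

    print(may_exit("2001:db8:1::42"))  # True: inside the managed pool
    print(may_exit("fd12:3456::7"))    # False: unique local, never leaves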
I absolutely agree that the desktop LAN should be segregated and treated as hostile, but do you think the effort required to segregate each desktop is risk-justified in most cases? I would have thought that would require a big admin overhead.
Also, re. application whitelisting: as you say, this is a huge challenge for desktop support teams. The only time I've seen it implemented is where it is introduced as part of a migration to virtual desktops. Have you come across anyone retrofitting this to a 'traditional' desktop environment?
[quote=comment#32911]I absolutely agree that the desktop LAN should be segregated and treated as hostile, but do you think the effort required to segregate each desktop is risk-justified in most cases? I would have thought that would require a big admin overhead.
[/quote]

One of the things I was going to suggest (falling under "instrumentation") was setting up some honeypots in user space as well as in your server subnets. A carefully set up honeypot should never have any traffic sent to or from it, except for some obvious exceptions (like monitoring the honeypot itself, syslogging, etc.). So any traffic the honeypot sees is "suspicious" to some degree.
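
In that spirit, even a few lines of Python make a workable canary listener. The port choice here is arbitrary (a tempting-looking service like RDP works well), and a real deployment would send the alert somewhere more useful than stdout:

    import socket
    from datetime import datetime

    HOST, PORT = "0.0.0.0", 3389   # arbitrary bait port; nothing legitimate should connect

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((HOST, PORT))
        s.listen()
        while True:
            conn, addr = s.accept()
            # Any connection at all is suspicious on a segregated LAN: log and alert.
            print(f"{datetime.now().isoformat()} canary touched by {addr[0]}:{addr[1]}")
            conn.close()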

Also, I'd stress that all the NIDS sensors should be monitoring traffic that crosses any security boundary, not just ingress/egress traffic. Otherwise segregating users from servers is only an attempt at "prevention" and not "detection". At $DAYJOB$ we monitor perimeter traffic, traffic crossing office/geographic boundaries and traffic crossing any security threshold (in/out of a DMZ, between user space and server space, etc). And don't forget that no one vendor detects everything. It's a good idea to have more than one tool watching the traffic.

Lastly, I'd also recommend having the NIDS log certain classes of traffic to separate databases. For instance, a snort sensor monitoring traffic between the interwobble and the DMZ segments is going to log LOTS of noisy portscans/probes/etc. This is useful but noisy and all that logged data could obscure that one suspicious PSEXEC that snort waves a red flag about between one server and another or between a desktop and a server - and THAT'S far more important to detect (IMHO) than a jillion or so probes of your externally facing websites. :-)
Regarding application whitelisting, Software Restriction Policy has worked pretty well for me. I have a writeup on it at mechbgon dot com/srp which also links to further info by none other than the NSA.

The traditional objection to application whitelisting seems to be 'but how would we keep the whitelist up to date?', but with SRP used as I describe, it doesn't apply to the OS directory or the Program Files directory. Anything you place in those directories using your Admin rights or your software deployment system is therefore whitelisted with no further ado. "User space" is blacklisted, so the user could download invoice.pdf.exe and try to run it from their profile, but would get shot down. Ditto for an exploit that leverages their privilege level.
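
The logic of those path rules is simple enough to sketch. This fragment mirrors the allow-admin-controlled-paths, deny-user-space default described above; the directories are hard-coded for illustration, and real SRP is of course enforced by Windows, not by a script:

    from pathlib import PureWindowsPath

    # Admin-controlled roots: whatever lands here was installed with admin
    # rights or by the deployment system, so it is implicitly trusted.
    ALLOWED_ROOTS = [PureWindowsPath(r"C:\Windows"),
                     PureWindowsPath(r"C:\Program Files")]

    def srp_allows(exe_path):
        """Mimic a default-deny SRP path policy: allow only under trusted roots."""
        p = PureWindowsPath(exe_path)
        return any(root in p.parents for root in ALLOWED_ROOTS)

    print(srp_allows(r"C:\Program Files\App\app.exe"))            # True
    print(srp_allows(r"C:\Users\bob\Downloads\invoice.pdf.exe"))  # False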

Anyway, works well for me.
