"Too Important to Patch" - Wait? What?

I recently had a routine "can you help our business partner" type call from a client. Their business partner could receive email from them, but could not send email to them.

After a bit of digging in the SMTP headers of a failed message, it turned out that the business partner was running a very old version of qmail, which has a problem with ESMTP and DNS responses larger than 512 bytes. My client (the destination for the email) had recently moved to an email scanning service, so the total response to an MX record request was well over 1.5 KB.
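If you want to check whether a domain's MX answer trips this bug, a minimal sketch along these lines (using Python's dnspython library; the domain and resolver address below are placeholders) will show the response size and whether it gets truncated at the classic 512-byte limit:

    # Probe an MX lookup the way an old, EDNS-unaware resolver would see it.
    # Assumes dnspython is installed; domain and resolver are placeholders.
    import dns.flags
    import dns.message
    import dns.query

    domain = "example.com"   # stand-in for the client's mail domain
    resolver = "8.8.8.8"     # stand-in for any recursive resolver

    # Plain UDP with EDNS disabled: answers over 512 bytes come back truncated.
    query = dns.message.make_query(domain, "MX", use_edns=False)
    response = dns.query.udp(query, resolver, timeout=5)

    print(f"MX response for {domain}: {len(response.to_wire())} bytes")
    if response.flags & dns.flags.TC:
        print("TC (truncated) bit set - the full answer exceeds 512 bytes,")
        print("which is exactly the case an unpatched qmail cannot handle.")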

So far, not so exciting, you say - patch the server and be done with it! So why am I writing this up on isc.sans.edu?

This is where it gets interesting. I called the business partner, and their verbatim response was "Gee, I don't know. Applying that patch will involve taking the mail server down, our CEO won't approve that. Is there some other way to do this?"

Wait, what? Did I hear that right? Let me check my watch - what century is this again? This is a patch from 2007, for goodness' sake! I can see needing to follow a change control procedure and schedule an outage, maybe for after-hours, but they are an application development shop, not the Department of Defense! If they're running a mail system that hasn't been patched in 4 years, chances are that someone else already owns them, and they've got bigger problems than just this.

Anyway, after a frank and honest (and tactful, though that part was a bit more difficult) discussion, they did apply the needed patch, along with a truckload of other system updates that had been delayed since forever.

I've encountered a few situations where it makes some sense for system admins to defer patching for extended periods of time:

Servers that support military personnel in active operations are often mandated by policy as "frozen". In our current global environment, these freeze periods can extend into months and years.

Servers that support long-range space exploration missions will often end up running operating systems that are no longer supported, on hardware that was end-of-lifed years ago, or on hardware or OSes that were one-shot custom efforts. In cases like this, the hardware is generally air-gapped or otherwise isolated from sources of attack.

Some servers in support-challenged situations might also be "frozen" for specified periods of time - if I remember correctly, the servers in some of the Antarctic missions (really, no pun intended!) are in this category. (If I'm mistaken on this example - I know the sysadmin for those systems is a reader - please correct me!)

 

So the question I have for our readers is: What situations or applications have you seen that might defer patches and updates for extended periods of time? Did you consider those reasons or policies to be legitimate? Did you come up with a compromise or workaround to get patches applied, or did you have to follow policy and not apply updates? Did this end in a system compromise, and if so, did the policy protect the system administrator, or did they end up taking the blame anyway?

I'm really looking forward to feedback from our readers on this, please use the contact form to let us know what you've seen!

 

===============

Rob VandenBrink Metafore

> What situations or applications have you seen that might defer patches and updates for extended periods of time?
- Ignorance and/or arrogance (It's never happened to us before, so it won't happen now).
> Did you consider those reasons or policies to be legitimate?
No.
> Did you come up with a compromise or workaround to get patches applied, or did you have to follow policy and not apply updates?
- Followed their decision/policy. It's not my network - they are the owners.
> Did this end up with a system compromise, and if so, did the policy protect the system administrator, or did they end up taking the blame anyway?
- No - they lucked out. But you know who the fall guy is...
.
Jack

160 Posts
If the uptime of a mail server is that important, it sounds like they need a second mail exchanger and/or a cluster for mailbox storage and access. Sure, this is a more complicated setup, but as a server admin I'd rather spend some extra time on design and setup to make maintenance more convenient for me and non-disruptive for those who rely on the services provided.
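As a minimal sketch of why a second MX helps (assuming dnspython; the domain is a placeholder), this is the failover order a sending MTA derives from the MX records:

    # List MX hosts in the order senders will try them.
    # Assumes dnspython; "example.com" stands in for a real domain.
    import dns.resolver

    answers = dns.resolver.resolve("example.com", "MX")
    for preference, host in sorted((r.preference, str(r.exchange)) for r in answers):
        print(f"try {host} (preference {preference})")

    # Senders try the lowest preference first and fall back to the next,
    # so the primary can be taken offline for patching without mail bouncing.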
James

12 Posts
I have customers using radio dispatch consoles that are still on WinXP SP1 with no antivirus.

However, these consoles are not permitted to see the world, and no one is allowed access to the boxes besides me. So while I worry that someone will come along and plug an infected USB stick into the box, I have to hope that the rules and locks are enough...
chpalmer

2 Posts
On a second note...

My little company has a backup email server running HMail at another location to avoid losing incoming email... Would this option not suffice for these guys for an overnight cutover?? Yikes!
chpalmer

2 Posts
We're currently going through a move of our entire IT infrastructure from one provider to another, and there has been a change freeze instituted on all servers for the duration of the move (6 months).
chpalmer
2 Posts
Assuming this mail server is too important to take down for even a few minutes, they should have various fallback solutions. What about patching those fallbacks before patching the live system? Then the downtime is hopefully only a few seconds, and of course they also get time to test the patched fallback system... I've no sympathy for such minds...
Martin Scharm

2 Posts
To the original question... I think the rule of "if it ain't broke, don't fix it" applies to most orgs, sadly. The feedback I've gotten from peers in the industry is: why risk patching and rebooting something that is working without issue? It doesn't make sense to me (or to us security professionals), but it does to them.
Anonymous
Re: PC Tech referring to the "fall guy."

There is a reason for an email trail of you explaining the importance of the updates and what could happen if they are not applied. Keep all correspondence containing your explanation of why the patches are important and their "no" responses. This is CYA, so WHEN they do get hacked they cannot say it is your fault, and any attempt to do so could make them liable.
Lee

13 Posts
Medical systems.
Either due to FDA controls on the system/app OR the vendor's own very odd (read: broken) requirements.

Then there's the "exclude the following directories from virus scanning" requirements.
CBob

22 Posts
So in this day and age of patching Flash Player three times a week, patching is a bit different. The processes should be the same, but the fact is most small companies do not have the resources to keep up, especially if they were to follow proper SDLC/change control processes.

Unfortunately, patches are only as good as the software they fix, and they have unintended side effects like breaking applications. And as we know, it is all about the apps!

So scarce resources will be applied to the big problems, like how to allow the cool device of the day to work with Exchange. Issues like what version of qmail is running where, and whether or not it should be patched, will always get fifth billing or less on the priority scale... until something breaks.

So, in conclusion: testing patches properly takes time and resources, and failure to do so means that when some app breaks, it is a sin for which there may be no forgiveness. Instead, make the cool new toys work - anything that breaks doing that will be forgiven - and wait for the dull stuff to break on its own. That is the CMM level 1 patching process in brief.

Moving to a hygienic system management process involves discipline and management interest, two characteristics that are often hard to find in the chaotic world of systems administration, imho.
CBob
1 Posts
I have encountered a number of clients where one or more 'servers' that are part of the Phone System, either VoIP, or simply a configuration/control system, are not patched. The usual story is "The vendor won't support it if service packs or patches are applied".

One of these customers recently joined one such server to their domain, which let Conficker spread through their environment, since the worm could now impersonate the domain admin account used to join the server to the domain.

My usual recommendation with these types of servers is to segregate them from the rest of the network in order to reduce their exposure to network-borne malware, and also to protect the network from anything these servers might inadvertently pick up. Don't allow anything in or out except the one or two ports required to manage the system, and try to lock those down to as few systems as possible as well.
John

4 Posts
This is, sadly, not uncommon. There are some management teams who panic the moment anyone proposes changing anything in production (yes, even installing official patches) because they're terrified of downtime. There are teams of engineers who'll storm offices with torches and pitchforks whenever patching is brought up (if they're that afraid of security fixes, you have to wonder how fragile their code is). And yes, there are vendors who'll threaten to terminate your support contract if you patch the OS (I've dealt with a few such companies in the past). This is why it's a good idea to back up the trail of e-mail (with cryptographic signatures of messages, if applicable) to cold storage, just in case they try to pin failures (or compromises - that happens sometimes, too) on you.
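As a minimal sketch of that cold-storage idea (the file name is hypothetical, and a detached GPG signature over the archive would be stronger still), hashing the exported thread at least lets you prove later that it hasn't been altered:

    # Fingerprint an exported mailbox so its integrity can be verified later.
    # The mbox file name is a hypothetical example.
    import hashlib
    from datetime import datetime, timezone

    MBOX = "patch-refusal-thread.mbox"

    digest = hashlib.sha256()
    with open(MBOX, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)

    # Record this line somewhere the other party cannot touch.
    print(f"{digest.hexdigest()}  {MBOX}  archived {datetime.now(timezone.utc).isoformat()}")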

I do not consider these policies to be legitimate; they are all too often used to hide the incompetence of engineers or providers.

@John: re: VoIP vendors refusing support.. You dealt with Sylantro, too?
No Love.

37 Posts
This is a common theme I've run across many times. Our current environment is way behind on Java, simply because our ERP won't support a newer version. The vendor threatens to cut support, and everyone just continues to worry about it but does nothing when I report on another round of patches.
I have had some success at other companies by telling them to keep one machine unpatched just for support calls to the vendor. That machine, of course, is kept turned off and used only for support, not daily use.
Security is a strange world to work in.
RobM

14 Posts
This approach is commonplace in most organizations. There is always a subset of servers/workstations that run applications directly tied to revenue generation, and they will be the last systems to be patched. You can argue that these systems should be kept in tip-top shape since they are "money makers", but when IT security challenges managers who have C-level execs on speed dial, you are bound to have a disappointing day.
What it all comes down to is blame. I have found very few IT organizations and business groups that truly care about IT security. The only concern they have is "can I be blamed for this?" Patching, upgrades, feature releases, migrations, DRP/BCP - all of these events require change, which in turn will face opposition until the powers that be can ascertain who will be liable for the failure that follows the proposed change. I hate to say it, but I have approached managers in the past and assured them that another group would take the fall in order for patches to get installed.
I think most IT professionals realize that management and the user community would be OK with a "not broke, don't fix it" approach, because it keeps part of their professional lives predictable and lowers stress (until a compromise occurs). Patching breaks things. But I'd rather break something with a backout plan than try to fix an event that gave me no warning.
RobM
10 Posts
We have SCADA systems which are not patched. They are air-gapped, and remote access is via a KVM. VPN remote access to the KVM involves two firewalls, three different sets of credentials on two domains, and two-factor auth using a token [not an RSA one :)]
Anonymous
I generally evaluate every vulnerability in our applications/services and then determine if we are really at risk. There may be times when we are not, e.g. an app compiled from source has a vuln in a feature we did not enable at compile time. There may also be cases, as in the previous example, where a bug in the newer version of the software poses a greater risk to us than the older version we are running. The main thing is to stay on top of the software and stay diligent in fixing where needed, but not simply rush out to deploy new software if you are not affected.
Ed

1 Posts
I know of one large ERP/accounting system that checked the version of SQL Server upon start-up and would fail to run if the version didn't match what was hard-coded in the application. Needless to say, so much for patching SQL. Oh yeah, and one other thing (slightly OT): the VAR/integrator of said system couldn't believe that none of the users in our accounting dept. had local administrative rights - "but all of our other clients give local administrative rights to their users!"; insert foot in mouth, chew vigorously. After several frustrating days of Process Monitor meets "we are too lazy to code properly" ERP software, we got it working under a regular user.
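As a toy illustration of that anti-pattern (all names and version strings here are hypothetical), the difference between a hard-coded version match and a patch-tolerant minimum-version check is essentially one comparison:

    # Hypothetical sketch: exact-match version checks break on every patch,
    # while a minimum-version check tolerates service packs and hotfixes.
    EXPECTED = "10.0.1600.22"  # a made-up "blessed" SQL Server build

    def brittle_check(server_version: str) -> bool:
        # What the ERP apparently did: refuse anything but one exact build.
        return server_version == EXPECTED

    def tolerant_check(server_version: str) -> bool:
        # Accept the blessed build or anything newer.
        as_tuple = lambda v: tuple(int(p) for p in v.split("."))
        return as_tuple(server_version) >= as_tuple(EXPECTED)

    print(brittle_check("10.0.5500.0"))   # False - a patched server is "unsupported"
    print(tolerant_check("10.0.5500.0"))  # True  - the patch is tolerated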

As echoed earlier, the issue is "don't fix what ain't broke" - it's running and working quite nicely, so don't rock the boat. It might be the organization itself or some software package sold to them; it's all the same: either the IT dept. doesn't want to, or doesn't have the know-how, to sort out issues caused by a patch, or the developers are too lazy to write code that could be somewhat tolerant of underlying OS and application updates. As Joe H. put it so nicely, it's all about "the next cool thing"...
e.b.

16 Posts
0) On the 'no updates because the vendor retracts support' point: that is one argument for using open-source projects, as they tend to stay up to date thanks to community interest, because the maintainers are also users.

1) Agreed, it's sad how a lack of redundancy leaves sites with no room to implement updates or even deal with outages. The nightmare continues.

Far too often it is hard to sell the difference between [1 site/set_of_servers/set_of_lines] and [multiple sites/multiple_clusters/multiple_lines].

Though at least when the customer grows large/high-profile enough, the lost worker time and bad press will solve the problem... for the next techie at least. ;')
joco

8 Posts
It's true: in a perfect world, patches would not break other software, and systems would be kept up to date.

But the fact is, a patch to one piece of software can and does break others, even when programs are coded 'properly' (e.g. program 2 inadvertently relies on buggy behavior of a subroutine in program 1, and then program 1 gets patched... remember certain older Microsoft DLLs, anyone?). This will likely always be the case.

Given that, why must we rely on patching said software to mitigate security vulnerabilities? There must be a better way, one that doesn't rely on patching the software...
joco
5 Posts
I forgot to answer the question when I went on my little rant. I work in the space industry, and I can tell you that patching is not possible, no way, no how, on some of the key systems. You would not believe how vendors tell the G'ment that if they so much as look at the "appliance" wrong, they will not warranty it. Also, because sending a desktop support person to the ISS is pricey, we are VERY careful about what we patch.
joco
10 Posts
