On the importance of patching fast


Every month we create an overview of the patches Microsoft releases on Black Tuesday. Over the years we have learned that our readers value our view on which patches are more urgent than others, mainly because they have been burned before by patches that broke other things.

While I create many of those overviews with invaluable behind-the-scenes help from the rest of the handlers, the cycle of delayed patching we collectively implement keeps me concerned: it might very well be just not fast enough. Personally, I think we need to make our re-testing of patches (which the vendor has already tested) far more lean and mean.

Especially since the monthly feedback we get about Microsoft patches causing trouble has dwindled to a tiny number of really minor issues, I feel we have helped build an overly heavy process in many organizations, one that results in patches being deployed rather slowly. Perhaps too slowly; see the cautionary tale below.


PHPList is an open-source newsletter manager written in PHP. On January 29th, 2009, the project posted a software update: "[The update] fixes a local file include vulnerability. This vulnerability allows attackers to display the contents of files on the server, which can aid them to gain unauthorised access".

They also included a one-line workaround for those who could not patch fast enough.
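For readers less familiar with this vulnerability class: a local file include bug arises when user input is used to build a file path without validation. Here is a minimal, hypothetical sketch in Python (illustrative only, not PHPList's actual code; the function name and directory layout are made up):

```python
import os

def load_template(base_dir: str, name: str) -> str:
    """Read a file under base_dir, refusing paths that escape it.

    Without the check below, an attacker-supplied name such as
    "../../etc/passwd" would let them read arbitrary server files,
    which is the essence of a local file include / path traversal bug.
    """
    base = os.path.realpath(base_dir)
    full = os.path.realpath(os.path.join(base, name))
    if full != base and not full.startswith(base + os.sep):
        raise ValueError("refusing path outside %s: %r" % (base, name))
    with open(full) as f:
        return f.read()
```

In PHP applications the equivalent fix typically whitelists allowed file names or strips directory components from the input before building the path.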


phpBB is open-source bulletin board software. It is also written in PHP, but product-wise the relation with PHPList ends there. The www.phpBB.com server, however, had the PHPList software installed, and on February 1st, 2009, merely three days later, it was hit by an attack against PHPList.

The attack was not only successful: the attackers got hold of the list of email addresses on the phpBB announcement list and the encrypted passwords of all users of the phpBB forum on phpBB.com, and published them.

While the phpBB software itself was not the path the attackers took onto the server, the impact falls on all users of phpBB.com's forum and mailing lists, many of whom are administrators of phpBB installations. Let's hope they do not use the same login/password combination elsewhere.

Learn lessons

We can either learn by falling ourselves and standing up again, or we can try to be a bit smarter and also learn from what made others fall.

How long would your organization have taken to roll out a patch released on a Thursday? Would it have been implemented on all servers well before Sunday?

Are we ready to patch in less than three days, even when that window includes a weekend? If not, we might need to accelerate things:

  • How do we find out that patches are available? Make sure we're warned for all software we use!
  • How do we test (if we test at all) and validate a patch before implementing it in production? Even on a weekend?
  • How do we choose between the turn-around time of a workaround vs. that of a full patch?

The odds in this game are stacked against us: the attacker only has to find a single hole, while it is our job to fix them all. Moreover, the reputation of our respective organizations depends on our success at staying safe and ahead of the attackers.

Swa Frantzen -- Section 66


Feb 3rd 2009
A suite of automated tests is good practice in software development, but these do not always ship with a release build (especially with proprietary software). If they did, users of the software could run them after patching as a safeguard against newly-introduced problems that the developer missed. An auto-update tool could run them automatically after patching and, if the tests failed, roll back the patch and show warnings, perhaps providing the vendor with useful debugging info.

Perhaps some organisations could benefit from writing their own suite of automated tests for their own environment; in the case of GUI apps, maybe even 'screen-driving' (controlling the mouse/keyboard to test functionality) and then checking the final output file or screen display, ensuring all important functionality works as it needs to. These could be run after applying new patches to give the administrator some reassurance that the software still works, and hopefully to pick up on any issues before site-wide deployment.
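The idea above can be sketched as a tiny harness that runs named checks after a patch is applied and reports failures, so that an operator or auto-update tool can decide whether to roll back. This is a hypothetical sketch, not part of any particular product:

```python
def run_smoke_tests(checks):
    """Run each (name, callable) post-patch check.

    Returns (all_passed, failures), where failures is a list of
    (name, reason) pairs. A caller such as an auto-update tool could
    roll back the patch and alert the admin when all_passed is False.
    """
    failures = []
    for name, check in checks:
        try:
            if not check():
                failures.append((name, "check returned a falsy result"))
        except Exception as exc:  # a crashing check counts as a failure
            failures.append((name, repr(exc)))
    return (not failures, failures)
```

Checks can be anything callable: an HTTP request against a staging server, parsing a config file, or a screen-driving script that verifies a GUI workflow and returns True on success.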
Steven C.

Another important lesson to learn from this incident: do not EVER reuse a password on two systems that do not belong to the same security boundary, in particular on web-based systems.

If possible, use a different password everywhere and manage them through password management tools.
Steven C.
Or better still, use an algorithm for your passwords; then you don't need a password manager, and every site you log onto gets a different password (plus a generic password reminder, which in my case reminds me how to build the algorithm, something like "where am I?").

