It has been a month and a bit; how is your new patching program holding up?

Published: 2017-06-21
Last Updated: 2017-06-21 13:57:33 UTC
by Mark Hofman (Version: 1)
3 comment(s)

Last month's entertainment for many of us was of course the WannaCry/MS17-010 update. For some of you it was a relaxing time, just like any other month. Unfortunately for the rest of us it was a rather busy period trying to patch systems that in some cases had not been patched in months or even years. Others discovered that, whilst security teams had been asking "you want to open what port to the internet?", firewall rules had been approved allowing port 445 and in some cases even 139. Another group of users discovered that the firewall that used to be enabled on their laptop was no longer enabled whilst connected to the internet. Anyway, that was last month. On the back of it we all made improvements to our vulnerability management processes. You did, right?

Ok, maybe not yet; people are still hurting. However, when an event like this happens it is a good opportunity to revisit the process that failed, identify why it went wrong for you and make improvements. Not the sexy part of security, but we can't all be threat hunting 24/7.

If you haven't started yet, or the new process isn't quite where it needs to be, where do you start?
Maybe start with how fast or slow you should patch. Various standards suggest that you must be able to patch critical and high-risk issues within 48 hours. Not impossible if you approach it the right way, but you do need to have the right things in place to make this happen.
You will need: 

  • Asset information – you need to know what you have, how critical it is and, of course, what is installed on it. Look at each system, evaluate its confidentiality, integrity and availability requirements, and categorise your systems into those that are critical to the organisation and those that are less so.
  • Vulnerability/patch information – you need information from vendors, open source and commercial alike. Subscribe to the various lists, pull in the RSS feeds, etc. Vendors are generally quite keen to let you know once they have a patch (see the feed-polling sketch after this list).
  • Assessment method – the information received needs to be evaluated. Review the issue. Are the systems you have vulnerable? Are those vulnerable systems flagged as important to the business? If the answer is yes to both questions (you may have more), then they go on the "must patch now" list (a triage sketch follows below). The assessment method should contain a step to document your decision. This will keep auditors happy, but it also allows you to better manage risk.
  • Testing regime – speed in patching comes from the ability to test the required functionality quickly and from the reliability of those tests. Having standard tests, or better still automated tests, speeds up validation and allows patching to continue.
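For the vulnerability/patch information ingredient, a minimal sketch of polling an advisory RSS feed with nothing but the Python standard library might look like the following. The feed URL is an assumption standing in for whichever vendor or community feeds you actually subscribe to, and a real implementation would also de-duplicate and persist what it has already seen.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder feed list -- substitute the vendor/community feeds you subscribe to.
FEEDS = [
    "https://isc.sans.edu/rssfeed.xml",
]

def fetch_advisories(url):
    """Yield (published, title, link) tuples from a standard RSS 2.0 feed."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        root = ET.parse(resp).getroot()
    for item in root.iterfind("./channel/item"):
        yield (
            item.findtext("pubDate", default=""),
            item.findtext("title", default=""),
            item.findtext("link", default=""),
        )

if __name__ == "__main__":
    for url in FEEDS:
        for published, title, link in fetch_advisories(url):
            print(f"{published}  {title}\n    {link}")
```

Commercial vulnerability management tools will do this ingestion for you; the point is simply that getting the advisory stream flowing into your assessment step is cheap.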

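To show how the asset information and the assessment method fit together, here is one sketch of the "must patch now" decision expressed in code. The Asset and Advisory shapes, the field names and the severity labels are all illustrative assumptions; your inventory or CMDB will dictate the real structures.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    software: set            # products installed, from your asset inventory
    business_critical: bool  # outcome of the CIA categorisation

@dataclass
class Advisory:
    reference: str           # e.g. vendor bulletin or CVE identifier
    affected: set            # products the vendor lists as vulnerable
    severity: str            # "critical", "high", "medium", "low"

def must_patch_now(assets, advisory):
    """Assets that are vulnerable to the advisory AND critical to the business."""
    if advisory.severity not in {"critical", "high"}:
        return []
    return [a for a in assets
            if a.business_critical and a.software & advisory.affected]

def record_decision(advisory, selected):
    """Document the call so auditors (and future you) can see why it was made."""
    print(f"{advisory.reference}: {len(selected)} asset(s) on the 48-hour list")
    for asset in selected:
        print(f"  - {asset.name}")

# Example usage with made-up data:
assets = [
    Asset("fileserver01", {"windows_server_2012"}, business_critical=True),
    Asset("kiosk07", {"windows_7"}, business_critical=False),
]
advisory = Advisory("MS17-010", {"windows_server_2012", "windows_7"}, "critical")
record_decision(advisory, must_patch_now(assets, advisory))
```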
Once you have the four core ingredients you are in a position to know which vulnerabilities are present and, hopefully, patchable, which systems are most affected by them, and which pose the highest level of risk to the organisation.

The actual mechanics of patching are individual to each organisation. Most of us, however, will be using something like WSUS, SCCM or third-party patching products, and/or their Linux equivalents such as Satellite, Puppet, Chef, etc. In the tool used, define the various categories of systems you have, reflecting their criticality. Ideally have a test group for each; Dev or UAT environments, if you have them, can be great for this. I also often create a "The Rest" group. This category contains servers that have a low criticality and can be rebooted without much notice. For desktops, I often create a test group, a pilot group and a group for all remaining desktops. The pilot group has representatives of most, if not all, types of desktops/notebooks used in the organisation.
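Whichever tool does the pushing, the groups themselves are just data. A hypothetical way to express them in code (the ring names, delays and reboot windows below are invented for illustration, not taken from WSUS, SCCM or any particular product) could be:

```python
# Hours after patch release at which each group receives it, plus when it may reboot.
# All values are illustrative assumptions -- tune them to your own risk appetite.
PATCH_RINGS = {
    "dev":            {"criticality": "low",    "delay_hours": 0,  "reboot": "any time"},
    "uat":            {"criticality": "low",    "delay_hours": 4,  "reboot": "any time"},
    "the_rest":       {"criticality": "low",    "delay_hours": 12, "reboot": "short notice"},
    "pilot_desktops": {"criticality": "medium", "delay_hours": 12, "reboot": "overnight"},
    "all_desktops":   {"criticality": "medium", "delay_hours": 24, "reboot": "overnight"},
    "prod_critical":  {"criticality": "high",   "delay_hours": 36, "reboot": "agreed window"},
}

def rings_due(hours_since_release):
    """Which groups should have the patch by now, given hours since release."""
    return [name for name, ring in PATCH_RINGS.items()
            if hours_since_release >= ring["delay_hours"]]

print(rings_due(12))   # ['dev', 'uat', 'the_rest', 'pilot_desktops']
```

Shrinking those delays, while keeping the ordering, is essentially how you work toward the 48-hour target.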

When patches are released they are evaluated and, if they are to be pushed, they are released to the test groups as soon as possible. Basic functionality and security testing is completed to make sure the patches are not causing issues. Depending on the organisation, we often push DEV environments first, then UAT after a cycle of testing. Within a few hours of a patch being released you should have some level of confidence that it is not going to cause issues. Your timezone may even help you here. In AU, for example, patches are often released during the middle of our night, which means other countries may already have encountered issues and reported them (keep an eye on the ISC site) before we start patching.
The next step is to release the patch to "The Rest" group and, for desktops, to the pilot group. Again, testing is conducted to gain confidence that the patch is not causing issues. Remember, these are low-criticality servers and desktops. Once happy, start scheduling the production releases. Post reboot, run the various tests to restore confidence in the system and you are done.
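As a sketch of those post-reboot checks, the snippet below tests that a patched host answers on the ports and health URL you care about. The hostname, ports and URL are hypothetical; substitute whatever "working" means for each of your systems, and ideally wire the checks into your patching tooling so they run automatically.

```python
import socket
import urllib.request

# Hypothetical checks for one rebooted host -- adjust to your environment.
HOST = "app01.example.internal"
TCP_PORTS = [443, 3389]
HEALTH_URL = "https://app01.example.internal/healthcheck"

def port_open(host, port, timeout=5):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_ok(url, timeout=10):
    """True if the health URL answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def post_patch_checks():
    results = {f"{HOST}:{p}": port_open(HOST, p) for p in TCP_PORTS}
    results[HEALTH_URL] = http_ok(HEALTH_URL)
    return [name for name, ok in results.items() if not ok]  # empty == happy

if __name__ == "__main__":
    failures = post_patch_checks()
    print("All post-reboot checks passed" if not failures
          else f"Investigate before continuing: {failures}")
```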

The biggest challenge in the process is getting a maintenance window to reboot. The best defence against having your window denied is to schedule the windows in advance and get the various business areas to agree to them. Patch releases are pretty regular, so they can be scheduled ahead of time. I like working one or even two years in advance.
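Because Microsoft's regular security releases land on the second Tuesday of the month, the windows really can be booked a year or two out. Here is a small sketch that generates proposed dates; the four-day offset between release and reboot window is an assumption, so pick whatever fits your change process.

```python
import calendar
import datetime

def second_tuesday(year, month):
    """Date of the second Tuesday of the month (the usual Microsoft release day)."""
    days = calendar.Calendar().itermonthdates(year, month)
    tuesdays = [d for d in days
                if d.month == month and d.weekday() == calendar.TUESDAY]
    return tuesdays[1]

def maintenance_windows(start_year, years=2, offset_days=4):
    """Yield (release date, proposed reboot window) pairs for the next few years."""
    for year in range(start_year, start_year + years):
        for month in range(1, 13):
            release = second_tuesday(year, month)
            yield release, release + datetime.timedelta(days=offset_days)

if __name__ == "__main__":
    for release, window in maintenance_windows(datetime.date.today().year):
        print(f"Release {release} -> maintenance window {window}")
```

Circulate the generated dates to the business areas early and get them agreed; the argument about whether a reboot can happen is then largely over before the patch even lands.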

The second challenge is the testing of systems post patching. This will take the most prep work. Some organisations will need people to test systems; some may be able to automate tests. If you need people, organise test teams and schedule their availability ahead of time to help streamline your process. Anything that can be done to gain confidence in the patched system faster will help meet the 48-hour deadline.

If going fast is too daunting, make the improvements in baby steps. If you generally patch every three months, implement your own ideas, or some of the above, and see if you can reduce it to two months. Once that is achieved, try to reduce it further.

If you have your own thoughts on how people can improve their processes, or you have failed (we can all learn from failures), then please share. The next time there is something similar to WannaCry we all want to be able to say "sorted that ages ago".

Mark H - Shearwater
 


Comments

We use a third-party patching product and a vulnerability scanner together. After being in the patch, scan, repeat cycle for a few years, I would never patch without a follow-up scan. We always find a few hosts where the patching agent broke, or someone forgot to install it.
Note: the scan has to be authenticated (scanner can log in to the targets) for it to do any good, as a purely external scan rarely finds missing patches that don't expose ports.
Note: this is mainly for workstations. We are more careful with servers, but they still need patch, scan, ...
That is generally what we do as well. There are lots of other cool things you can do with your vulnerability scanner. Keep an eye out in future diaries.

Mark
The WannaCry outbreak was a huge wake-up call at our organization and helped us kick-start a monthly patching regimen. The threat was very effective in silencing all the detractors who did not want to be bothered by the necessary disruptions that patching and equipment reboots cause.

We plan to follow this up with vulnerability scans as well, but wonder what compliance standards to use. I guess the question is how much is too much or too little?

VM
