What did we Learn from WannaCry? - Oh Wait, We Already Knew That!

Published: 2017-05-23
Last Updated: 2017-05-23 14:59:46 UTC
by Rob VandenBrink (Version: 1)

In the aftermath of last week's excitement over the WannaCry malware, I've had a lot of "lessons learned" meetings with clients.  The results are exactly what you'd expect, but in some cases came as a surprise to the organizations we met with.  
There was a whole outcry about not "victim shaming" during and after this outbreak, and I get that, but in most cases infections came down to process failures that the IT group didn't know they had.  These "lessons learned" sessions have contributed to improving the situation at many organizations.

The short list is below - affected companies had one or more of these issues:


1/ Patch
Plain and simple, when vendor patches come out, apply them.  For many organizations, Patch Tuesday means "Reboot Wednesday", or worst case "Reboot Saturday".  If you don't have a "test the patches" process, then in a lot of cases simply waiting a day or two (to let all the early birds test them for you) will do the job.  If you do have a test process, in today's world it truly needs to take 7 days or less.
There are some hosts that you won't be patching.  The million dollar MRI machine, the IV pump or the 20 ton punch press in the factory, for instance.  But you know about those, and you've segmented them away (in an appropriate way) from the internet and your production assets.  This outbreak wasn't about those assets - what got hammered by WannaCry was the actual workstations and servers, the hospital stations in admitting and the emergency room, the tablet that the nurse enters your stats into and so on.  Normal user workstations that either weren't patched, or were still running Windows XP.

That being said, there are always some hosts that can be patched, but can't be patched regularly.  The host that's running active military operations, for instance, or the host that's running the call center for flood/rescue operations, e-health or a suicide hotline.  But you can't just give up on those - in most cases there is redundancy in place so that you can update half of those clusters at a time.  If there isn't, you still need to somehow get them updated on a regular schedule.

Lesson learned?  If your patch cycle is longer than a week, in today's world you need to revisit your process and somehow shorten it.  Document your exceptions, put something in place to mitigate that risk (network segmentation is a common one), and get senior management to sign off on the risk and the mitigation.
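If you want to spot-check where a given Windows host stands on a specific patch, something as simple as the Python sketch below can help.  It's a minimal example, not a patch management system: it shells out to "wmic qfe" for the installed hotfix list and compares it against a hard-coded set of MS17-010 KB numbers.  Treat that KB list as an assumption to verify against Microsoft's MS17-010 bulletin - the right KBs vary by OS version, and later cumulative rollups supersede them.

import subprocess

# Illustrative MS17-010 hotfix IDs (verify against the Microsoft bulletin for your OS versions)
MS17_010_KBS = {"KB4012212", "KB4012213", "KB4012214", "KB4012215",
                "KB4012216", "KB4012217", "KB4013429"}

def installed_hotfixes():
    # Win32_QuickFixEngineering via wmic - one hotfix ID per output line
    out = subprocess.check_output(["wmic", "qfe", "get", "HotFixID"], text=True)
    return {line.strip() for line in out.splitlines() if line.strip().startswith("KB")}

if __name__ == "__main__":
    found = installed_hotfixes() & MS17_010_KBS
    print("MS17-010 related hotfix:", sorted(found) if found else "NONE found - check this host")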

2/  Unknown Assets are Waiting to Ambush You

One factor in this last attack was hosts that weren't in IT's inventory.  In my group of clients, what this meant was hosts controlling billboards or TVs running ads in customer service areas (the menu board at the coffee shop, the screen telling you about retirement funds while you wait in line at the bank, and so on).  If this had been a Linux worm, we'd be talking about projectors, TVs and access points today.

One and all, I pointed those folks back to the Critical Controls list ( https://www.cisecurity.org/controls/ ).  In plain English, the first item is "know what's on your network" and the second item is "know what is running on what's on your network".

If you don't have a complete picture of these two, you will always be exposed to whatever new malware (or old malware) that "tests the locks" at your organization.
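As a starting point (and only a starting point), even a quick sweep of your own address space will often turn up SMB-speaking devices that nobody remembers owning.  Here's a minimal Python sketch that checks a subnet for anything answering on TCP/445 - the subnet below is a placeholder, and a real inventory needs far more than a port check.

import ipaddress
import socket

SUBNET = "192.168.1.0/24"   # placeholder - substitute your own ranges
PORT = 445                  # SMB

def smb_listeners(subnet):
    hits = []
    for ip in ipaddress.ip_network(subnet).hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.3)
            if s.connect_ex((str(ip), PORT)) == 0:   # 0 means the connection succeeded
                hits.append(str(ip))
    return hits

if __name__ == "__main__":
    for host in smb_listeners(SUBNET):
        print("SMB listener:", host)

Compare the hits against what you think you own - the difference is your "menu board at the coffee shop" list.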

3/ Watch the News.
.... And I don't mean the news on TV.  Your vendors (in this case Microsoft) have news feeds, and there are a ton of security-related news sites, podcasts and feeds (this site is one of those, our StormCast podcast is another).  Folks that "watch the news" knew about this issue starting back in 2015, when Microsoft started advising us to disable SMB1, and then again last year (2016) when Microsoft posted their "We're Pleading with you, PLEASE disable SMB1" post.  We knew specifically about the vulnerabilities used by WannaCry in January when the Shadowbrokers dump happened, we knew again when the patches were released in March, and we knew (again, much more specifically) when those tools went live in April.  In short, we were TOLD that this was coming - by the time this was on the TV media, it was very old news.
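Keeping up doesn't have to be a manual chore, either.  Here's a minimal Python sketch using the feedparser library (pip install feedparser) that pulls the newest items from a couple of feeds - the URLs are assumptions to verify, and you'd add the vendor feeds that matter in your environment.

import feedparser

FEEDS = [
    "https://isc.sans.edu/rssfeed.xml",        # SANS ISC diaries (assumed URL - verify)
    "https://msrc-blog.microsoft.com/feed/",   # Microsoft MSRC blog (assumed URL - verify)
]

for url in FEEDS:
    feed = feedparser.parse(url)
    print("==", feed.feed.get("title", url), "==")
    for entry in feed.entries[:5]:             # newest handful of items
        print(" -", entry.get("title", "(no title)"))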

4/ Segment your network, use host firewalls
In most networks, workstation A does not need SMB access to workstation B.  Neither of them needs SMB access to the mail server or the SQL host.  They do need that access to the SMB-based shares on the file and print servers though.  If you must have SMB version 1 at all, then you have some other significant issues to look at.
Really what this boils down to is the Critical Controls again.  Know what services are needed by whom, and permit that.  Set up "deny" rules on the network or on host firewalls for the things that people don't need - or best case, set up denies for "everything else".  I do realize that this is not 100% practical.  For instance, denying SMB between workstations is a tough one to implement, since most admin tools need that same protocol.  Many organizations only allow SMB to workstations from server or management subnets, and that seems to work really nicely for them.  It's tough to get sign-off on that sort of restriction - management will often see it as a drastic measure.
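One way to keep yourself honest once rules are in place is to write the policy down as data and test it.  The Python sketch below is a minimal example, run from a workstation: it lists a few flows that should work and a few that should be blocked, then checks them.  The host names and addresses are placeholders.

import socket

# (description, target host, port, expected_reachable)
POLICY = [
    ("workstation -> file server SMB", "fileserver01", 445, True),
    ("workstation -> peer workstation SMB", "10.10.20.15", 445, False),
    ("workstation -> SQL host SMB", "sqlhost01", 445, False),
]

def reachable(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for description, host, port, expected in POLICY:
    actual = reachable(host, port)
    status = "OK" if actual == expected else "POLICY VIOLATION"
    print(f"{status}: {description} - expected {'open' if expected else 'blocked'}, got {'open' if actual else 'blocked'}")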

Disabling SMB1 should have happened months ago, if not year(s) ago.
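If you want to check where a Windows host stands, Microsoft's guidance (KB2696547) describes the SMB1 server side as controlled by the "SMB1" DWORD under HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters - 0 means disabled, and a missing value means the default (enabled).  Here's a minimal, read-only Python sketch of that check; treat the registry details as something to verify against that KB for your OS versions.

import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

def smb1_server_enabled():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        try:
            value, _ = winreg.QueryValueEx(key, "SMB1")
            return value != 0
        except FileNotFoundError:   # value absent - SMB1 defaults to enabled
            return True

if __name__ == "__main__":
    print("SMB1 server enabled:", smb1_server_enabled())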

5/ Have Backups
Many clients found out *after* they were infected by WannaCry that their users were storing data locally.  Don't be that company - either enforce central data storage, or make sure your users' local data is backed up somehow.  Getting users to sign off that their local data is ephemeral - that it's not guaranteed to be there after a security event - is good advice, but after said security event IT generally finds out that, even with that signoff, everyone in the organization still holds them responsible.

All too often, backups fall on the shoulders of the most junior staff in IT.  Sometimes that works out really well, but all too often it means that backups aren't tested, restores fail (we call that "backing up air"), or critical data is missed.

Best just to back up your data (all of your data) and be done with it.
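If your backup job drops dated archives somewhere, even a small automated sanity check beats finding out during an incident.  The Python sketch below is a minimal example, assuming daily .zip archives land on a share (the path is a placeholder) - it checks that the newest archive exists, is recent, isn't empty, and passes a CRC test.  The only real test is an actual restore; this just catches the obvious "backing up air" failures.

import time
import zipfile
from pathlib import Path

BACKUP_DIR = Path(r"\\backupserver\backups")   # placeholder - adjust to your environment
MAX_AGE_HOURS = 26                             # daily job plus some slack

def newest_backup(directory):
    archives = sorted(directory.glob("*.zip"), key=lambda p: p.stat().st_mtime)
    return archives[-1] if archives else None

def check_backup():
    latest = newest_backup(BACKUP_DIR)
    if latest is None:
        return "FAIL: no backup archives found"
    age_hours = (time.time() - latest.stat().st_mtime) / 3600
    if age_hours > MAX_AGE_HOURS:
        return f"FAIL: newest backup {latest.name} is {age_hours:.0f} hours old"
    with zipfile.ZipFile(latest) as zf:
        if not zf.namelist():
            return f"FAIL: {latest.name} is empty"
        bad = zf.testzip()                     # CRC check; returns the first bad member, or None
        if bad:
            return f"FAIL: {latest.name} has a corrupt member: {bad}"
    return f"OK: {latest.name} ({age_hours:.0f} hours old)"

if __name__ == "__main__":
    print(check_backup())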

6/ Have a Plan

You can't plan for everything, but everyone should have had a plan for the aftermath of WannaCry.  The remediation for this malware was the classic "nuke from orbit" - wipe the workstation's drives, re-image and move on.  This process should be crystal-clear, and the team of folks responsible for delivering on this plan should be similarly clear.

I had a number of clients who, even a week after infection, were still building their recovery process while they were recovering.  If you don't have an Incident Response Plan that includes widespread workstation re-imaging, it's likely time to revisit your IR plan!

7/ Security is not an IT thing
Security of the company's assets is not just an IT thing, it's a company thing.  Senior management doesn't always realize this, but this week is a good time to reinforce that concept.  Failing to secure your workstations, servers, network and especially your data can knock a company offline - for hours, days, or forever.  Putting this on the shoulders of the IT group alone isn't fair, as the budget and staffing approvals for this responsibility are often out of their hands.

Looking back over this list, it comes down to: Patch, Inventory, Keep tabs on Vendor and Industry news, Segment your network, Backup, and have an IR plan.  No shame and no finger-pointing, but we've all known this for 10-15-20 years (or more) - this was stuff we did in the '80s back when I started, and that we've been doing since the '60s.  This is not a new list - we've been at this for 50 years or more; we should know it by now.  But from what was on TV this past week, I guess we need a refresher?

Have I missed anything?  Please use our comment form if we need to add to this list!

===============
Rob VandenBrink
Compugen


Comments

I had a huge "I told you so" I got to lay on people; if you don't need it, don't run it / close it.

I had just finished closing all ports on workstations except 3389. Even my manager was like, well, that's going to make it harder to get stuff done in the future. I explained attack footprint to him...

I got to gloat. When I did a review of our situation I was like, SMB on all workstations is closed, so it's moot.

I love it when a plan comes together.

Also we just finished segmentation, so that was great as well.

Also I had an urgent ticket in the system for admins to patch MS17-010 for 3 months. They did not do it, they were scared. I now have political capital to help them get a more formal patch process with SLAs, etc...
