Should it be Mandatory to have an Independent Security Audit after a Breach?

Published: 2015-03-07
Last Updated: 2015-03-07 22:18:55 UTC
by Guy Bruneau (Version: 1)
8 comment(s)

Security breaches seem to be the norm now. Home Depot, Target, Sony, and JP Morgan Chase, to name a few, have in the recent past been victims of "sophisticated" system compromises that ultimately led to sensitive information being leaked into the open. It is difficult to tell how sophisticated these attacks really were, since we rarely ever see a report on how the attack took place and what could have been done to prevent it (remember the last step of incident response).

One of the latest victims is Anthem Inc., which may have been compromised as early as December 2014, over a period of several weeks. For those who have been victims of this attack, Anthem set up a website to “signup for Identity Theft Repair & Credit Monitoring Services”.

Coming back to my question: should it be mandatory to have an independent security audit performed against the affected systems after a severe breach? The resulting report would be made available to the victims to help them regain trust that their data is secure and, wherever necessary, encrypted and protected. What do you think?

[1] https://www.anthem.com/health-insurance/home/overview
[2] https://www.anthemfacts.com
[3] http://www.oas.org/cyber/documents/IRM-5-Malicious-Network-Behaviour.pdf

-----------

Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu


Comments

It sounds good at first glance, but then consider an attacker who sets himself up as a victim so he can get the report. He learns more about the system than you want him to know. So his strategy is: 1. Become a customer. 2. Launch a simple/small attack -- just enough to trigger an audit. 3. Victimize himself to get a copy of the audit report. 4. Use that report to develop a really bad attack.
Interesting idea, but is there room for an exec-summary sort of response that says, "yes, the problem was like this, and here's how we know it is better now"?
We received our letter from Anthem on Friday: blah, blah, how fast they got on it, two years of monitoring, blah, blah, pat us on the back, we caught it fast. Really? You were in reactionary mode, dolts!!

Now to the question at hand: YES! Especially when breaches like this cross multiple laws and government regulations (PCI, HIPAA). Instead, they offer a crap service; combine that with people's poor memory (or the next sale item) and all is ok. (Those fuzzy slippers feel good when you first put them on.)

Oddly enough, if you have a sexual harassment or discrimination suit, who do they bring in? Outside sources, to make sure it does not happen again.


@Moriah... you would need really, really long arms for that scenario. Anthem, like the others, was breached from the inside; code was already compromised before anyone came at it from the outside. (And what is with the "he"? Are you saying "she" is not smart enough?)
Anthem is a covered entity under HIPAA.
HIPAA audit requirements seem to only go into policies, procedures, and log evidence.
From what I can tell, detailed language specifying configuration and vulnerability scanning for an audit is lacking.
Reference - http://www.hhs.gov/ocr/privacy/hipaa/enforcement/audit/protocol.html
(NOTE: a search for the words "scan" and/or "vulnerability" turns up nothing on that page. "Configuration" is found once, and only in regard to encryption.)

Prior to the incident, Anthem had denied OIG the ability to connect an OIG-owned and -operated system to Anthem's computing environment(s) to conduct an automated scan. Anthem based this on a company policy prohibiting non-company-owned and -operated computers from being connected to its computing environment(s).
The same denial is being used for the "after action" audit that OIG is requesting.
From what I can tell, Anthem is on solid legal ground to continue to deny such a request. It may not be the best marketing decision at this point, but it still appears to be legal.

To be clear, I believe automated configuration and vulnerability scanning is a powerful risk management tool.
I use such tools on a regular basis for managing risk within my organization and I highly recommend such activity to others... on a regular basis!
Conducting such scanning regularly helps answer the "What do we know about ourselves today?" question.

The question that everyone should ask is "how far" should government entities be allowed to act in regards to audits on private entities? From some perspectives, this is similar to laws requiring attention to due process when the government intends or is required to conduct searches of private property.

It is my opinion that a bridging approach to this auditing dilemma is to develop legal regulations that require a covered private entity to have certified tools to conduct scans using a specific measurement baseline. The NIST SCAP program would be a good reference for how that could be accomplished. Upon an audit request, the entity could then conduct the scan and provide the results to the auditor in good faith.
This would allow the government to validate/verify that certain configuration and vulnerability management controls are implemented and functioning while providing the private entity a reasonable privacy and oversight mechanism for an audit.
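To illustrate the idea (a minimal sketch only, not any particular vendor's or agency's tooling): assuming the certified scan tool emits standard XCCDF result XML, the entity could summarize the outcome in Python before handing it to the auditor. The file name below is hypothetical.

import xml.etree.ElementTree as ET
from collections import Counter

def summarize_xccdf_results(path):
    """Tally rule outcomes (pass, fail, notapplicable, ...) in an XCCDF result file."""
    counts = Counter()
    failed_rules = []
    for elem in ET.parse(path).iter():
        # Match on the local tag name so this works regardless of the
        # XCCDF namespace version the scanner used.
        if elem.tag.endswith("rule-result"):
            result = next((c.text for c in elem
                           if c.tag.endswith("}result") or c.tag == "result"), None)
            if not result:
                continue
            outcome = result.strip()
            counts[outcome] += 1
            if outcome == "fail":
                failed_rules.append(elem.get("idref", "unknown-rule"))
    return counts, failed_rules

if __name__ == "__main__":
    counts, failed = summarize_xccdf_results("scan-results.xml")  # hypothetical file name
    print("Result counts:", dict(counts))
    for rule in failed:
        print("FAILED:", rule)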

Do I think configuration and vulnerability scanning should be conducted?:
YES!!

Do I think the government should be allowed to use their own systems to conduct the scan?:
NO, unless there is probable cause.

Do I think a private entity should be required to provide results of an automated scan to the government upon legal request?:
Yes, but only within the scope of a regulatory periodic audit or an incident remediation effort. The entity should be required to use a specific baseline measure for the scan. Scan tool certification and non-repudiation of evidence should also be part of the overall audit requirement specifications.
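As one small, hedged illustration of the non-repudiation piece: before the results leave the entity, both parties could record a digest of the file; a real process would additionally sign it with a key tied to the certified tool. The file name is again hypothetical.

import hashlib

def sha256_of(path, chunk_size=65536):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Record this value on both sides before the results are transmitted,
    # so either party can later show the file was not altered.
    print("SHA-256:", sha256_of("scan-results.xml"))  # hypothetical file name
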
It sounds like a nice idea to require an "independent security audit", but it opens up a can of worms:
1) who is qualified to perform such audits,
2) how often must these audits occur,
3) how long (e.g., how many years?) should such audits be required,
4) what are the repercussions of failing an audit,
5) what are the repercussions of failing multiple audits,
6) who certifies the auditors,
7) who certifies the auditing tools,
8) what legal protections, if any, does submitting to these audits create.

Finally, if this is such a good (and more importantly, PRESUMED EFFECTIVE) idea ex post facto, then shouldn't all companies of some criteria (size, market, PII exposure) be required to submit to these audits as a preventative measure? After all, what's the point of having fire protection inspections only for those companies that have already had a fire?
Per this article, Anthem was audited by the OPM (federal govt.) for security in 2013:
http://www.csoonline.com/article/2893668/data-breach/anthem-accused-of-avoiding-further-embarrassment-by-refusing-audit.html

Audit results:
http://www.opm.gov/our-inspector-general/reports/2013/audit-of-information-systems-general-and-application-controls-at-wellpoint-inc-1a-10-00-13-012.pdf

There were some recommendations made in the audit. Whether any of these were pertinent to the actual breach which occurred, I do not know. In any case, having an audit did not prevent a breach. The point is: how are ex post facto audits going to be any more effective than this audit was?
There is no guarantee that any audit of any kind can prevent a compromise.
What an audit does is (hopefully) establish that due diligence and due care practices are part of the security program.

It is hard to forget that in our litigious society the concepts of due diligence and due care can become a safe harbor for entities that fall victim to a breach or theft.
If the entity is doing all it can to mitigate risk, and has evidence of that in the eyes of the court in case the entity is prosecuted, the entity is protected. A scan result that has been validated by an auditor could be used as evidence in such cases.

The side benefit of scanning is to "know thyself" and verify that what you say you are doing is being accomplished at an acceptable level of risk.

An audit after an event can only re-validate that nothing has failed in the security control implementation since the last audit.
This "after action audit" can be viewed as a witch hunt.
But there is precedence for such measures.
If a food processing company is determined to be the cause of food poisoning, the facilities involved are often inspected to ensure that whatever caused the food poisoning has been remedied.
Does that mean the facility will not have another similar incident? No.
But the government (on behalf of the people) still demands it.

I see after action audits as a fact finding effort.
When I was in the military and we had a sniper shoot at us, we ALWAYS investigated where the shooter got a bead on us, so that we could determine where we had further work to do to protect troops in the future. Sometimes we found the shooter was still in his perch!!
Though a bit off-topic, it does align with why this is happening. Anthem, United, BC, and the other victims are now in reactive mode, since it is obvious they failed in proactive mode. Great, let's give out some deal-based monitoring to look like the person with the white mask and fuzzy slippers.

Cyber attacks are the digital "Wild Wild West," and companies are struggling with the roles of departments. In my Utopian world, Security is king: from the router level down, they are the gatekeeper. That is not the case in >90% of organizations; the CIO or IT department's ego is set in stone. Really? We are cleaning up your behavior, and sadly personalities make it worse. You give a personality to something and now it becomes PC. <Sigh>

Notice the blending of roles: failure #1. If you blend a security position with a failed reactive position, that's 100% failure. We also know that >90% of these problems are PEBKAC, aka Problem Exists Between Keyboard And Chair, i.e., users. They think the PC is theirs, like the one at home.

This is why I highly suggest, and will/would implement, a technology like Damballa at each company I work for/with (no, I am not a paid spokesperson). Strangely enough, when you ask users for their PIN or passwords so that their usage can be tracked, they are offended; yet leave a poisoned USB stick or CD on a desk that will give you that same data, and 100% of the time it will be loaded. Alas, welcome to the age of ignorance.

Failure #2: where is the accountability for bad software? Examples: Flash, Java, and ALL OSes claiming to fix the past that is haunting the present. Reward the people who find the exploits with real money.

As we continue to deal with the digital Wild Wild West, history has shown it will only get worse. Will it go from selling us, to erasing who we were, to selling us back into existence? This is not a new concept by any means; it has been going on for thousands of years, but it is now 1s and 0s instead of ink on skin or scrolls.

Well, I now return you to the regularly scheduled post.
