Responsible Disclosure or Full Disclosure?

Published: 2010-07-27
Last Updated: 2010-07-27 21:27:20 UTC
by John Bambenek (Version: 1)
20 comment(s)

The Google Online Security Blog posted a brief article with their opinion on the full vs. responsible disclosure debate... likely in the wake of the controversy over one of their researchers publishing a security vulnerability.  The debate over publishing security vulnerabilities has been and remains a hot one.  Almost all vendors support "responsible disclosure" (a term I absolutely detest), where a researcher discloses the bug only to the software vendor, who then (hopefully) patches it.  Full disclosure is publishing the vulnerability publicly once it is discovered (or, in some cases, once a PR firm has been hired to manage the hype).

There are pros and cons to both approaches.  Responsible disclosure really only works when there is responsible software development.  And keep in mind: if the good guys have the vulnerability, the bad guys have it and at least 12 more.  With the exception of the few vendors that buy vulnerabilities, responsible disclosure does not allow the security community to develop countermeasures to protect against the threat while a patch is being developed.  For instance, it took about a week for software to be developed to detect the LNK vulnerability, and there are still problems with it.  On the other hand, full disclosure hands the details to the bad guys, who can immediately exploit the vulnerability.  It does, however, get vendors and researchers to move quickly.

What are your thoughts on how disclosure should be handled? 

--
John Bambenek
bambenek at gmail /dot/ com


Comments

Related to the terminology, Microsoft's MSRC has introduced a new term: Coordinated Vulnerability Disclosure
--
http://blogs.technet.com/b/msrc/archive/2010/07/22/announcing-coordinated-vulnerability-disclosure.aspx


I would like to see a sensible meet-in-the-middle approach where the researchers you would call "the good guys" disclose to the vendor first and set a deadline for full public disclosure. This gives developers lead time to get the fix into their patching cycle. If the developers find there are extenuating circumstances and that full disclosure is likely to cause significant disruption, they should be able to request a reasonable delay, and "the good guys" should honor it (but only as a temporary postponement).
There really needs to be some whistleblower-type legislation to address this, including bounties.

My personal experience is that most vendors, when you "responsibly disclose" a bug, ignore or threaten you. In those cases, I choose to be "irresponsible" and opt for full disclosure, releasing all information about the vulnerability along with a working exploit. Sometimes they patch, sometimes they don't...
See rain forest puppy's October 2000 RFPolicy 2.0 at http://www.wiretrip.net/rfp/policy.html
Quoting from "The Purpose of this Policy":
This policy exists to establish a guideline for interaction between a researcher and software maintainer. It serves to quash assumptions and clearly define intentions, so that both parties may immediately and effectively gauge the problem, produce a solution, and disclose the vulnerability.

See also the 2002 IETF draft (expired) Responsible Vulnerability Disclosure Process at https://datatracker.ietf.org/doc/draft-christey-wysopal-vuln-disclosure/ . Especially pay attention to Section 5, and the community commentary references within.

The arbitrary upper bound suggested by Google may not be desirable or helpful. Google's post misses rfp's essential point that contact by a researcher is the opening of a conversation about a unique set of circumstances. RFPolicy's goal is a framework for frequent and honest communication between the parties, one that recognizes the good faith of the researcher and the complexities that may be faced by the vendor.

The frustrating thing about Google's post is that it raises no new issues. The community has been down this road before. Google's post, despite citing Bruce Schneier's excellent 2001 Crypto-Gram, reflects an ignorance of prior quality work on this topic.
Best practice: lead by example and by policy:
- http://www.secureworks.com/research/disclosure.html
I have to agree with Shinnai. I've been burned too many times trying to be helpful, and it's just not worth the pain or the professional trouble. I post the vulnerability and the exploit, and if I can work up a patch or a mitigating measure for a vulnerability, I post that, too.
I share Shinnai and No Love's sentiment. As an example, on one occasion a big software developer, which had three previous vulnerabilities published, refused to fix a fourth identified by a pen test. Not until I got the vendor on the phone with the pen tester and told the latter to publish the vulnerability was any progress made. There seems to be a built-in arrogance nowadays on the part of vendors who either don't care or view fixes as inconvenient. I'm in the middle of yet another situation right now where the same issue has popped up again. Disclose!
I have mixed feelings about this subject. Full disclosure does make some vendors take vulnerabilities more seriously; they work faster to get a fix out. But on the other hand, I work in an EDU environment that is decentralized, and it is VERY difficult to get systems patched or workarounds implemented quickly. Your average user at home is more at risk than, say, a large company that is using centralized management.
I am with JW: somewhere in the middle, with "Here is the bug. If you do not patch by xyz date, I will disclose publicly."

If, as in Shinnai's scenario, the company ignores or threatens you, I do not agree that full disclosure is the answer. Shinnai even states, "sometimes they patch, sometimes they don't."

I say in this case, disclose the bug to the security community and let them make their various patches for it. If you disclose publicly, the software company is STILL NOT GOING TO PATCH if they threatened you in the first place... and now the bug is out there with no patch coming. Keep it quiet and fix it yourself.

Now this is where the whistleblower part comes in: the company that threatened the researcher or security expert should absolutely be held responsible in some way for its *irresponsible* approach to the bug.
I believe the problem is that the researchers get placed in a spot where they must disclose eventually if they want to protect the online community. The system as it stands is totally flawed.

What should be done, is the vulnerability should be submitted to CERT or a similar entity, who would then contact the vendor to initiate a fix, and make a vague public statement that there IS a vulnerability. This would let the public know immediately that something should be done, and force the vendor to fix the problem.

If the problem is not fixed within a reasonable time, the vendor should be held responsible for notifying ALL of its customers who have registered the product, and those customers should then be allowed to sell back the software if no fix is available.

Watch how fast things get fixed, and how fast you get real software writers again. It is true, the cost of software would go up a bit, but the world would be a lot safer.

A lot of the software written today is done by inexperienced writers. You get what you pay for, but it is still not right! I worked on Sendmail a loooong time ago, so I am fully aware that bugs exist and need to be squashed. We had a few :-). Yes, I am a dinosaur, but still loving it!

Just my two cents.. -Al
