Responsible Disclosure or Full Disclosure?
The Google Online Security Blog posted a brief article giving their opinion on the full vs. responsible disclosure debate... likely in the wake of the controversy over one of their researchers publishing a security vulnerability. The debate over publishing security vulnerabilities has been, and remains, a hot one. Almost all vendors support "responsible disclosure" (a term that I absolutely detest), where a researcher discloses the bug only to the software vendor, who then (hopefully) patches it. Full disclosure is publishing the vulnerability publicly once it is discovered (or, in some cases, once a PR firm has been hired to manage the hype).
There are pros and cons to both approaches. Responsible disclosure really only works when there is responsible software development. And if the good guys have the vulnerability, the bad guys have it and at least 12 more. With the exception of the few organizations that buy vulnerabilities, responsible disclosure does not allow the security community to develop countermeasures against the threat while a patch is being developed. For instance, it took about a week for software to be developed to detect the LNK vulnerability, and there are still problems with it. On the other hand, full disclosure hands the details to the bad guys in public so they can immediately exploit the vulnerability. It does, however, get vendors and researchers to move quickly.
What are your thoughts on how disclosure should be handled?
--
John Bambenek
bambenek at gmail /dot/ com
Comments
--
http://blogs.technet.com/b/msrc/archive/2010/07/22/announcing-coordinated-vulnerability-disclosure.aspx
I would like to see a sensible meet-in-the-middle approach where the researchers you would call "the good guys" disclose to the vendor first and give a deadline by which they will release full disclosure. This gives developers lead time to get on the patching cycle. If the developers find there are extenuating circumstances and that full disclosure is likely to cause significant disruptions, they should be able to request a reasonable delay, and "the good guys" should honor it (but only as a temporary postponement).
JW
Jul 27th 2010
DrKewp
Jul 27th 2010
shinnai
Jul 28th 2010
Quoting from "The Purpose of this Policy":
This policy exists to establish a guideline for interaction between a researcher and software maintainer. It serves to quash assumptions and clearly define intentions, so that both parties may immediately and effectively gauge the problem, produce a solution, and disclose the vulnerability.
See also the 2002 IETF draft (expired) Responsible Vulnerability Disclosure Process at https://datatracker.ietf.org/doc/draft-christey-wysopal-vuln-disclosure/ . Pay particular attention to Section 5 and the community commentary references within.
The arbitrary upper bound suggested by Google may not be desirable or helpful. Google's post misses rfp's essential point that contact by a researcher is the opening of a conversation about a unique set of circumstances. RFPolicy's goal is a framework for frequent and honest communication between the parties that recognizes the good faith of the researcher and the complexities that may be faced by the vendor.
The frustrating thing about Google's post is that it raises no new issues. The community has been down this road before. Google's post, despite citing Bruce Schneier's excellent 2001 Crypto-Gram, reflects an ignorance of prior quality work on this topic.
TomS.
Jul 28th 2010
- http://www.secureworks.com/research/disclosure.html
PC Tech
Jul 28th 2010
No Love.
Jul 28th 2010
Skyhawk45
Jul 28th 2010
Dana Taylor
Jul 28th 2010
If, as in Shinnai's scenario, the company ignores or threatens you, I do not agree that full disclosure is the answer. Shinnai even states "sometimes they patch, sometimes no".
I say in this case, disclose the bug to the security community and let them make their various patches for it. If you disclose publicly, the software company is STILL NOT GOING TO PATCH if they threatened you in the first place... and now the bug is out there and there will be no patch coming. Keep it quiet and fix it yourself.
Now this is where the whistleblower part comes in. The company that threatened the researcher or security expert should absolutely be held responsible in some way for their *irresponsible* approach to the bug.
husaragi
Jul 28th 2010
What should be done is this: the vulnerability should be submitted to CERT or a similar entity, which would then contact the vendor to initiate a fix and make a vague public statement that there IS a vulnerability. This would let the public know immediately that something should be done, and force the vendor to fix the problem.
If the problem is not fixed within a reasonable time, the vendor should be held responsible for notifying ALL of its registered customers of the situation, and those customers should then be allowed to sell back the software if no fix is available.
Watch how fast things get fixed, and how fast you get real software writers again. It is true, the cost of software would go up a bit, but the world would be a lot safer.
A lot of the software written today is done by inexperienced writers. You get what you pay for, but it is still not right! I worked on Sendmail a loooong time ago, so I am fully aware bugs exist and need to be squashed. We had a few :-). Yes I am a dinosaur, but still loving it!
Just my two cents.. -Al
Al - Your Data Center
Jul 28th 2010