On “Responsible Disclosure”: Stripping the Veil From Corporate Censorship

If you keep up with Microsoft’s Security Advisory releases (most recently Advisory #911302), you’ll note the following disturbingly typical portion:

Microsoft is concerned that this new report of a vulnerability in [insert product] was not disclosed responsibly, potentially putting computer users at risk. We continue to encourage responsible disclosure of vulnerabilities. We believe the commonly accepted practice of reporting vulnerabilities directly to a vendor serves everyone’s best interests. This practice helps to ensure that customers receive comprehensive, high-quality updates for security vulnerabilities without exposure to malicious attackers while the update is being developed.

Microsoft has included such wording in every one of its security advisories that responds to a public disclosure, and will continue to do so for the foreseeable future. It is rapidly becoming evident that what Microsoft defines as “responsible” is “conforming to the company’s wishes”. The language, aside from being overtly hostile toward a number of talented and professional researchers, is a slap in the face to genuine efforts at “responsible disclosure”. Microsoft’s public claim to a monopoly on the moral standard of “responsibility” not only costs the company a substantial amount of credibility within the community, but also harms the efforts of researchers who seek real reform in the vulnerability disclosure process.

In the case of 911302, the ‘report of a vulnerability’ Microsoft cites is information published by a British firm regarding the Window.OnLoad Race Condition in its Internet Explorer browser. The catch that Microsoft fails to mention? The vulnerability had already been reported publicly after Microsoft discounted it as a non-exploitable flaw. The lag time between the two reports also hurts Microsoft’s case: the issue has been known since May, and the code execution possibility was reported in November.

So, in the case of 911302, Microsoft is complaining because it failed to consider the possibility that a class of race conditions (those that reliably produce calls to free portions of the virtual address space) that has historically proven exploitable would prove equally dangerous in this instance. Microsoft failed to do its homework, and then chastised the British firm (ComputerTerrorism.com) for exposing the company’s gross negligence in its handling of this vulnerability.
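The class of flaw at issue can be sketched abstractly. The following is a hypothetical Python model (not the actual Internet Explorer code, whose internals are not public) of why a race that frees memory while another code path still holds a reference has historically proven exploitable: the stale reference ends up pointing at memory an attacker may have reallocated and filled with data of their choosing. The `Allocation`, `free`, and `on_load_handler` names are illustrative only.

```python
# Hypothetical model of a use-after-free race condition.
# We represent "freed" memory explicitly rather than invoking
# real undefined behavior.

class Allocation:
    """Stands in for a heap allocation (e.g. a browser window object)."""
    def __init__(self, data):
        self.data = data
        self.freed = False

def free(obj):
    """The racing code path releases the allocation early."""
    obj.freed = True
    obj.data = None   # slot returned to the heap; an attacker may reuse it

def on_load_handler(obj):
    # Dangerous pattern: the handler trusts the reference it captured
    # earlier and performs no liveness check. On a real heap, whoever
    # reallocated this slot first controls what gets read here --
    # often a function pointer, hence code execution.
    return obj.data.upper()

page = Allocation("window state")
free(page)                    # the racing code path wins
try:
    on_load_handler(page)     # handler fires on a dead object
    outcome = "no crash"
except AttributeError:
    outcome = "stale reference dereferenced after free"
print(outcome)
```

In this toy model the stale access merely raises an exception; in native code the same interleaving reads attacker-influenced memory, which is exactly the gap between “crash bug” and “code execution” that separated the May report from the November one.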

While I think CT should have notified Microsoft, its reasons for not doing so are compelling. A large portion of the exploit vector was already publicly known — so much so that CT’s work had probably already been duplicated by malicious actors, or was trivially achievable. The malicious members of the community had the same six months Microsoft had to identify the exploitability of this flaw. As CT’s research illustrates, Microsoft’s disinterest in the flaw was not shared by the community. Therefore, Microsoft’s claims that CT was “irresponsible” (made very explicitly in its advisory) are brazen at best, flat-out wrong at worst.

But Microsoft isn’t the only major corporate organization trying to muzzle researchers by way of public character assassination. Remember Michael Lynn, the researcher sued by Cisco for violating supposed industry standards of “responsible disclosure”? Lynn’s only crime was publishing an exploit for a long-fixed vulnerability in Cisco’s IOS after Cisco failed to acknowledge the hole in release materials for the relevant IOS update.

Remember SnoSoft? The group was threatened with legal action by Hewlett-Packard after exploit code for HP’s software leaked from its laboratories.

When these practices are criminalized, the meaning of “responsible disclosure” has clearly been co-opted by corporate interests to mean “what is deemed acceptable by the affected vendor.”

To further illustrate this, I offer you a hypothetical scenario:

A vendor was informed of a vulnerability in its software in early August. The vulnerability was of exceptional severity, and yet the vendor failed to acknowledge this fact. Though a fix was planned, the vendor made no effort to coordinate the release of fixes for different affected products and would offer no immediate timeline for release. In February, 180 days later, the vulnerability is disclosed to the public, with fully-applicable workarounds, in the absence of a vendor-supported fix.

If that vendor were Microsoft, how many people can seriously doubt that we’d see exactly the same wording replicated in the advisory on that vulnerability?

The irony of this, of course, is that Microsoft, HP, Cisco, et al., are shooting themselves in the foot. All of those named would do well to give up the deluded vision that the world will soon return to a culture of non-disclosure, granting vendors indefinite timeframes and the absolute freedom to (mis)handle vulnerability information as they choose. History and today’s experience both tell us that trust in vendors on security issues is naive and misplaced.

Unfortunately, the insistence of vendors on using the term “responsible disclosure” as a tool of their hopeless agendas undermines what little hope any of them have to see real reform in the way vulnerability information is handled.

So, if the corporate agenda doesn’t qualify, what is responsible disclosure? What better source for a community standard than CERT? It’s one of the few bodies with real credibility in the research community that is also generally respected by vendors. CERT sets a 45-day baseline for disclosing vulnerability information. While this deadline is, in practice, rather toothless, I wish CERT would stick to it, and I wish more members of the community would adopt this relatively moderate standard more rigidly than CERT has.

Using a community clearinghouse as the source of a semi-standard approach to “responsible disclosure” would force vendors to explain why they consider the disclosure policy of an industry leader “irresponsible”, undermine their legal claims and subject them to large amounts of bad press. Vendors who fail to acknowledge this policy as de facto standard could be handled mercilessly by both the community and the legal system, with clear basis in community standard.

In addition to debunking false vendor claims of “irresponsible disclosure”, this standard could also be used to establish community precedent that vendors have an obligation to promptly fix vulnerabilities. Any that choose instead to publicly demonize researchers should face a taste of their own medicine — in the form of lawsuits — for this slanderous conduct.

It is time that the vulnerability disclosure debate moved from special interests into the open community, because it is only then that we can hope for a standard of truly responsible disclosure that offers customers real protection and forces some degree of accountability upon commercial vendors for the effects of their ineffective security processes.

  • sunshine

    Quick comment on the first paragraph only…
    This is not the first time I’ve seen what you describe: portraying it as “responsible”, “ethical”, or even “legal” to first talk to a certain company, and on its own terms. I am not referring to Microsoft here.

    This is common practice. Microsoft is just being overt about it, and probably sees it as legitimate — after all, they are big enough to set the standard in most fields.

    The big vendors in the security industry act the same way. Most of them are extremely — extremely — hostile competitors in every respect, and are just as overt in their public statements, even if vague.


  • http://www.BeyondSecurity.com aviram

    There’s a short thread about it on funsec:


  • SteveChristey

    Hi Matt,

    Add Litchfield/Sybase and Auriemma/Epic to your list, plus Kornbrust/Oracle (the latter being indirectly referenced in a Mary Ann Davidson eWeek commentary, if I recall correctly). Someone somewhere needs to maintain a list of legal threats against researchers (hint hint to any readers out there).

    One notable change in recent disclosure policies is that the original Christey/Wysopal IETF draft, and RFPolicy preceding it, emphasized allowances for researchers to release details after a particular time frame if the vendor was not sufficiently responsive. Both policies advocated 30 days, with allowances.

    That allowance is missing from later proposals. In hindsight, I wish “responsible” was not the term we used, because it was too loaded. But it is definitely clear that researchers lack a voice in providing an alternate view.

    - Steve

  • sunshine

    I think some of the comments turn this into yet another FD discussion. What bugged me in particular is how some vendors use responsible disclosure, which can be a Good Thing, to pressure researchers by suggesting they are not being RESPONSIBLE if they don’t follow that company’s rules.

  • Matthew Murphy

    sunshine, you hit the nail on the head.

    I don’t have a problem with vendors who disagree with me on the most effective means of vulnerability handling. However, I take issue with vendors who call me “irresponsible” because I have a disclosure policy they disagree with. Most of them do this without any evidence to support their claim that my style of disclosure is any more risky. Some pin their arguments to “community standards” that simply don’t exist.
