Defining “Authorized”

I read an interesting post on Ido Kanner’s blog about the Egilman civil case. Egilman sued an individual after that individual accessed his web site using credentials of another user.

Rather than bringing his case under Title 18, Section 1030 (which governs “unauthorized access to a protected computer system”), Egilman chose to file his case under the Digital Millennium Copyright Act (DMCA) as an anti-circumvention violation. Egilman’s claim was that using a password without permission from the site owner amounted to “circumvention of a technological measure that effectively controls access to a work protected under this title [DMCA].”

The judge reviewing the case, of course, threw it out, finding no indication that an intent to circumvent existed. Rather than circumventing the protection, the defendant was simply complying with it. Egilman’s decision to pursue the case in this manner is indeed puzzling until one looks at the statute involved.

Title 18, Section 1030, offers three potential points of prosecution that would’ve been relevant to Egilman. Any person who commits any of the following actions is guilty of a felony under Section 1030:

(2) intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains—

[...]

(C) information from any protected computer if the conduct involved an interstate or foreign communication;

[...]

(4) knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value, unless the object of the fraud and the thing obtained consists only of the use of the computer and the value of such use is not more than $5,000 in any 1-year period;

[...]

(6) knowingly and with intent to defraud traffics (as defined in section 1029) in any password or similar information through which a computer may be accessed without authorization, if—
(A) such trafficking affects interstate or foreign commerce;

[...]

Given the federal court’s jurisdiction over this issue, Egilman could reasonably have convinced a judge that the defendant obtained information from a protected computer without authorization in violation of paragraph (2), or that the defendant obtained something of value without authorization in violation of paragraph (4). A less straightforward, but still plausible, case could’ve been made for illegal trafficking of a password in violation of paragraph (6).

Instead, Egilman chose to characterize the misuse of the password as circumvention of a protective measure, one meant to shield copyrighted works from public access through the site’s simple password authentication system. Though the merits of password authentication are a debate for another day, the question I was asking at this point was why in the world Egilman chose to pursue the crime as a DMCA violation.

In this case, it appears Egilman chose this avenue of prosecution because the malicious user was actually authorized for the purposes of Section 1030.

For many sites, a mere username and password pairing authorizes you to access protected portions of a site’s content. Some blog hosts, for instance, require nothing more than a valid e-mail address to set up an account, after which a simple username and password suffices for access to that account. Many content providers make no mention (not even in their lengthy Terms of Use agreements, which nobody but me reads) that using an account you did not create is an unauthorized use of the services that site provides.

In such cases, unauthorized means of obtaining a password (exploitation of software flaws, brute-force cracking attempts, etc.) are obviously illegal under Section 1030. The murkier legal territory surrounds cases where an attacker possesses a valid (authorized) set of credentials obtained by some other means, in spite of not being the authorized user. This could even include cases where the attacker was given the credentials by a user who had obtained them illegally. This is true because Section 1030 requires an attacker to “intentionally access a computer without authorization or exceed authorized access” or to “knowingly access a protected computer without authorization” before a crime has been committed. Computer crime laws in most other nations set similar standards of criminal conduct (i.e., the prosecution must prove intent).

If someone who had illegally acquired a password then revealed it to a would-be attacker, the leaker would face conviction under paragraph (6) (language that is, again, mirrored across most of the developed world), but the attacker who used the stolen password could conceivably argue ignorance by claiming that he or she had no idea the access was unauthorized.

Further, a defendant charged under paragraph (6) could make a compelling argument that no crime has been committed, because accessing an account created by another user is not unauthorized according to the TOU (provided the credentials were otherwise lawfully obtained, an exercise left to the reader).

As a security professional, I understand that access to be unauthorized, as do most in this field. However, the legal system doesn’t provide the grounds to prosecute an offender based solely on that assertion. That means a user who willingly reveals credentials may expose himself/herself to damage and you to lost hours, without leaving you any legal recourse. In a world where people still cough up the goods to random strangers in return for candy bars and coffee, that’s an unacceptably high risk.

But don’t panic… the legal system doesn’t force you to accept the costs of moronic users. It only offers you the opportunity to do so if you don’t cover all your bases. The solution to this potential legal pitfall (and the way to avoid being caught in Egilman’s situation) is to ensure that all users who could potentially be asked to authenticate themselves are aware that logging in with a set of credentials is an assertion that they own both the credentials and the account those credentials correspond to. It won’t deter criminals, but it does make them easier to nab if they strike.

At the very least, Terms of Use agreements should be updated to include terms similar to the following:

You agree that you will not disclose your [insert site] account name or password to anyone under any circumstances. You agree to notify [insert site] as expeditiously as possible if you believe that your account details have been compromised. Willful disclosure of account information to a third party may result in the termination of your account at our discretion.

Use of [insert site] user identities not created by you for your personal use is not authorized by [insert site] and is a violation of these terms of use.

This absolves sites of the responsibility to deal with passwords that have been disclosed voluntarily (stolen passwords are another story) by defining such disclosure as prohibited conduct in violation of the TOU. Further, a TOU agreement amended in this fashion also defines the use of another user’s credentials as a violation of the TOU, and specifically as unauthorized.

Problem solved, right? Wrong.

Most providers require a TOU to be read only as a precondition of creating an account, on the assumption that creating an account is a prerequisite to using the services. In a case such as this, that perceived dependency may not actually exist. Concern could therefore arise as to whether the TOU is binding upon a person who logs in with another user’s credentials, since that person was never asked to read the TOU.

The solution to this problem? Require agreement to the TOU to log in. This can be in the form of a checkbox, text in the realm used for HTTP authentication, or say… a line or two of text between the input fields and the submit button on a login form:

Logging into this site indicates your agreement to use the services provided according to our terms of use. For more information, please read the agreement [link].
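For the sake of illustration, here is a rough sketch of what this could look like on a hypothetical site, using nothing more than Python’s standard http.server module. It shows both variants mentioned above: the notice placed between the form fields and the submit button, and the same message carried in the realm of an HTTP Basic authentication challenge. Treat it as a sketch to make the idea concrete, not a drop-in implementation; all names and paths are made up.

    # Sketch only: a hypothetical login page that surfaces the Terms of Use at login time.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TOU_NOTICE = ("Logging into this site indicates your agreement to use the services "
                  "provided according to our <a href='/terms'>terms of use</a>.")

    LOGIN_FORM = f"""
    <form method="post" action="/login">
      <input name="username" placeholder="Username">
      <input name="password" type="password" placeholder="Password">
      <p>{TOU_NOTICE}</p>  <!-- the "line or two of text" before the submit button -->
      <button type="submit">Log in</button>
    </form>
    """

    class LoginDemo(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/members":
                # Variant two: carry the notice in the HTTP authentication realm itself.
                self.send_response(401)
                self.send_header("WWW-Authenticate",
                                 'Basic realm="Logging in indicates agreement to our terms of use"')
                self.end_headers()
            else:
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.end_headers()
                self.wfile.write(LOGIN_FORM.encode())

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), LoginDemo).serve_forever()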

Finally… problem solved. For today. Legal issues are boring, and I’m no superstar lawyer, but not addressing this one could lead to pain down the road… even for non-legal folk.


The Evil of Silent Patches: Microsoft’s Three-Year-Old Hole

When I was reading an iDefense advisory on a vulnerability in Trend Micro’s ServerProtect Console, there was a feeling I just couldn’t shake. The vulnerability felt oddly like déjà vu. I kept thinking to myself:

I’ve seen this before.

While reading the “Vendor Response” section of the advisory, I realized exactly where that feeling came from. The chunked encoding issue iDefense reported was, in fact, an exploit vector for the Microsoft Foundation Classes (MFC) vulnerability I reported in July 2002.

As bad as Microsoft’s security processes still are, they were ten times worse back then. I’ll admit that. When I reported the vulnerability to Microsoft, a case on the issue wasn’t even opened. It wasn’t until I based public criticism of MSRC on the still-unfixed vulnerability many months later that they finally opened a case on the bug. Visual Studio 6.0 Service Pack 6 closed the issue in mid-2004. This timeline alone is nowhere near defensible, particularly for an exploitable heap overflow. But if you think that’s bad… hold onto your chair.

When SP6 was about to be released, I was contacted by someone from MSRC and personally asked to announce the availability of the fix to the discussion lists where I had originally publicized the vulnerability. To this day, I still have not done that. The urge to become a wonderboy PR-piece for the world’s largest corporate empire, it turns out, is not that hard to resist.

The only reason I refused was that I expected word of the fix to reach developers the same way news of other fixes did: from Microsoft. So, of course, I was horrified to find that the Visual Studio Service Pack 6 release documentation made absolutely no mention of the vulnerability. Soon, the real reason I was asked to deliver news of the fix became obvious: Microsoft was protecting its own hide rather than telling customers about the bug. The justification they offered me was even more damning: Microsoft couldn’t possibly hope to reach the developers of vulnerable code, so it shouldn’t place customers at risk by publicly announcing the security fix. And because I, of course, would only announce the fix to the same fora where I had announced the vulnerability, the risk to customers supposedly could not be increased any further.

Does that make sense to anyone else? Didn’t think so… but I was just checking.

So, given the complete lack of any effort on Microsoft’s part to inform even its largest customers that they and their software were affected by a serious security hole, I can’t say I’m surprised by this week’s developments. Now, in December 2005, more than three years after this vulnerability was discovered (and reported publicly), a major manufacturer of security software has been found vulnerable to an attack vector that should have been dealt with more than a year ago… simply by compiling new binaries and distributing them to customers.

While this is an extreme example, it goes to show just how important information is in shaping response. Security fixes cannot simply be made and forgotten: they need effective distribution and uptake to have real-world impact. It wasn’t the lack of a fix, but poor distribution and uptake of a security patch that enabled Blaster to infect 25 million Windows PCs, and bring entire networks to their knees. And… what better way is there to cripple uptake rates than to simply not inform users of the availability of a fix?

We can only hope Microsoft has learned its lesson… if approaches like this remain the norm there, we’re going to be dealing with more viruses, more sleepless nights, and more pounding headaches for quite a while before they ever get it right.


Possible FastClick Malware (UPDATED)

Another so-called “content provider” appears to be using malicious code to spread its advertising. I’ve confirmed that code currently hosted on FastClick.Net (curiously, by FastClick.com, Inc.) bypasses several popular pop-up blockers, and initial evidence indicates that there may be malicious code contained within these scripts. More details as they become available.

For now, I’d encourage all users to block FastClick.com and FastClick.net via HOSTS, IP filtering, or other counter-measures, to avoid the privacy-violating scumware.

UPDATE

My investigation of the FastClick malware would seem to indicate that my suspicion was slightly overblown. It is certainly malicious — the malware detects and circumvents several different pop-up blocking mechanisms. However, it is not readily obvious that users face any threat (beyond annoyance) from this piece of code.

The code seems to get around the pop-up blocking of various applications by carefully interweaving parent/child object relationships and certain input events. In the case of Internet Explorer, however, the code is considerably more aggressive. It invokes four COM objects, presumably in an attempt to dodge pop-up blocking applications.

The four CLSIDs used by this nuisance code are as follows:

Microsoft DHTML Edit Control
{2D360201-FFF5-11D1-8D03-00A0C959BC0A}

Google Toolbar
{00EF2092-6AC5-47c0-BD25-CF2D5D657FEB}

And finally, two unidentified classes that initial investigation suggests are tied to Microsoft Office:

{D2BD7935-05FC-11D2-9059-00C04FD7A1BD}
{9E30754B-29A9-41CE-8892-70E9E07D15DC}

The Google Toolbar control is invoked as a test, because the script’s behavior varies slightly when the toolbar is detected. The DHTML Edit Control is one method apparently used to bypass Internet Explorer’s pop-up blocking. This is presumably the purpose of the latter two controls as well.
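For anyone who wants to check what these class IDs map to on their own machine, the registry makes short work of it. The sketch below (Python on Windows, using the standard winreg module) simply reads the friendly name and implementing DLL for each CLSID out of HKEY_CLASSES_ROOT; the identifications in the comments are the ones given above, not something the script verifies.

    # Sketch (Windows only): resolve the CLSIDs above via HKEY_CLASSES_ROOT\CLSID.
    import winreg

    CLSIDS = [
        "{2D360201-FFF5-11D1-8D03-00A0C959BC0A}",  # reported above as the DHTML Edit Control
        "{00EF2092-6AC5-47c0-BD25-CF2D5D657FEB}",  # reported above as the Google Toolbar
        "{D2BD7935-05FC-11D2-9059-00C04FD7A1BD}",  # unidentified
        "{9E30754B-29A9-41CE-8892-70E9E07D15DC}",  # unidentified
    ]

    def describe(clsid):
        try:
            with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, "CLSID\\" + clsid) as key:
                name = winreg.QueryValue(key, None)            # default value: friendly name
            with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT,
                                "CLSID\\" + clsid + "\\InprocServer32") as key:
                dll = winreg.QueryValue(key, None)             # DLL implementing the control
            return "%s (%s)" % (name, dll)
        except OSError:
            return "not registered on this machine"

    for clsid in CLSIDS:
        print(clsid, "->", describe(clsid))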

I’d like to reiterate at this time that there’s no indication the software is overtly malicious… only that it is a pest. Users concerned about the unwanted pop-ups can block FastClick’s code by using the following line in a HOSTS file:

127.0.0.1 media.fastclick.net

To be safest, I’d recommend blocking all requests to the fastclick.com and fastclick.net domains outright.
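Here’s a small sketch of one way to automate that on Windows: it appends blackhole entries to the HOSTS file for the FastClick hostnames seen so far. The hostname list is illustrative, the path assumes a default install, and it must be run from an administrative account. Note that HOSTS matches exact hostnames only, so for full coverage of the two domains you’ll still want IP- or proxy-level filtering.

    # Sketch: blackhole known FastClick hostnames in the Windows HOSTS file.
    # HOSTS matches exact names only, so this complements (not replaces) IP filtering.
    HOSTS_PATH = r"C:\WINDOWS\system32\drivers\etc\hosts"

    # Illustrative list; add any other hostnames observed serving the pop-up code.
    BLOCKED = ["media.fastclick.net", "fastclick.net", "www.fastclick.net",
               "fastclick.com", "www.fastclick.com"]

    with open(HOSTS_PATH, "r+") as hosts:
        existing = hosts.read()
        for name in BLOCKED:
            if name not in existing:
                hosts.write("\n127.0.0.1 %s" % name)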


Using Architecture to Avoid Dumb Mistakes

The security report against SunnComm’s MediaMax that I documented in my previous blog post seems so amazingly simple: world-writable file permissions. Replace a file, wait for it to be run by an administrative user, and you have total control. Attacks don’t get much more basic, or much more obvious, than that. SunnComm’s example, however, is one of many that illustrate how poorly access control is understood by developers. Major software firms (including, on occasion, Microsoft itself) have misconfigured access control so badly as to practically give away elevated privileges with their products. Something about this picture has got to change.

SunnComm’s mistake may seem obvious, and it certainly does demonstrate a lack of understanding of multi-user security, but their error is strikingly common:

    C:\Program Files\SomeDeveloper\SomeApp>cacls someapp.exe
    C:\Program Files\SomeDeveloper\SomeApp\SomeApp.exe Everyone:F

I’ve obviously edited this output, but I’ve done so to avoid naming a widely deployed software application as granting the Everyone group Full Control on all of its files. This blatant disregard for user security may seem pointless, and indeed it is wholly irresponsible. However, it is often justified by concerns such as allowing less-privileged users to install software updates. The problem is, most companies just don’t get that software update deployment is an administrative task for a reason: it is simply not possible to trust an ordinary user to deploy an update that won’t damage the system.

There are several tasks that applications and system components simply should not trust limited, untrusted users to do. Hence the term “untrusted user”. Access control is one of many areas where some actions are such huge, clear-cut mistakes that even attempting them should immediately call an application into question. Placing ACLs that allow all users to modify an object (essentially unsecuring a secured object) is one of them. I’ve seen many applications grant modification rights on an object to Everyone, world, or an equivalent group, and I’ve never seen one of them do it securely. Not even Microsoft could get that right, and it had to release MS02-064 to advise users to plug a hole in Windows 2000 caused by an access control error of its own.
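If you’d like to check your own machines for this class of mistake, a quick audit is easy to script. The sketch below (Python, calling the cacls tool that ships with Windows 2000/XP) walks an install directory and flags any file whose ACL grants the Everyone group Full Control; treat it as a starting point rather than a complete permissions audit.

    # Sketch: flag files under an install directory that grant Everyone Full Control.
    # Uses the cacls.exe tool from Windows 2000/XP; icacls on later systems prints
    # a similar "Everyone:(F)" form.
    import os
    import subprocess

    ROOT = r"C:\Program Files"

    def everyone_full_control(path):
        out = subprocess.run(["cacls", path], capture_output=True, text=True).stdout
        return "Everyone:F" in out or "Everyone:(F)" in out

    for dirpath, dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if everyone_full_control(path):
                print("world-writable:", path)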

In spite of common knowledge that some security-related actions are potentially dangerous and almost never desired, it is still shockingly easy for an application developer to expose himself/herself to an attack. Windows Vista and Longhorn Server contain API “improvements”, presumably modeled after the simplicity of the access control APIs for Unix-like systems, that will make stripping your system’s security to the bone even easier.

So where are the countermeasures? Though there’s no way to make a system idiot-proof, shouldn’t systems developers work to make being an idiot a little less easy? OpenBSD is famous for its “secure by default” approach, which ships the user a hardened system. Forcing users to open holes, rather than close them, also encourages understanding of the risks involved.

A similar approach should be taken to application-level errors. It’s entirely sensible for an operating system to block obviously dangerous activity. FormatGuard for Immunix is one example: the project was amazingly successful, blocking almost 100% of format string attacks with a near-zero false positive rate. Why? Because it blocked the exploitation of an obvious programming error.

This preemptive defense model has a lot of promise, and could just as easily be applied to other security cases. Suspicious behavior like setting world-writable ACLs on critical objects should raise immediate alarms, and systems developers could do quite a bit to facilitate this. Imagine being a developer of an application like MediaMax, when systems begin to trigger on the insecure ACL behavior and display warnings such as:

WARNING: This application is attempting a potentially-dangerous action. Allowing it to continue may expose your system to compromise by malicious users, viruses, or other threats. Are you sure you want to allow this application to continue?

Now, there will be folks who answer ‘Yes’, but the concern such a warning would prompt in everyone else would more than likely force a redesign on the part of SunnComm and vendors like them. In the ideal case, such a warning would expose the vulnerability before the software ever left the lab, rather than months later, when 20 million CDs carrying the insecure code have already been sold worldwide.
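To make the idea concrete, here’s a toy sketch of such a guard. It isn’t a real operating-system hook (the trustee and right names are made up for illustration), but it captures the policy: an ACL that would hand Everyone write or full control doesn’t get applied until someone explicitly says yes.

    # Toy sketch of the warning described above. The trustee/right names are
    # illustrative, not a real Windows security API.
    DANGEROUS_TRUSTEES = {"Everyone", "World"}
    WRITE_RIGHTS = {"WRITE", "FULL_CONTROL"}

    def confirm_acl(path, grants):
        """Return True if the requested ACL may be applied."""
        risky = [(who, right) for who, right in grants
                 if who in DANGEROUS_TRUSTEES and right in WRITE_RIGHTS]
        if not risky:
            return True
        print("WARNING: an application is attempting a potentially-dangerous action:")
        print("  making %s writable by %s." % (path, risky))
        print("Allowing it to continue may expose your system to compromise.")
        return input("Are you sure you want to allow this? [y/N] ").strip().lower() == "y"

    # The MediaMax-style mistake would stop here and wait for an explicit decision.
    if confirm_acl(r"C:\Program Files\SomeApp\SomeApp.exe", [("Everyone", "FULL_CONTROL")]):
        print("ACL applied.")
    else:
        print("ACL rejected.")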

For something like this to be possible, the old notion of application and system as distinct components will have to be abandoned, in favor of a development concept that recognizes the reality that application and system code are dependent upon one another for functionality, including security. Applications should be architected to avoid potential system holes, and systems should be designed with the goal of making it more difficult to create holes in applications.

Unfortunately, both simplicity and complexity, at least in excess, undermine security at other levels. Sometimes a stop-gap measure is necessary to prevent slightly clueless folk from becoming a major risk.


(More) Security Issues With Sony BMG CDs

A matter of weeks after a recall program for Sony BMG’s “rootkit” XCP technology was put into place, security holes have been found in another protection scheme used by the company.

Reportedly, SunnComm’s MediaMax (the system the more invasive XCP was due to replace) installs binaries with insecure file permissions that let local users gain elevated privileges on any system where MediaMax is installed.

The vulnerability was outlined in a report published by the Electronic Frontier Foundation (EFF) as part of its class-action lawsuit against Sony BMG, which seeks damages for consumer complaints regarding MediaMax, as well as the more controversial XCP.

Sony BMG was already in one wicked mess over XCP, with the State of Texas seeking damages of $100,000 from the company for each XCP-infected system. Now, reports of vulnerabilities in MediaMax may be used as ammunition to further consumer complaints against that controversial system as well.


Information Concerning Reported Firefox Vulnerability

A recent PacketStorm article reproduced by SecuriTeam indicates that a vulnerability has been found in the browsing history code of Mozilla Firefox. Initial investigation confirms that Firefox 1.5 on Windows is not affected, and it appears that the report may be false.

Peter Laborge of SecurityFocus has also written a “news brief” on this vulnerability. It appears at this time that SecurityFocus is spreading inaccurate information and contributing to overblown media reporting on the issue.

Testing the PoC code on Mozilla Firefox 1.5 under Windows XP Service Pack 2 causes no lasting ill effects. Contrary to the public claims, the browser does not hang or crash: startup is slowed considerably, but the browser does function once the delay passes. Deleting the offending history entry clears the sluggishness the supposed “exploit” causes, and the problem clears up naturally once the malicious link expires from the history, which appears to happen after 9 days in Firefox 1.5 by default.

Other posters have also reported that the browser operates normally, with only a delay in startup, after the attack is carried out. Users who are concerned about a few seconds of delay in Firefox’s startup can turn off the history — something many privacy-conscious users have already done — via the Options window in the “Privacy” section.

To reiterate… there is no evidence at this time that a vulnerability exists in Firefox related to history processing.

[EDIT: Mozilla has investigated this issue, and come to the same conclusion. Though there's some slowdown at startup, it's not a hang (the browser loads) and it's not a crash. The Mozilla advisory is available here.]


On “Responsible Disclosure”: Stripping the Veil From Corporate Censorship

If you keep up with Microsoft’s Security Advisory releases (most recently Advisory #911302), you’ll note the following disturbingly typical portion:

Microsoft is concerned that this new report of a vulnerability in [insert product] was not disclosed responsibly, potentially putting computer users at risk. We continue to encourage responsible disclosure of vulnerabilities. We believe the commonly accepted practice of reporting vulnerabilities directly to a vendor serves everyone’s best interests. This practice helps to ensure that customers receive comprehensive, high-quality updates for security vulnerabilities without exposure to malicious attackers while the update is being developed.

Microsoft has included such wording in each and every one of its security advisories that relates to a public disclosure, and it will continue to do so for the foreseeable future. It is rapidly becoming evident that what Microsoft defines as “responsible” is “conforming to the company’s wishes”. The language, aside from being overtly hostile toward a number of talented and professional researchers, is a slap in the face to real efforts at “responsible disclosure”. Microsoft’s public claim to a monopoly on the moral standard of “responsibility” not only costs the company a substantial amount of credibility within the community, but also harms the efforts of researchers who seek real reform in the vulnerability disclosure process.

In the case of 911302, the ‘report of a vulnerability’ Microsoft cites is information published by a British firm regarding the Window.OnLoad race condition in its Internet Explorer browser. The catch Microsoft fails to mention? The vulnerability had already been reported publicly, and Microsoft had discounted it as a non-exploitable flaw. The lag between the two reports also hurts Microsoft’s case: the issue had been known since May, and the code execution possibility was reported in November.

So, in the case of 911302, Microsoft is complaining because it failed to consider the possibility that a class of race conditions that has historically proven exploitable (those that reliably produce calls into freed portions of the virtual address space) would prove equally dangerous in this instance. Microsoft failed to do its homework, and then chastised the British firm (ComputerTerrorism.com) for exposing the company’s gross negligence in its handling of this vulnerability.

While I think CT should have notified Microsoft, its reasons for not doing so are compelling. A large portion of the exploit vector was already publicly known — so much so that CT’s work had probably been accomplished by other malicious actors or was trivially achievable. The malicious members of the community had the same six months that Microsoft had to identify the exploitability of this flaw. As CT’s research illustrates, Microsoft’s disinterest in the flaw was not shared by the community. Therefore, Microsoft’s claims that CT was “irresponsible” (very explicit in its advisory) are brazen at best, flat out wrong at worst.

But Microsoft isn’t the only major corporate organization trying to muzzle researchers by way of public character assassination. Remember Michael Lynn, the researcher sued by Cisco for violating supposed industry standards of “responsible disclosure”? Lynn’s only crime was publishing an exploit for a long-fixed vulnerability in Cisco’s IOS after Cisco failed to acknowledge the hole in release materials for the relevant IOS update.

Remember SnoSoft? The group was threatened with legal action by Hewlett-Packard after exploit code for HP’s software leaked from its laboratories.

When practices like these are treated as crimes, it is clear that “responsible disclosure” has been co-opted by corporate interests to mean “whatever the affected vendor deems acceptable.”

To further illustrate this, I offer you a hypothetical scenario:

A vendor was informed of a vulnerability in its software in early August. The vulnerability was of exceptional severity, and yet the vendor failed to acknowledge this fact. Though a fix was planned, the vendor made no effort to coordinate the release of fixes for different affected products and would offer no immediate timeline for release. In February, 180 days later, the vulnerability is disclosed to the public, with fully-applicable workarounds, in the absence of a vendor-supported fix.

If that vendor were Microsoft, how many people can seriously doubt that we’d be seeing the same exact wording replicated in the advisory on that vulnerability?

The irony of this, of course, is that Microsoft, HP, Cisco, et al, are shooting themselves in the foot. All of those named would do well to give up the deluded vision that the world will soon return to a culture of non-disclosure, granting vendors indefinite timeframes and the absolute freedom to (mis)handle vulnerability information as they choose. History and today’s experience both tell us that trust in vendors on security issues is naive and misplaced.

Unfortunately, the insistence of vendors on using the term “responsible disclosure” as a tool of their hopeless agendas undermines what little hope any of them have to see real reform in the way vulnerability information is handled.

So, if the corporate agenda doesn’t qualify, what is responsible disclosure? What better source for a community standard than CERT? It is one of the few bodies with some credibility in the research community that is also generally respected by vendors. CERT sets a 45-day baseline for disclosing vulnerability information. While this is, in practice, rather toothless, I wish CERT would stick to it, and I wish more members of the community would adopt this relatively moderate standard more rigidly than CERT itself has done.

Using a community clearinghouse as the source of a semi-standard approach to “responsible disclosure” would force vendors to explain why they consider the disclosure policy of an industry leader “irresponsible”, undermining their legal claims and subjecting them to a great deal of bad press. Vendors who refuse to acknowledge this policy as a de facto standard could be handled mercilessly by both the community and the legal system, with a clear basis in community standards.

In addition to debunking false vendor claims of “irresponsible disclosure”, this standard could also be used to establish community precedent that vendors have an obligation to promptly fix vulnerabilities. Any that choose instead to publicly demonize researchers should face a taste of their own medicine — in the form of lawsuits — for this slanderous conduct.

It is time that the vulnerability disclosure debate moved from special interests into the open community, because it is only then that we can hope for a standard of truly responsible disclosure that offers customers real protection and forces some degree of accountability upon commercial vendors for the effects of their ineffective security processes.
