Using Architecture to Avoid Dumb Mistakes

The security report against SunnComm’s MediaMax that I documented in my previous blog post describes something amazingly simple: world-writable file permissions. Replace a file, wait for it to be run by an administrative user, and you have total control. Attacks don’t get much more basic, or much more obvious, than that. SunnComm’s example, however, is one of many illustrating that access control is poorly understood by developers. Major software firms (including, on occasion, Microsoft itself) have misconfigured access control so badly as to make their products practically give away elevated privileges. Something about this picture has got to change.

SunnComm’s mistake may seem obvious, and it certainly does demonstrate a lack of understanding of multi-user security, but their error is strikingly common:

    C:\Program Files\SomeDeveloper\SomeApp>cacls someapp.exe
    C:\Program Files\SomeDeveloper\SomeApp\SomeApp.exe Everyone:F

I’ve obviously edited this output, but only to avoid naming a widely-deployed software application that grants Full Control to the Everyone group on all of its files. This blatant disregard for user security may seem pointless, and it is certainly irresponsible. However, it is often justified by concerns such as allowing less-privileged users to install software updates. The problem is, most companies just don’t get that software update deployment is an administrative task for a reason: it is simply not possible to trust a user to deploy an update that won’t damage the system.
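The original report doesn’t show how an ACL like that ends up on disk, but as a rough sketch (the path and application names below are hypothetical stand-ins for the edited output above), the Win32 call sequence that produces an Everyone:F entry looks roughly like this:

    #include <windows.h>
    #include <aclapi.h>

    /* Hypothetical sketch: how an installer ends up granting Everyone Full
     * Control on its own binary. This is the mistake, not a recommendation. */
    int main(void)
    {
        EXPLICIT_ACCESS ea;
        PACL pNewDacl = NULL;

        ZeroMemory(&ea, sizeof(ea));
        ea.grfAccessPermissions = GENERIC_ALL;          /* "Full Control" */
        ea.grfAccessMode        = GRANT_ACCESS;
        ea.grfInheritance       = NO_INHERITANCE;
        ea.Trustee.TrusteeForm  = TRUSTEE_IS_NAME;
        ea.Trustee.TrusteeType  = TRUSTEE_IS_WELL_KNOWN_GROUP;
        ea.Trustee.ptstrName    = TEXT("Everyone");

        if (SetEntriesInAcl(1, &ea, NULL, &pNewDacl) != ERROR_SUCCESS)
            return 1;

        /* Path is hypothetical, standing in for the edited output above. */
        SetNamedSecurityInfo(TEXT("C:\\Program Files\\SomeDeveloper\\SomeApp\\SomeApp.exe"),
                             SE_FILE_OBJECT, DACL_SECURITY_INFORMATION,
                             NULL, NULL, pNewDacl, NULL);

        LocalFree(pNewDacl);
        return 0;
    }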

There are several tasks that applications and system components simply should not trust limited, untrusted users to do; hence the term “untrusted user”. Access control is one of many areas where some actions are such huge, clear-cut mistakes that even attempting them should immediately call an application into question. Placing ACLs that allow all users to modify an object (essentially unsecuring a secured object) is one of them. I’ve seen many applications try to grant modification rights on an object to Everyone, world, or equivalent roles (the Unix-style version of the mistake is sketched below), and I’ve never seen one of them do it securely. Not even Microsoft could get that right: it had to release MS02-064 to advise users to plug a hole in Windows 2000 caused by an access control error of its own.
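On Unix-like systems the same blunder is a one-liner. A minimal sketch (the path and function name are hypothetical):

    #include <sys/stat.h>

    /* The world-writable equivalent of Everyone:F -- any local user can now
     * replace this binary and wait for an administrator to run it. */
    int make_world_writable(const char *path)   /* e.g. "/opt/someapp/someapp" */
    {
        return chmod(path, 0777);   /* rwxrwxrwx: read/write/execute for everyone */
    }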

In spite of common knowledge that some security-related actions are potentially dangerous and almost never desired, it is still shockingly easy for an application developer to expose himself or herself to attack. Windows Vista and Longhorn Server contain API “improvements”, presumably modeled after the simplicity of the access control APIs on Unix-like systems, that will make stripping your system’s security to the bone even easier.

So where are the countermeasures? Though there’s no way to make a system idiot-proof, shouldn’t systems developers work to make being an idiot a little less simple? OpenBSD is well known for its “secure by default” approach, which ships the user a hardened system. Forcing users to open holes, rather than close them, also encourages understanding of the risks involved.

A similar approach should be taken to application-level errors. It’s entirely sensible for an operating system to block obviously dangerous activity. FormatGuard for Immunix is one example of this. That project was amazingly successful, blocking almost 100% of format string attacks with a near-zero false positive rate. Why? Because it blocked the exploitation of an obvious programming error (sketched below).
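For readers who haven’t run into the bug class, here is a minimal illustration of the kind of error FormatGuard targets (the function and variable names are mine, not from the original post):

    #include <stdio.h>

    /* Logging a user-supplied string. */
    void log_message(const char *user_input)
    {
        /* BUG: user_input is used as the format string. An attacker who sends
         * "%x %x %n" can read the stack or write to memory. FormatGuard-style
         * defenses catch the mismatch between format directives and arguments. */
        printf(user_input);

        /* Correct: the format string is a constant; the input is just data. */
        printf("%s", user_input);
    }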

This preemptive defense model has a lot of promise, and could just as easily be applied to other security cases. Suspicious behavior like setting world-writable ACLs on critical objects should raise immediate alarms, and systems developers could do quite a bit to facilitate this. Imagine being the developer of an application like MediaMax when systems begin to trigger on the insecure ACL behavior and display warnings such as:

WARNING: This application is attempting a potentially-dangerous action. Allowing it to continue may expose your system to compromise by malicious users, viruses, or other threats. Are you sure you want to allow this application to continue?

Now, there will be folks who answer ‘Yes’, but the concern such a warning prompts in everyone else would more than likely force a redesign on the part of SunnComm and vendors like it. In the ideal case, such a warning would expose the vulnerability before the software ever left the lab, rather than months later, after 20 million CDs carrying the insecure code had been sold worldwide.
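What would such a check look like? The post doesn’t specify an implementation, but as a rough sketch, a system-level hook could inspect a requested DACL before applying it and flag any entry that grants modification rights to Everyone. The function below is my own illustration, not an existing Windows API:

    #include <windows.h>

    /* Hypothetical policy check: walk a requested DACL and flag any entry
     * granting Everyone the right to modify the object. A system hook could
     * reject the request, warn, or log on a match. */
    BOOL DaclGrantsEveryoneWrite(PACL pDacl)
    {
        SID_IDENTIFIER_AUTHORITY world = SECURITY_WORLD_SID_AUTHORITY;
        PSID everyone = NULL;
        BOOL suspicious = FALSE;
        DWORD i;

        if (pDacl == NULL)        /* NULL DACL: everyone gets everything */
            return TRUE;

        if (!AllocateAndInitializeSid(&world, 1, SECURITY_WORLD_RID,
                                      0, 0, 0, 0, 0, 0, 0, &everyone))
            return FALSE;

        for (i = 0; i < pDacl->AceCount; i++) {
            ACCESS_ALLOWED_ACE *ace;
            if (!GetAce(pDacl, i, (LPVOID *)&ace))
                continue;
            if (ace->Header.AceType != ACCESS_ALLOWED_ACE_TYPE)
                continue;
            /* Any "write-ish" right granted to the Everyone SID is suspect. */
            if (EqualSid((PSID)&ace->SidStart, everyone) &&
                (ace->Mask & (GENERIC_ALL | GENERIC_WRITE |
                              FILE_WRITE_DATA | WRITE_DAC | WRITE_OWNER)))
                suspicious = TRUE;
        }

        FreeSid(everyone);
        return suspicious;
    }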

For something like this to be possible, the old notion of application and system as distinct components will have to be abandoned, in favor of a development concept that recognizes the reality that application and system code are dependent upon one another for functionality, including security. Applications should be architected to avoid potential system holes, and systems should be designed with the goal of making it more difficult to create holes in applications.

Unfortunately, both simplicity and complexity, taken to excess, undermine security at other levels. Sometimes a stop-gap measure is necessary to keep slightly clueless folks from becoming a major risk.

  • sunshine

    How much on everybody clicking “next-next”? Especially security professionals?

  • http://blogs.securiteam.com/index.php/archives/author/mattmurphy/ Matthew Murphy

    As I document in my post, nothing can solve a click-happy user.

    If you make it distinct enough (i.e. make the prompt a top-level window with a unique icon in the GUI case) and maybe even prevent users from blindly clicking through it (as Firefox does with its security warnings already), you might convince people to actually read the dialog.

The other thing is that this isn’t about protecting individual systems as much as it is about alerting people that code is doing something it shouldn’t be doing. If a dev writes his fancy new installer that sets an ACL like this (God forbid) and then gets this prompt, they’re going to have to investigate why. That is, of course… assuming companies actually do QA. That isn’t always true. :-)

  • sunshine

    Allow me to explain my thoughts…
    Much like with secrecy, user warnings are very effective. However, much like with secrecy, once overdone they tend to be:
    1. Ridiculed.
    2. Down-played.
    3. Ignored.
    (not necessarily in that order)

    If a user has to “accept”, “agree” or click through more than one window a year, he/she is not going to pay attention and will just click yes-yes-yes.

    A user doesn’t care and should not care about problems. All the user cares about is usability.

    Burden the user with decisions and he will find a way not to make them. Aside from that, never forget -
    users LOVE clicking.

    History, the future and phishing prove me right.

  • http://BeyondSecurity.com ido

    here’s something i wrote in october regarding more or less the same situation sunshine is talking about:
    http://blogs.securiteam.com/?p=106

  • http://www.BeyondSecurity.com aviram

    i don’t know if this example proves matt’s point or sunshine’s, but my favorite virus is the one that sent itself in an encrypted archive, with the password embedded in an image inside the mail.
    a user that got the virus had to:

    1. double click the attachment
    2. when prompted for a password, go search for it in the email
    3. enter the password (correctly; it was rather complicated so that the av couldn’t brute-force it)
    4. open the archive and double click on the executable inside
    5. get infected (actually, that happened automatically :-) )

    it seems there are enough users out there who are smart enough to extract and execute password-protected archives, but still stupid enough to go ahead and do that despite a thousand virus warnings. i guess

  • http://blogs.securiteam.com/index.php/archives/author/mattmurphy/ Matthew Murphy

    Some users are REALLY that stupid. Truth be told, I tend to regard such users as lost causes.

    However, there are the few users who would care. Alternatively, I suppose this particular case doesn’t even require a warning. The system could simply block the dangerous ACL from ever being set.

    The only people I really see a warning helping are the devs who write the insecure code. Maybe what we need, then, is not a global warning, but a reject-and-log approach (i.e., store an Event Log message explaining the rejection of the ACL), perhaps supplemented by extra explanatory information for developers only. A developer would be warned when his/her code attempted such an insecure action, while an ordinary user would simply see the request dropped without their interaction.