Is security testing more “security” or more “testing”?

A while ago I tried to start a discussion on the Daily Dave mailing list using the provocative subject line “Does Fuzzing really work?”.
I was hoping to start a fruitful information exchange. After all, I know there are many people out there who are either busy developing fuzzers or busy using them (we’re doing the former), so why not share some information and see what makes sense and what doesn’t?

Well, no real discussion came out of that post (I don’t count pissing contests as meaningful discussion), which might mean there are fewer ‘fuzzing’ people out there than I thought.

Two interesting dialogs did come out of it, though, both via private email replies.
One was an intriguing discussion with Robert Fly, who heads up a security team at Microsoft that works across a number of product groups. Robert described their security testing procedures and the fuzzing technology used in that testing. Let me sum it up by saying it was nothing short of amazing. Those guys seem to be on top of most (if not all) of the fuzzing technology improvements, but what’s more amazing is that they have a testing procedure in place, one that’s right out of the textbook. Did I mention I was impressed?

The second was a discussion with Disco Johnny. DJ is a very knowledgeable guy; he seems to come from the testing field and thus treats security testing as just another case of ‘regular’ testing.
He is trying to use common testing methodologies to find bugs that happen to be security related. I come from vulnerabilities: for me, testing for security holes in products is just an improvement on vulnerability assessment (VA). Where VA scanning looks for known vulnerabilities, black-box testing or fuzzing looks for unknown vulnerabilities.
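
To make the distinction concrete, here is a minimal sketch (Python; the product names, versions, and payload logic are all hypothetical, not any real tool): the VA side is a lookup against published advisories, while the fuzzing side mutates a valid input and watches the target for crashes, symptoms of vulnerabilities nobody has catalogued yet.

```python
import random
import subprocess

# Hypothetical advisory data: (product, version) pairs with known vulnerabilities.
KNOWN_VULNERABLE = {("exampled", "2.0.1"), ("exampled", "2.0.2")}

def va_scan(product, version):
    """Known vulnerabilities: a simple lookup against published advisories."""
    return (product, version) in KNOWN_VULNERABLE

def fuzz(target_cmd, seed, iterations=1000):
    """Unknown vulnerabilities: mutate a valid input and watch for crashes."""
    for i in range(iterations):
        data = bytearray(seed)
        for _ in range(random.randint(1, 8)):    # flip a few random bytes
            data[random.randrange(len(data))] = random.randrange(256)
        proc = subprocess.run(target_cmd, input=bytes(data),
                              capture_output=True, timeout=5)
        if proc.returncode < 0:                  # on POSIX: killed by a signal
            print(f"iteration {i}: target crashed (signal {-proc.returncode})")
```

A real fuzzer adds protocol awareness, crash triage, and so on; the point is only that nothing in this loop consults a database of known holes.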

The difference between those approaches is more than a nuance. When we look for vulnerabilities we look for certain classes of vulnerabilities: buffer overflows, for example, probably account for 90% or more of newly published vulnerabilities. For most software vendors, eliminating 90% of their vulnerabilities is great news, especially if those are the most exploitable ones.
With the ‘testing’ approach, we instead try to cover all “usage scenarios” (usually by measuring code coverage) and find whatever bugs are there.
Taking the usage-scenario direction does not take into account how popular a certain attack vector is, but on the other hand it makes a better theoretical model. If someone discovers a new type of security flaw à la format strings (say, that the character “^” leads to code execution), most fuzzers will miss it entirely, since recorded history (and common sense) does not list “^” as an attack vector. Does this make the testing approach better? It’s hard for me to say: I haven’t seen an actual tool or product that implements this theory, or even a design or write-up of a tool that could work that way. On the other hand, fuzzing tools can practically find vulnerabilities here and now, and nobody has discovered that slippery new attack vector that may theoretically make fuzzers obsolete.
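
For what it’s worth, here is a minimal sketch of that trade-off (Python, with made-up payloads throughout): a history-driven fuzzer draws its inputs from a dictionary of known vulnerability classes, so an input like “^” is simply never generated, while the exhaustive ‘testing’ ideal would eventually emit it, at an intractable cost.

```python
from itertools import product

# Illustrative payloads only: a dictionary built from known vulnerability classes.
KNOWN_CLASS_PAYLOADS = [
    b"A" * 5000,     # buffer-overflow probe: a very long input
    b"%n%n%n%n",     # format-string write probe
    b"%s" * 100,     # format-string read probe
]

def history_driven_inputs():
    """Yield only payloads derived from published vulnerability classes.
    b'^' never appears here, because recorded history does not suggest it."""
    yield from KNOWN_CLASS_PAYLOADS

def exhaustive_inputs(max_len=2):
    """Yield every byte string up to max_len bytes: the 'testing' ideal.
    This would eventually try b'^', but the space grows as 256**n."""
    for n in range(1, max_len + 1):
        for combo in product(range(256), repeat=n):
            yield bytes(combo)
```

Measuring code coverage (the “usage scenarios” metric above) is the usual answer to that 256**n explosion, since it tells you which of those inputs actually exercise new code paths.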

Unfortunately I can’t share much about my dialog with Robert just yet, but the Disco Johnny discussion is now live on the fuzzing mailing list. If you want to dive in, read DJ’s latest two posts; they take a while to read (and require some CS background) but are very interesting nevertheless.

What’s your take on security product testing? Is it more like checking for vulnerabilities or more like checking for bugs? More “Security” or more “Testing”?
