Measuring Software Security and Naming Vulnerabilities
In a couple of recent Bugtraq threads, several of us discussed the following issues:
1. Determining when a product is secure, and how to make that determination.
2. Studying a product's vulnerability history to see what it tells us about the product.
3. The importance of vulnerability complexity, both to that determination and to establishing how professional the programmers are.
Some issues that came out of that discussion were:
1. How good is our current data?
2. How much better is it than it used to be?
3. How can better data be gathered?
As well as:
1. Once again, what should different vulnerabilities be called?
2. Should their name or type reflect their severity?
3. Should we stop using “remote” to describe browser vulnerabilities, client-side vulnerabilities, SQL injections, or user-assisted ones?
We generally came to the conclusion that not all vulnerabilities are born equal, and that a lot of confusion results from not quite enough data being available. When data is available, it is often biased or measured by different standards.
That discussion was basically, as a friend would put it, “talking about the colour of bytes”, and nothing came of it. Still, it shines a spotlight on burning issues in the security realm which bug us all but are not directly related to daily work.
We can still report vulnerabilities, we can still choose disclosure methods, and we can still choose what software we want to buy. It would be so much easier with the data we lack, though, wouldn't it?
I was very happy about this discussion taking place, as these things have been nagging at me for a while, and critical mass is now gathering for something to perhaps happen.
I would like you to take a few minutes and think about these issues. As much as I like to talk (or write), I'd like to hear some opinions other than my own, which is already formed.
Here is a quote from an email I sent on that thread to get you started:
Our history and statistics gathering in this industry is lacking and self-biased at best. Still, this really is a case of “the truth is probably somewhere in the middle”.
Looking at how many past vulnerabilities were found, and of what types, does tell you something. Looking at what the code looks like tells you a great deal more. As an example, if a product has five basic buffer overflows every year, obviously something is wrong there. If instead the vulnerabilities are obscure at best rather than basic buffer overflows, we can at least tell it wasn't the coder being bad, but rather something that can happen to anyone.
Even looking at web applications and their history, one can easily tell whether:
1. They are professionally written.
2. The vulnerabilities seen before, and any we could find ourselves, are non-trivial and don't really reflect badly on the coder.
That's how we chose WordPress for blogging.
I don't see why closed-source software should be any different.
I recently had a chat with a friend about exactly this issue:
Although there is still some untouched code-base around (Excel being a recent example)…
Microsoft's software of today is extremely well-written and professional, far beyond that of most others. Finding vulnerabilities in it is extremely difficult; most vulnerabilities you will find are logical in nature, and not easy ones.
That is not to speak to my (bad to worse) opinion of their disclosure-handling process, etc., but rather to show that they have indeed seriously changed in this regard.