Exploiting our security models?

I’m sitting in CanSecWest.  We’ve just had a talk on platform-independent static binary code analysis.  (It isn’t really platform independent: it just translates from specific instruction sets into a common form.  Not that it isn’t cool: REIL is a sort of RISC version of an assembly version of pseudocode.)  The presentation, and what they’ve done so far, is fairly abstract.  They are approaching the analysis with a type of Turing machine and a sort of lattice-based state machine model, hoping that the transforms they can see in their model are close enough to what the actual program will do on an actual machine to tell you whether there is the possibility of a bug or an exploit.
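
To make that a little more concrete, here is a toy sketch (my own illustration in Python, not the speakers’ tool; the opcode names and tuple layout are invented for the example, not the actual REIL specification) of what lifting a couple of x86-style instructions into a small, uniform intermediate form might look like:

    # Toy lifter: maps a few x86-style instructions onto a tiny three-address
    # intermediate form.  Purely illustrative -- not the real REIL instruction set.
    def lift(insn):
        """Translate one (mnemonic, operands) pair into a list of generic IR tuples."""
        mnemonic, ops = insn
        if mnemonic == "add":                  # add eax, ebx
            dst, src = ops
            return [("ADD", dst, src, dst)]    # dst := dst + src
        if mnemonic == "push":                 # push eax
            (src,) = ops
            return [("SUB", "esp", 4, "esp"),  # esp := esp - 4
                    ("STORE", src, "esp")]     # mem[esp] := src
        return [("UNKNOWN", mnemonic)]         # unmodelled instructions are flagged, not guessed

    for insn in [("add", ("eax", "ebx")), ("push", ("eax",))]:
        print(insn, "->", lift(insn))

Everything downstream then only has to reason about a handful of generic operations, whatever the original architecture was.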

So, it’s kind of complex.  We are applying some highly abstract, theoretical stuff pretty directly to the real world.

Now, in the abstract world, it’s been more than 25 years since Fred Cohen proved that this type of thing will never completely work.  Either you are going to get an infinite number of false positives (false alarms, where you spend time chasing down problems that aren’t problems), or an infinite number of false negatives (which is our current situation with security: our tools aren’t telling us about the problems that do exist), or both.

(One of the authors responded to this point by saying that he chose to err on the side of false positives.  A reasonable position if you are doing research.)
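
To see what that trade-off looks like in practice, here is a minimal sketch (again my own toy, not the presented system) of a sound sign-domain analysis: both branches leave y nonzero, but the join loses that fact, so the checker warns about a division by zero that can never actually happen.

    # Toy sign-domain analysis: abstract values are POS, NEG, ZERO, or TOP (unknown).
    # Joining branch states loses precision, which is exactly where false positives come from.
    def join(a, b):
        """Least upper bound on the flat sign lattice: agree exactly, or give up to TOP."""
        return a if a == b else "TOP"

    def sign(n):
        """Abstract a concrete integer into the sign domain."""
        return "ZERO" if n == 0 else ("POS" if n > 0 else "NEG")

    # Concrete program being modelled:  if flag: y = 2  else: y = -3  then: print(x / y)
    state_then = {"y": sign(2)}                              # POS
    state_else = {"y": sign(-3)}                             # NEG
    merged = {"y": join(state_then["y"], state_else["y"])}   # TOP -- zero is not ruled out

    if merged["y"] in ("ZERO", "TOP"):
        print("warning: possible division by zero")          # false positive: y is never 0 at runtime

Within this toy, at least, the over-approximation can only add spurious alarms; it never hides the real ones.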

However, this system is so complex that it got me thinking: they are hoping that the model and transforms they have put together are close enough to reality to give them useful results, but they really don’t know.  What if we are now at the point where our security tools and models themselves have gaps that can hide problems, and be exploited?

(There was a reason the original security models were so simple …)
