The sun will come out tomorrow
I remember when I was first introduced to DES. It was in some computer magazine whose name I can’t recall, and it went something like: “the DES algorithm is so powerful that even if you could run several DES brute-force attempts per second, the sun will die and our galaxy will be destroyed before you can try all the DES combinations.” It made sense – 2^56 is a very big number, far more than the measly 5-10 billion years our sun has left to live. Back then there was also speculation about how the NSA could break it. It was a well-documented fact that the NSA made some subtle changes to the DES algorithm, and the popular assumption was that they put in a ‘back door’ so that their supercomputer could break it. There had to be an NSA backdoor, since there were mathematical proofs of the impossibility of breaking DES in a reasonable time (like, within the age of the universe) or for a reasonable amount of money (let’s say, the entire worth of the world’s economy). Who can argue with a mathematical proof that contains a lot of exponents and relies on bulletproof analogies?
Almost a decade later I learned cryptology from Eli Biham, co-inventor of differential cryptanalysis. He spent a full lecture on the DES design and algorithm, and we were all quite convinced that its 16 rounds and mysterious S-box design were unbreakable. Biham finished the lecture by saying “…and next week, I’ll tell you how DES is broken”, and indeed the following week he taught us differential cryptanalysis. The method was impractical and mostly theoretical, so it didn’t really “break” DES, but it exposed the first weakness, and I started losing faith in the whole “the world will end before…” jive.
It was only a few years later that DES collapsed. It wasn’t through clever differential cryptanalysis, though. It wasn’t even by finding the ‘secret NSA backdoor’ everybody was looking for in the 80s. In fact, many were shocked to discover that the NSA’s changes to the S-boxes actually made DES more resistant to differential cryptanalysis attacks. They didn’t want the algorithm weakened by other means, possibly because they could already brute-force it way back then.
DES was broken because something unexpected happened. A supercomputer from the 70s has less processing power than the average PC sold at Walmart. In fact, a $500 PC running a standard operating system can try hundreds of thousands of DES combinations per second while allowing its operator to play Solitaire. Get hold of thousands or even tens of thousands of PCs (think a medium-size corporation after 5pm, or a university during summer vacation) and you’ve got about a billion DES brute-force attempts per second. The sun will come up tomorrow, and the DES encrypted message will be broken by that time.
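The arithmetic behind that collapse is easy to sketch. The trial rates below are illustrative assumptions rather than benchmarks, except the last one, which is roughly the rate of EFF’s 1998 “Deep Crack” machine (it found a DES key in 56 hours):

```python
# Brute-force arithmetic for DES's 56-bit keyspace.
# The trial rates passed in below are illustrative assumptions.
KEYSPACE = 2 ** 56

def avg_search_days(keys_per_second: float) -> float:
    """Expected time to find the key (half the keyspace, on average), in days."""
    return (KEYSPACE / 2) / keys_per_second / 86400

print(avg_search_days(250_000))   # a single PC: ~1.67 million days (millennia)
print(avg_search_days(1e9))       # ~10,000 pooled PCs: ~417 days
print(avg_search_days(9e10))      # EFF's Deep Crack rate: under 5 days on average
```

The point is the slope: the keyspace is fixed at 2^56 forever, so every order of magnitude the attacker gains in hardware shaves an order of magnitude off the search.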
If I were to go back in time and tell a computer science professor that in 30 years an average person would have access to processing power a billion times that of a supercomputer, I would be committed on the spot (or worse – sent to the social sciences department). Yes, I admit that it’s hard to anticipate something like that – to keep with the flawed analogies, that would be like me telling you that in 30 years we’ll all be living in mansions like Bill Gates while paying 1/10 of the rent we pay today.
On the other hand, just because we can’t grasp something doesn’t make it impossible. I made that mistake myself when, 8 years ago, I argued passionately that Windows vulnerabilities were impossible to exploit. I gave very detailed reasoning. I thought I knew a lot about security. Two years later, David Litchfield gave a step-by-step explanation of how to exploit buffer overflows on Windows. Reading back what I wrote then makes me want to get into the time machine again, visit the young me, and hit myself with the clue stick (and tell the astonished me that whatever stupid thing I write will be saved forever and can be pulled up by a search engine in less than a second. After that, I should probably give myself the winning lottery numbers and a travel brochure).
I figured people stopped making outrageous claims about what’s ‘impossible’ in computer security, and then I stumbled upon this. My favorite quote (attributed to Jon Callas, the CTO of PGP corporation):
[...] consider a cluster of [grain sized] computers, so many that if you covered the earth with them, they would cover the whole planet to the height of 1 meter. The cluster of computers would crack a 128-bit key on average in 1,000 years.
Really, Jon? Sure, it can be backed up by the ‘exponential growth’ problem and by looking at the results of various distributed cracking projects. But will an encrypted message sent by a Coca-Cola executive, containing the secret formula and encrypted with a 128-bit PGP key, survive brute-force attacks 5 years from now? 10 years? 20? 30? Would you wager $23B a year on that? I wouldn’t.
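For scale, the quoted figure can roughly be reproduced with back-of-the-envelope parameters. All the inputs here are my assumptions, not anything Callas published: grain-sized computers of about 1 mm³ each, stacked 1 meter deep over the Earth’s surface (~5.1 × 10^14 m²), each trying 10,000 keys per second:

```python
# Reconstructing the 'planet covered in grain-sized computers' figure.
# Every parameter below is an assumption chosen for the sketch.
EARTH_SURFACE_M2 = 5.1e14   # Earth's total surface area
GRAINS_PER_M3 = 1e9         # 1 mm^3 grains packed into a cubic meter
KEYS_PER_GRAIN = 1e4        # assumed trial rate per grain, keys/second

grains = EARTH_SURFACE_M2 * 1 * GRAINS_PER_M3   # a layer 1 meter deep
rate = grains * KEYS_PER_GRAIN                  # combined keys per second
avg_seconds = (2 ** 128 / 2) / rate             # half the keyspace on average
print(avg_seconds / (365 * 24 * 3600))          # ~1,058 years
```

So the quote holds – under those assumed rates. The lesson of DES is that the rate is precisely the part nobody has ever predicted correctly.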
Don’t get me wrong, brute force should not be the primary concern of someone securing their system from attack. It’s much easier to find an unpatched network vulnerability, or run a social-engineering attack to get what’s needed. But 2005 was an amazing year for cryptanalysis, with weaknesses found in major hashing algorithms and Chinese crypto experts leap-frogging what we thought was possible in some fields.
My advice? Whenever someone describes ‘impossible’ in terms of planets, atoms or large exponents, ask them to give it to you in writing. Ten years later, go back to them with a “what were you thinking?”. With some luck, they’ll be rich and famous and you could shame them in public. I’m saving mine for Jon Callas. Modern cryptographic systems are essentially unbreakable? Yeah, and 640k should be enough for anybody.