CCC: traffic analysis

the amazing steven murdoch did some traffic analysis work on tor, trying to detect machines behind the anonymizing network. tor itself seems as secure as it has ever been; see the comment below.
“by requesting timestamps from a computer, a remote adversary can find out the precise speed of its system clock. as each clock crystal is slightly different, and varies with temperature, this can act as a fingerprint of the computer and its location.”
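
to make the quote a bit more concrete, here is a rough sketch of the measurement it describes. it is only an illustration: the remote timestamp source is simulated (in the real attacks the samples come from tcp timestamp options or icmp/http timestamps), and the fingerprint is the fitted skew, not the absolute time.

```python
# rough sketch of clock-skew fingerprinting, for illustration only.
# we simulate a remote clock with a known skew; a real measurement would pull
# timestamps from the target (e.g. tcp timestamp options, as in kohno et al.).
import random
import time

TRUE_SKEW_PPM = 37.0        # pretend the target's crystal runs 37 ppm fast

def get_remote_timestamp(local_now: float) -> float:
    # hypothetical probe: remote clock = skewed local time + network jitter
    return local_now * (1 + TRUE_SKEW_PPM / 1e6) + random.gauss(0, 0.0005)

def estimate_skew_ppm(samples: int = 200, interval: float = 10.0) -> float:
    t0 = time.time()
    pairs = [(t0 + i * interval, get_remote_timestamp(t0 + i * interval))
             for i in range(samples)]
    # least-squares slope of remote time against local time;
    # the slope's deviation from 1.0 is the relative clock skew
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    slope = (sum((x - mx) * (y - my) for x, y in pairs)
             / sum((x - mx) ** 2 for x, _ in pairs))
    return (slope - 1.0) * 1e6

print(round(estimate_skew_ppm(), 1))   # recovers something close to 37 ppm

# the fingerprint is that this number stays stable for a given machine
# (and drifts with its temperature), not the clock value itself.
```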

ftp://ftp.fortunaty.net/video/23c3/wmv/timeskew2-t2s1.wmv
http://events.ccc.de/congress/2006/fahrplan/events/1513.en.html

anyone remember caida’s study on using clock crystals to detect machines behind nats?
http://www.caida.org/publications/papers/2005/fingerprinting/kohnobroidoclaffy05-devicefingerprinting.pdf

another good lecture on traffic analysis at ccc was an introduction by george danezis:
http://events.ccc.de/congress/2006/fahrplan/attachments/1185-danezistaintro.pdf

gadi evron,
ge@beyondsecurity.com.


More CCC Presentations and Videos

other presentations i enjoyed, which i just noticed online:
[pdf] george danezis, introducing traffic analysis

[wmv] georg wicherski, automated botnet detection and mitigation

[wmv] gadi evron, fuzzing in the corporate world (yes, mine)

[wmv] ilja van sprundel, unusual bugs

[pdf] ilja van sprundel, unusual bugs

[wmv] michael steil, inside vmware

more here [mirror]. all mirrors, etc. can be found here. i hope everything becomes available soon.

gadi evron,
ge@beyondsecurity.com.


Defeating Image-Based Virtual Keyboards and Phishing Banks

recently i stumbled upon http://www.hispasec.com/laboratorio/cajamurcia_en.htm, which nicely shows how a trojan horse can grab a user’s pin fairly easily by combining keystroke capture with screenshot capture. it made me wonder why the attackers bother with this approach at all, when the pin can often be retrieved simply by sniffing the data the user sends to the banking site, even though it is “encrypted”.

image-based keyboards (or virtual keyboards) were invented to make life harder for banking and phishing trojan horses (specifically keystroke loggers, or keyloggers); some even suggested they be used specifically to defeat these trojan horses. the bad guys adapted to this technology and escalated: the trojan horses now take screenshots around the mouse pointer to determine which number was clicked. the thing is, this is often unnecessary, because most (though not all) of the implementations of this technique that we looked into were flawed.

instead of sending the remote image and waiting for the keystroke information to be sent back to the server (the scenario in which the on-click pointer screenshots described above are used), some banks send the pin in cleartext, while others encrypt it; one such example is cajamurcia. even when encryption is used, banks tend to implement it badly, making it easy to recover the pin from its encrypted form.

i investigated a bit further into how cajamurcia handles these pin strokes (with virtual keyboards) and noticed something strange: they take the timestamp of their server (cajamurcia) and send it to you – this already poses a security problem – and this timestamp is then used to encrypt the pin you entered.

this would have been a reasonable idea if the timestamp were not sent back to the server, which would make it hard (or at least harder) to guess the timestamp used to encrypt the data, at the cost of making it harder for the server to know which timestamp was provided to the client (unless it stores it in the session information). as it is, though, the timestamp is sent back to the server, so a sniffer sees everything needed to decrypt the data (the pin).
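
before getting to the actual poc request, here is a minimal conceptual sketch of the problem. the cipher below is made up purely for illustration (a keyed hash stream, not whatever cajamurcia actually uses); the point is only that when the key material (the timestamp) travels in the same request as the ciphertext, a passive sniffer already has everything it needs.

```python
# illustration only: any scheme keyed solely by a value that is also sent
# along with the ciphertext has this problem. the toy cipher here xors the
# pin with a keystream derived from the server-supplied timestamp.
import hashlib

def keystream(timestamp: str, length: int) -> bytes:
    return hashlib.sha256(timestamp.encode()).digest()[:length]

def encrypt_pin(pin: str, timestamp: str) -> bytes:
    return bytes(p ^ k for p, k in zip(pin.encode(), keystream(timestamp, len(pin))))

# what the client sends, as seen by a sniffer on the wire
timestamp = "20070118123442"                      # hypothetical server-issued value
request = {"ts": timestamp, "pin_enc": encrypt_pin("4921", timestamp)}

# the eavesdropper: everything needed to decrypt is inside the request itself
ks = keystream(request["ts"], len(request["pin_enc"]))
print(bytes(c ^ k for c, k in zip(request["pin_enc"], ks)).decode())   # -> 4921
```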

poc:

a request to the server would look like:
(more…)


Utimaco replies to SafeGuard Easy encryption key vulnerability

As reported on the Bugtraq list last Friday:

However, it seems that the encryption keys are hardcoded directly in the EXE file. So, they are easily recoverable and all these CFG files can be easily compromised.

This case concerns the encryption of the configuration files (.CFG) used when installing several workstations at the same time with centralised management tools. SafeGuard Easy itself is a hard drive encryption product.
The company’s response, entitled Statement on SafeGuard Easy Articles regarding Configuration File Vulnerability, is located here [2-page PDF]:
(more…)


The sun will come out tomorrow

I remember when I was first introduced to DES. It was in some computer magazine whose name I can’t recall, and it went something like: “the DES algorithm is so powerful that even if you could run several DES brute force attempts per second, the sun will die and our galaxy will be destroyed before you can try all the DES combinations.” It made sense – 2^56 is a very big number, far more than the measly 5-10 billion years our sun has left to live. Back then there was also speculation about how the NSA could break it. It was a well-documented fact that the NSA had made some subtle changes to the DES algorithm, and the popular assumption was that they had put in a ‘back door’ so that their supercomputers could break it. There had to be an NSA backdoor, since there were mathematical proofs of the impossibility of breaking DES in a reasonable time (like, within the age of the universe) or for a reasonable amount of money (let’s say, within the entire worth of the world’s economy). Who can argue with a mathematical proof that contains a lot of exponents and relies on bulletproof analogies?

Almost a decade later I learned cryptology from Eli Biham, the co-inventor of differential cryptanalysis. He spent a full lecture on the DES design and algorithm, and we were all quite convinced that its 16 rounds and mysterious S-box design were unbreakable. Biham finished the lecture by saying “…and next week, I’ll tell you how DES is broken”, and indeed the following week he taught us differential cryptanalysis. The method was impractical and mostly theoretical, so it didn’t really “break” DES, but it showed the first weakness, and I started losing faith in the whole “the world will end before…” jive.

It was only a few years later that DES collapsed. It wasn’t through clever differential cryptanalysis, though. It wasn’t even by finding the ‘secret NSA backdoor’ everybody was looking for in the 80s. In fact, many were shocked to discover that the NSA’s changes to the S-boxes actually made DES more resistant to differential cryptanalysis attacks. They didn’t want the algorithm to be weakened by other means, possibly because they could already brute-force it way back then.
DES was broken because something unexpected happened. The processing power of a supercomputer from the 70s is less than that of the average PC sold at Walmart. In fact, a $500 PC running a standard operating system can try hundreds of thousands of DES combinations per second while letting its operator play Solitaire. It’s not difficult to get hold of thousands or even tens of thousands of PCs (think of a medium-size corporation after 5pm, or a university during summer vacation), and you’ve got about a billion DES brute-force attempts per second. The sun will come up tomorrow, and the DES-encrypted message will be broken soon after.
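
For the curious, here is the keyspace arithmetic behind that, spelled out as a back-of-the-envelope sketch; the rates are the rough figures from the paragraph above, plus an assumed order of magnitude for dedicated cracking hardware:

```python
# back-of-the-envelope: how long does the 2^56 DES keyspace last at a given rate?
DES_KEYSPACE = 2 ** 56                      # roughly 7.2e16 keys

def years_to_exhaust(keys_per_second: float) -> float:
    """Years to try every key (halve it for the average case)."""
    return DES_KEYSPACE / keys_per_second / (3600 * 24 * 365)

for label, rate in [("single $500 PC (~5e5 keys/s)",           5e5),
                    ("borrowed corporate cluster (~1e9/s)",     1e9),
                    ("dedicated cracking hardware (~1e11/s)",   1e11)]:
    print(f"{label:42s} {years_to_exhaust(rate):10.2f} years")
```

A single machine is hopeless, a borrowed cluster gets there in a couple of years, and purpose-built hardware measures the job in days.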

If I were to go back in time and tell a computer science professor that in 30 years an average person would have access to processing power a billion times that of a supercomputer, I would be committed on the spot (or worse – sent to the social sciences department). Yes, I admit that it’s hard to anticipate something like that – keeping with the flawed analogies, it would be like me telling you that in 30 years we’ll all be living in mansions like Bill Gates while paying 1/10 of the rent we pay today.

On the other hand, just because we can’t grasp something doesn’t make it impossible. I made that mistake myself when, 8 years ago, I argued passionately that Windows vulnerabilities were impossible to exploit. I gave very detailed reasoning. I thought I knew a lot about security. Two years later, David Litchfield gave a step-by-step explanation of how to exploit buffer overflows on Windows. Reading back what I wrote then makes me want to get into the time machine again, visit the young me, and hit myself with the clue stick (and tell the astonished me that whatever stupid thing I write will be saved forever and can be pulled up by a search engine in less than a second. After that, I should probably give myself the winning lottery numbers and a travel brochure).

I figured people had stopped making outrageous claims about what’s ‘impossible’ in computer security, and then I stumbled upon this. My favorite quote (attributed to Jon Callas, the CTO of PGP Corporation):

[...] consider a cluster of [grain sized] computers, so many that if you covered the earth with them, they would cover the whole planet to the height of 1 meter. The cluster of computers would crack a 128-bit key on average in 1,000 years.

Really, Jon? Sure, the claim can be backed up by the ‘exponential growth’ argument and by looking at the results of various distributed cracking projects. But will an encrypted message sent by a Coca-Cola executive, containing the secret formula and encrypted with a 128-bit PGP key, survive brute force attacks 5 years from now? 10 years? 20? 30? Would you wager $23B a year on that? I wouldn’t.
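
For comparison, here is what the ‘exponential growth’ argument looks like as bare arithmetic, under assumptions of my own choosing (they are not Callas’s numbers): suppose attackers could already exhaust 2^56 in a day, and their capability doubles every 18 months.

```python
# each extra key bit doubles the work, so going from "2^56 in a day" to
# "2^128 in a day" requires 128 - 56 = 72 doublings of capability.
extra_bits = 128 - 56
years_per_doubling = 1.5          # a Moore's-law-style guess, nothing more
print(extra_bits * years_per_doubling)   # -> 108.0 years until 2^128 falls in a day
```

Whether those assumptions hold for the next 30 years is, of course, exactly the wager in question.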

Don’t get me wrong, brute force should not be the primary concern of someone securing their system from attack. It’s much easier to find an unpatched network vulnerability, or run a social-engineering attack to get what’s needed. But 2005 was an amazing year for cryptanalysis, with weaknesses found in major hashing algorithms and Chinese crypto experts leap-frogging what we thought was possible in some fields.

My advice? Whenever someone describes ‘impossible’ in terms of planets, atoms or large exponents ask them to give it to you in writing. 10 years later go back to them with a “what were you thinking?”. With some luck, they’ll be rich and famous and you could shame them in public. I’m saving mine for Jon Callas. Modern cryptographic systems are essentially unbreakable? Yeah, and 640k should be enough for anybody.


Plain life is just not random enough

While trying to generate a gpg keypair on a remote server, I discovered I lacked entropy. Eventually I had to physically type on the keyboard in order to generate enough random bytes.
A bit of research led me to the following startling thread on the Linux kernel mailing list, where someone suggested disabling entropy gathering from network cards: http://marc.theaimsgroup.com/?l=linux-kernel&m=114684809230875&w=2
* Note that in the stock kernel version, entropy is still gathered from network cards.

I see this as an extremely bad move. ‘Headless’ servers with no keyboard and mouse have very few ways to gather entropy.
Web servers are an extreme example: there are few disk events to contribute to the entropy pool, while on the other hand SSL connections require a lot of randomness.

This decision, if indeed accepted, is completely absurd. If someone decides to drop network cards as a source for random number generation, at least leave it as an option – a kernel module parameter, a /proc entry or something. Why just diff it out?

To make things worse, Intel used to provide an onboard random number generator. That initiative was torpedoed, and the chip no longer exists on modern boards. There goes another source of entropy out the window.

Modern servers require more sources of entropy than ever. We use VPNs, SSH and HTTPS. Let’s face it, SSL is ubiquitous.

As an example, run 4 simultaneous ssh connections to a dedicated web server for some time (at least 4-5 hours), then try to generate a GPG keypair. 9 times out of 10 you’ll be out of entropy.
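
To see it happen, you can watch the kernel’s entropy pool directly; a minimal sketch (Linux only, the /proc path is standard, the 200-bit threshold is just an arbitrary marker):

```python
# watch the kernel's entropy pool drain (Linux: /proc/sys/kernel/random/entropy_avail)
import time

def entropy_avail() -> int:
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        return int(f.read())

while True:
    bits = entropy_avail()
    note = "  <-- gpg key generation will likely block around here" if bits < 200 else ""
    print(f"{time.strftime('%H:%M:%S')}  entropy: {bits:4d} bits{note}")
    time.sleep(5)
```

Run it in one terminal while generating a GPG key (or holding a few SSH sessions open) in another, and the numbers tell the story.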

Suggested solutions like gathering entropy from the sound card don’t cut it for production servers.
There are of course dedicated PCI cards: http://www.broadcom.com/collateral/pb/5802-PB05-R.pdf
http://www.idquantique.com/products/quantis.htm

But then we could also ask for a Schrödinger’s cat that sits in a conveniently located alternate universe to establish SSL handshakes for us.

Attacks on PRNGs are well documented. Today no one believes that clock interrupts are cryptographically random. For example, look at: http://www.gutterman.net/publications/GuttermanPinkasReinman2006.pdf

I would love to hear your opinions and suggestions from a security point of view.


Fiber-Optics Wiretaps: ISP Logistics, Technology and Security Analysis of the NSA’s Operation

us-based folks may be more interested in the privacy implications of the recent at&t/nsa “gate”. i am too, but what interests me even more is the detailed technology disclosed on how at&t implemented sniffing on fiber optics, how isps handle the logistics of answering legal wiretap requests, and the possible security failure points in the nsa’s operation (if indeed it was theirs).

why the nsa did good
it’s been known for years that listening in on optical lines is possible. it has been known for years that the nsa listens to the internet. it has been known for years that much of the internet’s backbone sits in the us, and that at&t is a big part of that. it’s been known that us citizens also use the internet.

no one really wrote about listening in on optical lines until now, or about how it is done, but my most serious reply to that is carbon-copied from a friend: duh.

how else did american citizens expect the nsa to do this? there are naturally other ways, which we will not discuss today, but the backbone sits on american soil – are you telling me the nsa should not use it? that is just plain silly.

the nsa’s mandate, as far as i understand it, especially after the 70s fiascos, is sigint on everything except us citizens/companies/etc. i bet it is very difficult to filter out such possible domestic communication, but that is why they have such brilliant minds working for them. which brings us to the fbi and carnivore –

why the fbi f*cked up working with isps
i should probably point out that if i were a major isp, often asked to answer the call of law enforcement with legal wiretaps, this could be very annoying as well as a technological killer for my network architecture.
just sticking some hub somewhere in my network may not cut it, and will certainly not cover all of the communication. what about different lines and locations?

as a large provider, at&t probably had to find better solutions for answering the call of the law, or else rely on law enforcement’s own technology and risk it killing their business.

this has indeed happened before. according to one nanoger at the fbi’s carnivore presentation a few years ago, “sticking” just such a hub is what caused his network to break down.

creating a centralized wiretapping point under strict security may be just the thing to both comply and save costs, not to mention stay on the air.

the technology
unlike copper lines, where you can use the em emissions to “listen in” on the lines, or even cut them and connect them to a sniffer, with fiber optics you simply can’t. as you are probably aware, optical lines work by transmitting light. in order to listen in on that communication, one must somehow see some of that light.
without going too much into how this actually works, the protocols using this layer-1 and layer-2 optical hardware beam a lot of redundant light, which bounces off the “walls” of the fiber in different directions until at least one of the beams in the data stream reaches the next repeater/switching point/routing point. a single sustained beam of light is often used in bigger pipes, but these also have a lot of redundancy.

being able to use one photon for each bit of data is what everyone wants to do, but it isn’t happening quite yet outside the lab. this will get even more interesting in the future with quantum cryptography.

in this paper released by wired, detailing the spying operation from the perspective of an at&t employee, there are also a couple of other documents attached which detail the network architecture at&t used to enable the sniffing, as well as some interesting information from a related “legal wiretapping” technology conference, iss world.

operational f*k-ups?
ignoring the privacy and us legal issues for a moment, the nsa does not seem stupid enough to me to trust the operation and technology to be developed by a third-party, local organization.
my guess is that at&t was asked to prepare the infrastructure from which the nsa could run its own gear, perhaps even under certain guidelines, conditions and rules (such as security clearances for employees and key-pad combination locks, as the paper mentions).

writing a paper about it so that it can be recreated seems like a good idea.

a security issue which comes to mind here is how the information was handled. this reminds me of an incident in israel where ibm was contracted to do a certain job on the arrow anti-missile project, and some of the legacy code in the system turned out to have originally been developed in ibm’s egypt office. this was a serious security concern for the israeli military industry, and was the result of a lack of supervision over third-party contractors.

i don’t see “top secret” on the at&t document, which would at least have meant this was supposed to stay quiet. if it was, then at&t obviously wasn’t following the nsa’s wishes on security very closely. we do see “at&t proprietary” and “use pursuant to company instructions” on some of the pages.

on the physical security level, the “secret” room used for the spying seems to have been run in a somewhat paranoid security mode, with quite a few physical security measures, probably by nsa decree… so i don’t know where the security breach occurred. but was this document supposed to be released? if not, who is at fault – at&t, the nsa or a traitor?

maybe none of the above. this doesn’t seem like a security breach to me.

i tend to believe this information was not a secret, but just a technical solution to a business problem: complying with a potentially hazardous technical requirement of the law.

it is possible, although unlikely, that the nsa decided the existence of the physical wiretap was not a secret (hey, congressional hearings?), nor was the fact that fiber optics can be sniffed. if that is the case, i see no security implications here either.
however, if everything but the existence of the room was supposed to be a secret, from what happens there (physical wiretapping for sigint purposes) to how (breaking into the optical line), then security was indeed breached.

was this breach critical? not in the slightest.

i doubt the nsa, as a serious western intelligence organization, and a secretive one at that, would want even that much known. still, we don’t know what technology they used to gather the data, how the information was processed, how and where it was stored, or where it was relayed to. nor do we know which of it was actually seen by a human. we don’t know what their interest was, except for a vague indication of “terrorism”.

it seems this was run smoothly after all, and that we, for lack of information, rush to the wrong conclusions.

my opinion
privacy implications… what exactly was done with the wiretap, etc.? we don’t know. it is far from me to even guess. it is well within the realm of possibility that it was all used legally, but the infrastructure needed to exist for that. i am sure the various investigative bodies that look into it will come to some sort of conclusions, and find some scapegoats if indeed something evil was done.
they will probably even look into better monitoring of what the nsa does (i.e. more people in the know).

i don’t know much about the particulars of this case, nor what president bush instructed. that is for the high-paranoia privacy guys in the us to find out.

i doubt the nsa, fbi and others have any reason of their own to spy on, or allow spying on, us citizens and/or businesses. then again, i am not a us citizen – what do i know?

i know about logistics at network service providers, the business need to stay on the air, and the problems of complying with such requests. i also know such wiretapping is possible, and i know that the backbone sits on us soil.

what else do i need to know, except that every other country in the world tries the same thing? well, that the internet is not a secure medium and that people need to secure themselves. the surprise people sometimes show shocks me.

gadi evron,
ge@beyondsecurity.com.


Bypassing SSL in Phishing

here is a bit of “new stuff” (now old) that is becoming partially public via our friends at f-secure, and it is very disturbing.

rootkits, ssl and phishing:

haxdoor is one of the most advanced rootkit malware out there. it is a kernel-mode rootkit, but most of its hooks are in user-mode. it actually injects its hooks to the user-mode from the kernel — which is really unique and kind of bizarre.

so, why doesn’t haxdoor just hook system calls in the kernel? a recent secure science paper has a good explanation for this. haxdoor is used for phishing and pharming attacks against online banks. pharming, according to anti-phishing working group (apwg), is an attack that misdirects users to fraudulent sites or proxy servers, typically through dns hijacking or poisoning.

we took a careful look at backdoor.win32.haxdoor.gh (detection added 31 jan, 2006). it hooks http functionality, redirects traffic, steals private information, and transmits the stolen data to a web-server controlled by the attacker. most (all?) online banks use ssl encrypted connections to protect transmissions. if haxdoor would hook networking functionality in the kernel, it would have hard time phishing since the data would be encrypted. by hooking on a high-enough api level it is able to grab the data before it gets encrypted. apparently haxdoor is designed to steal data especially from ie users, and not all tricks it plays work against, for example, firefox.
http://www.f-secure.com/weblog/archives/archive-022006.html#00000821
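
as a toy illustration of the principle (not haxdoor’s actual mechanism, which hooks windows user-mode networking apis from a kernel driver), here is why intercepting at a high enough api level defeats transport encryption – the hook sees the buffer before it is ever encrypted:

```python
# toy model only: a "banking app" calls send_secure(), which encrypts and sends.
# a hook installed *above* the encryption layer sees the plaintext; a hook at
# the socket layer below it would only ever see ciphertext.
import hashlib

def encrypt(data: bytes) -> bytes:
    # stand-in for ssl: the details don't matter, only that the output is opaque
    return hashlib.sha256(data).digest()

def socket_send(data: bytes):
    print("on the wire (what a socket-level hook sees):", data.hex()[:32], "...")

def send_secure(data: bytes):
    socket_send(encrypt(data))

# the "malware": wrap the high-level api before the app uses it
_original_send_secure = send_secure
def hooked_send_secure(data: bytes):
    print("captured before encryption:", data.decode())
    _original_send_secure(data)
send_secure = hooked_send_secure

# the "banking app" submits a login form
send_secure(b"user=alice&pin=4921")
```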

financial organizations that rely on encryption for the security of web transactions can contact me for details on who to actually contact for answers, if they haven’t been contacted by now – as this is the least of their worries.

gadi evron,
ge@beyondsecurity.com.

[corrected the title from: bypassing ssh in phishing]


Cracking WEP with KisMac

Granted, this is only done against a 40-bit WEP key, which would explain why it takes only 10 minutes to obtain the key, but it is still pretty good going either way. Plus, you don’t have to keep changing tools to eventually obtain the WEP key.

The video is a bit blurry, but if you’ve got a Mac and have KisMac installed, it’s really not difficult to make out what’s going on at all.

A really worthwhile watch!

http://www.ethicalhack.org/videos.php


Challenge: how many combinations? (car immobilizer)

in my car i have a combination-lock immobilizer.

the combination lock is built like so:
[] [] [] []

four keys. each key, however, holds more than just one digit:
[123] [45] [67] [890]

so a combination might be:
1234
2579
etc.

now, one might think that this can be used to reduce the number of attempts needed to brute-force the device, as the same key would be pressed for several different digits.

some questions:
1. what is the minimum number of combinations needed to brute-force the lock? how so?

2. the lock “locks up” for a few minutes (say, 3) after 3 failed attempts. how would this affect your method, if at all?

this is a rather easy one, but the answer may surprise you. i will post a follow-up once it has been solved to my satisfaction.

solutions are to be emailed to me directly rather than posted in a comment. :)

gadi evron,
ge@beyondsecurity.com.


Pandora.com’s box

Recently I discovered and tuned in to the Pandora service. It’s really easy and fun to use this online music provider.

Once you are registered, you can choose the music you want to listen to by creating “channels”, each identified by an artist or a song name. The selected channel then streams music that the Music Genome Project classifies as similar to what you chose when creating the channel. For each song you can further indicate whether you like it or not, which in turn fine-tunes the channel.

I loved Pandora from the first minute I heard its music, and the first thing I noticed was that I no longer get those annoying pauses when one of my colleagues decides to download something huge. This actually amazed me: it was the first time I got streaming music of this quality without those creepy noises. Well, I had to check how they did it.

I fired up a sniffer, looking for the incoming traffic going to Pandora’s player. I was pretty amazed to discover that the player sends plain HTTP GET requests to download the songs in mp3 format. This means that the player does not really stream the music; it downloads it and then plays it.

The next step was to open the Live HTTP headers Firefox plugin to grab the GET requests that download these mp3s :evil: .

Well, because I am a person who wears the right colored underwear, I dropped a mail to the support dudes at Pandora.com.

Recently we discovered a security flaw in the Pandora service your company provides.

Pandora’s flash player sends an HTTP GET request to retrieve music it plays in mp3 files. Those links are static and do not require any kind of authorization to retrieve the files. Sniffing network traffic it is possible to get those links, thus revealing the static location of the mp3 file.

The impact of this problem is that it allows users to store music locally and to share music with others (even non Pandora.com users) by sending / posting the links.

Looking forward to your response.
…..

The response was along the lines of: the flaw is not actually a flaw, rather it is a known feature :mad: .

Thanks for the heads up. We’re aware of this issue. Actually, the URL will only work for a short period of time while the song downloads, so its impossible to post them for others later.
…..

I stated that the links are static, and the links grabbed when sending the notification still look valid to me. Should I convince the vendor that I’m right? Naaw, I’ll just blog it ;) . So actually you can share songs too, not only channels.
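
For anyone who wants to check the vendor’s claim themselves, here is a minimal sketch; the URL below is a made-up placeholder for one actually grabbed with Live HTTP headers:

```python
# re-test a captured Pandora mp3 URL over time to see whether it really expires.
# the URL is a hypothetical placeholder; substitute one captured from the player.
import time
import urllib.error
import urllib.request

CAPTURED_URL = "http://audio.example.pandora.com/some/track.mp3?token=PLACEHOLDER"

def still_valid(url: str) -> bool:
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False

for hours_passed in range(0, 25, 6):          # probe every 6 hours for a day
    print(f"after ~{hours_passed}h: valid = {still_valid(CAPTURED_URL)}")
    time.sleep(6 * 3600)
```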

P.S. The URI of the GET request contains a long field named “token”, which looks encrypted, base64 and URL encoded to me. It would be interesting if somebody invested the time to decompile the Flash player to see whether it is possible to download any of the 300,000 songs directly. Who knows, maybe they use a Blowfish cipher with a static key? :evil: .
Anyway, if somebody does, please keep us informed.


Cryptoogle – Google One Time Pad Encryption

Cryptoogle is a new kind of encryption developed by frozenbill, JaggedEdges or Gnome (whichever you know him by, it’s all the same person). Cryptoogle is designed to serve as an algorithm for securing data and putting a time-bomb on it. Whatever key you choose gets put through a Google query. Cryptoogle then assembles the results and uses this compilation as the key in a Blowfish cipher over whatever data you want to hide. The decrypter works exactly in reverse. This simple algorithm can protect your data far more than a normal cipher can.
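
A minimal sketch of the scheme as described (an illustration only: fetch_results() fakes the Google scrape so the example runs, and the key derivation shown is my guess, not the actual Cryptoogle code):

```python
# sketch of the described scheme: search results -> symmetric key -> Blowfish.
import hashlib
from Crypto.Cipher import Blowfish            # pip install pycryptodome

def fetch_results(query: str) -> str:
    # hypothetical stand-in for scraping the current Google results for `query`;
    # faked here so the sketch runs end to end
    return "result snippets for: " + query

def derive_key(query: str) -> bytes:
    # compress the (large, changing) results page into a fixed-size cipher key
    return hashlib.sha256(fetch_results(query).encode()).digest()[:32]

def pad(data: bytes) -> bytes:
    n = 8 - len(data) % 8                     # Blowfish works on 8-byte blocks
    return data + bytes([n]) * n

def encrypt(query: str, plaintext: bytes) -> bytes:
    return Blowfish.new(derive_key(query), Blowfish.MODE_ECB).encrypt(pad(plaintext))

def decrypt(query: str, ciphertext: bytes) -> bytes:
    # only works while Google still returns the same results for the query
    data = Blowfish.new(derive_key(query), Blowfish.MODE_ECB).decrypt(ciphertext)
    return data[:-data[-1]]

ct = encrypt("some obscure search phrase", b"attack at dawn")
print(decrypt("some obscure search phrase", ct))
```

Once the results for the query change, derive_key() produces a different key and decryption fails – that is the “time-bomb”.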

Basically, the results returned by Google are used as a one-time pad. But since the results found now and those found in a few minutes/hours/days will differ, the original pad will eventually be lost. How effective is it? Very: this kind of temporary one-time pad seems to be quite secure, though its content is not very random. How long does it last? My tests varied depending on the key used for the Google search: words like ‘google’ or ‘microsoft’ lasted at least a few hours (they may still work now), while more common words, like names of people, lasted only a few minutes.

BTW: I have seen more than once that the key didn’t work on one attempt while it worked on another… Google’s results appear to vary from request to request… could it be that the ads are affecting this as well?

I also noted that writing a long sentence as the key, i.e. causing Google to return a single hit, was most effective in keeping decryption working hours after the encryption took place.

(Thanks to WhiteAcid for pointing out this website)
