Facebook Clickjacking

Hello folks,

I didn’t imagine someone on *my* friends list would succumb to this attack, but apparently someone did… The URL is http://www.facebook.remove$me.com/pages/br-mhtwknyt-sms-zwkrym-qblw-yk-hw-nyrh-hywm/155174351168661

If you can’t read Hebrew, here is how to fall for the attack: click the right-most box on the page, then click the big purple box with the green writing. You will land on a page of instructions that translates to: “Dear viewer, due to the high number of hits we must make sure you’re human. Press the blue button, then the green, then orange and finally red”. In the lower-left corner, in characters 8px high, a disclaimer says that by clicking these buttons you allow the site to “like” on your behalf and publish to your profile. A Facebook logo completes the picture, making the whole affair look somewhat official. Nice social engineering job.
Firefox’s NoScript plugin successfully prevents the attack and also reveals the hidden UI underneath: the first decoy button conceals a “Like” button, so the attack is self-perpetuating. Does that make it a worm? On one hand, it does self-perpetuate with the aid of an unsuspecting user (much like the user-assisted email worms). On the other hand, it doesn’t copy itself (the payload), so deleting it in one location renders the entire infection void.
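For readers unfamiliar with the mechanics, here’s a minimal sketch of the overlay trick. Everything below – the layout, the sizes, the framed URL – is a hypothetical stand-in I made up, not the actual attack page. The decoy button is what the victim sees; a transparent iframe holding the real “Like” button sits exactly on top of it and receives the click:

```python
# Writes a minimal clickjacking demo page. The layout, sizes and
# framed URL are hypothetical illustrations of the technique,
# not taken from the actual attack page.
DEMO_PAGE = """<!DOCTYPE html>
<html>
<body>
  <!-- The decoy the victim thinks they are clicking. -->
  <button style="position:absolute; top:100px; left:100px;">
    Press the blue button
  </button>

  <!-- A framed third-party "Like" button, positioned exactly over
       the decoy and made fully transparent. The click lands here. -->
  <iframe src="https://example.com/like-button"
          style="position:absolute; top:100px; left:100px;
                 width:120px; height:30px;
                 opacity:0; z-index:10; border:none;"></iframe>
</body>
</html>"""

with open("clickjack_demo.html", "w") as f:
    f.write(DEMO_PAGE)
```

This is also what NoScript picks up on: its ClearClick feature notices that the click is landing on a concealed, framed element and shows you what is really underneath.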

Another, more interesting question is the follow-the-money question: why would the attacker bother with this attack? What is the incentive? The target link appears to be an SEO-built website, so the incentive is presumably higher ranking and therefore higher revenue.

– Arik


Privacy, The Illusion Of

In a recent blog entry, Google announced a 4.5-minute video about search privacy at Google. Let me quote the presenter, Maile Ohye:

“As you can see, logs don’t contain any truly personal information about you.” – Maile

I strongly suggest you watch the clip and form your own opinion. Here is mine:

What Maile neglects to mention is that Google keeps all the queries you submit together, correlated by your cookie – including the account you use to log in to Google, the links you clicked in search results, any site you visited that carries a Google ad, every address you mapped, every product you searched for, every video you watched, and so on – which adds up to a nice profile of your behavior online.

If you slip – once – and search for something personal – the name of someone you know, your home address in Google Maps, a nearby store, your email address – then that information is in your profile too. And if you use a Google account, it doesn’t even matter if you switch computers or expire the cookies.
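To make the correlation point concrete, here’s a toy illustration (the log format and every entry are invented, and this is obviously not Google’s actual pipeline). Each line on its own looks harmless; grouped by cookie ID, the one personal query names the whole profile:

```python
# Toy illustration of profile-building from "anonymous" search logs.
# The log format and all entries are invented for the example.
from collections import defaultdict

log = [
    ("cookie-1f3a", "2007-08-01", "cheap flights to amsterdam"),
    ("cookie-1f3a", "2007-08-02", "symptoms of insomnia"),
    ("cookie-1f3a", "2007-08-03", "best divorce lawyers"),
    ("cookie-1f3a", "2007-08-04", "123 main st springfield"),  # the slip
]

# Group the queries by cookie: the join, not any single line,
# is what turns a log into a profile.
profiles = defaultdict(list)
for cookie, date, query in log:
    profiles[cookie].append(query)

for cookie, queries in profiles.items():
    print(cookie, "->", queries)
```

No single line is “truly personal”; the correlation is.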

I use Google a lot. I have a Google account, and if you looked it up you’d probably learn most of my interests and, in general, a lot about me. I am aware that this is so. It doesn’t stop me from using Google’s services – I like them, and I know that part of what makes them valuable to me is precisely that Google knows a lot about me: what I do, where I go, what I care about. I don’t care, because I do not search with the same account, browser, cookie or IP address for things I don’t want Google to know about. How many people know enough about the Internet to take such measures? Not many, I’d guess.

So back to the clip. The video is market-speak (doublespeak? duckspeak?). It markets privacy as a differentiator for Google’s services and portrays Google’s privacy practices as benign, and in that sense it serves its purpose. The problem, as I see it, is that privacy doesn’t need a lot of marketing – you don’t really need to market your privacy practices. The way I see it, the world is made up of three kinds of people:

1. Those who don’t care about privacy. They just graze wherever the grazing is good and are pretty much oblivious to such concerns. For these people, if you make an appealing product (not even a good product), market it properly and make it cool, they will come. Even if you trample their privacy they will still come, because they don’t care. Reference: iPod. OMG, I’m using a MacBook Pro now. Busted, I guess. People in this group wouldn’t care much even if you had no privacy policy in place at all. Google has already won them over, making Google a household name. Want to increase your market share here? Add a scroll wheel. Oh wait, that’s so early-2000s. Add a touch screen.

2. Those who like their privacy but don’t really know much about privacy or privacy technology. These people are, to an extent, the conspiracy theorists. “Google keeps my email for good, so they must be trying to control my mind! We’re dooooomed! Run away, run away!”. They are, as far as I can tell, a loud but small minority. Sometimes they’re so loud that people from group #1 look up from their pasture, cock their heads to one side, and, well, keep on grazing. Marketing privacy to these people will most likely just compound the conspiracy theories, because you wouldn’t do it unless you had something to hide. They might just as well use Google’s services and perform some token ceremony to make sure Google isn’t watching them, like expiring their cookies or perhaps even cleaning their pages with Greasemonkey. Oh well. I say to Google: let them be. There’s little you can do about it.

3. These are the people who are aware of the implications of using technology and either come to terms with it, or don’t play. I know some people who don’t play, and I can’t blame them. I personally am less hard-core, perhaps, because I agree to make a lot of my life more open to scrutiny in order to reap the benefits. It’s a risk, a managed risk. If there is some way this might come back to haunt me despite the precautions I’ve taken, well, I guess I’ll know it eventually, and I can only blame myself.

Have a doubleplus good day.

Disclaimer: All of the opinions presented here are my own and do not necessarily reflect the opinions of any entity I may be affiliated with.


Skype’s encryption

If you haven’t heard about Skype, go check it out. Skype is a PC-to-PC and PC-to-POTS VoIP application.

On their website, they claim that all calls are encrypted:

Skype uses AES (Advanced Encryption Standard), also known as Rijndael, which is used by U.S. Government organizations to protect sensitive information. Skype uses 256-bit encryption, which has a total of 1.1 x 10^77 possible keys, in order to actively encrypt the data in each Skype call or instant message. Skype uses 1024-bit RSA to negotiate symmetric AES keys. User public keys are certified by the Skype server at login using 1536 or 2048-bit RSA certificates.

To an encryption expert, this quote actually makes sense. If:

  • I am to trust what Skype says here
  • Skype actually implemented what they say they did
  • Skype’s implementation is correct
  • Skype’s implementation is bug-free

then this encryption is pretty good by today’s standards.
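For the curious, what Skype describes is the standard hybrid construction: an asymmetric key (RSA) is used to exchange a symmetric session key (AES), which then encrypts the actual traffic. Here is a minimal sketch of that generic pattern using Python’s cryptography package – my own illustration, emphatically not Skype’s code (which is the whole point of this post), and I’ve used AES-GCM and 2048-bit RSA where Skype’s actual modes and sizes may differ:

```python
# A minimal sketch of the generic RSA+AES hybrid pattern Skype
# describes. This illustrates the construction only; it is not
# Skype's implementation.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The callee holds an RSA key pair (Skype claims 1024-bit;
# 2048-bit is used here as a more current choice).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Caller: pick a random 256-bit AES session key.
# (2**256 is about 1.16e77 -- the "1.1 x 10^77 possible keys" above.)
session_key = AESGCM.generate_key(bit_length=256)

# Caller -> callee: the session key, wrapped under the RSA public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = public_key.encrypt(session_key, oaep)

# Callee: unwrap the session key, then both sides encrypt traffic
# with the shared AES key.
unwrapped = private_key.decrypt(wrapped, oaep)

nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"hello, world", None)
assert AESGCM(unwrapped).decrypt(nonce, ciphertext, None) == b"hello, world"
```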

But there’s no way for me to know. Skype, being closed-source, won’t let me look at their encryption code. For all I know they might not be encrypting at all, or might be doing so in a way that is vulnerable. I have absolutely no way to verify that their encryption is worth anything. For all intents and purposes, my Skype call should be considered clear-text, because for all I know it might as well be.

It all comes back to trust. If you trust Skype, you can accept that your calls are encrypted. If you don’t (and frankly, I have no reason to trust them), you cannot treat Skype conversations as encrypted.

[Originally posted in my blog – Arik]

Update October 22nd:

In a strange coincidence, Skype has just published this blog entry about an outside review of their system.

While this is laudable, I cannot see how it improves the security of their system. The evaluation may well be accurate for the piece of source code that was analyzed, but we know absolutely nothing about the security of the binary that actually runs on our systems. We can’t look into its code, nor can we do black-box testing with an interoperable client. We have to take Skype at their word that the security evaluation relates to the code running on our computers. In other words, we still need to trust Skype.


Secure by default

It’s not often that I buy stuff off the cuff. My buying habits are relatively conservative, and I usually do a lot of research on equipment before I buy it. This Friday was an exception to the rule – when I saw the WRT54GC at Fry’s for $40, I just couldn’t pass it up. The device is very slender, very nearly pocket-sized, and has a built-in antenna with a jack for an external one, plus five Ethernet ports (one external).
Wireless technology has been in use for nearly a decade now, and securing a wireless network today is relatively easy. Yet as I plug this baby into the socket and hit refresh on the laptop, I see a new network: SSID “linksys”, channel 6, no encryption. Great. A few tweaks later, the device no longer broadcasts its SSID (no, it’s not “linksys” anymore) and only lets you connect if you speak WPA2 to it. And “admin” was a lame administrator password anyway.

Here’s a question for you: how many people actually go through the extra few clicks to secure their wireless device? If this device has sold only 1,000 units, I bet there are now 800 new open wireless networks.

Let’s consider the following imaginary scenario, involving Joe, your average computer user:

  1. Joe buys his new device and connects it to his cable modem, like the manual says
  2. Joe then looks for a wireless network with his laptop. There it is: SSID “linksys”, no encryption
  3. Joe connects to the unencrypted network and tries to browse the web
  4. Joe’s web connection is hijacked to a local web server on the device, which asks him for a six-digit code printed on a sticker on the device

Several interesting things can happen now: maybe Joe can surf the net immediately, while the device sets up a MAC filter for his current MAC address – not very secure, but better than nothing. Or Joe might have to choose a WPA key, and a small signed Java applet would set up his computer with the new key.
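Sketching the idea in code (hypothetical, of course – the real thing would live in the device’s embedded web server, not in Python): until the sticker code is presented, every web request is hijacked to the setup page; once the code matches, the client’s MAC goes on the allow-list:

```python
# A hypothetical sketch of the "sticker code" setup flow as it might
# run on the device. Real firmware would do this in its embedded web
# server; Python's http.server is used here just to show the logic.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

STICKER_CODE = "738591"   # printed on the sticker, next to the MAC
allowed_macs = set()      # the MAC filter, starts empty

def mac_of(client_ip):
    # Stand-in: firmware would look this up in its ARP/DHCP tables.
    return "00:11:22:33:44:55"

class SetupPortal(BaseHTTPRequestHandler):
    def do_GET(self):
        mac = mac_of(self.client_address[0])
        query = parse_qs(urlparse(self.path).query)
        if mac in allowed_macs:
            self.reply("You're online.")
        elif query.get("code", [""])[0] == STICKER_CODE:
            allowed_macs.add(mac)   # open the MAC filter for this client
            self.reply("Code accepted, you're online.")
        else:
            # Hijack everything else to the setup page.
            self.reply("Enter the 6-digit code on the sticker: "
                       "<form><input name='code'></form>")

    def reply(self, body):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SetupPortal).serve_forever()
```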

Now, I’m not Joe, so maybe my perspective is all skewed. Is it really too much to ask of a user to go through a linear, consistent process before his network is set up, ensuring he is running an encrypted, or at least MAC-filtered, network? Is it that much of an annoyance?

Is it more expensive to manufacture? The device already has an individualized sticker with its MAC address on it – I don’t think adding another six digits is much of a hassle – and it already has an embedded web server. Yes, it means some more code.

Disclaimer 1: I know this is still insecure, because Joe still transmits the code over an unencrypted wireless medium. That could be solved with an SSL web server, but even if it stays unencrypted, the window of vulnerability is greatly reduced.

Disclaimer 2: The WRT54GC came with a CD, which I never bothered to take out of its sleeve. I could see no reason to run software on my PC when I could just as well configure the device over the web. Perhaps Joe’s magic one-click access-point securifier is on that CD, and I just didn’t bother to check.

Originally posted in my blog


Drive-by spyware which, well, spies on you

CoolWebSearch (CWS) is an interesting advertising company. What sets it apart from your run-of-the-mill advertising company is that it specifically uses browser vulnerabilities (mainly in Internet Explorer) to install its spyware/adware as the user browses one of its numerous affiliate sites – no user interaction whatsoever. The industry has dubbed this type of activity “drive-by installation”.

It appears that an invisible boundary has now been crossed, with a CWS variant that specifically collects keystrokes, and potentially other information, and posts them to an Internet server.

Check out the Sunbelt Software blog, and remember: this is just a blog post and the alleged research results of a single person. Don’t jump to conclusions – not just yet.

Correction: CoolWebSearch probably has nothing to do with this; the trojan was found by a researcher during a CWS investigation, but it is not otherwise related to CoolWebSearch.


Bluetooth-equipped cars vulnerable to eavesdropping

It appears to be possible to eavesdrop on Bluetooth-equipped cars. Some car manufacturers have made it especially easy to hook into the built-in Bluetooth audio system and listen in on, or inject audio into, these systems.

The car manufacturers have done two things wrong:

  1. Allow unattended pairing

    Is it really too difficult to turn on pairing only on demand? Every single phone headset I’ve seen pairs only when you press a specific key for some time, or use some other uncommon combination. Why can’t they do the same in the car? This is clearly a case where security trumps ease of use, especially when the difference in the user’s experience is clicking one additional button every time they buy a new phone.

  2. Use a default PIN

    Okay, so the Bluetooth authentication process is not very secure to begin with, but why make life even easier for crackers? Pairing is not something you perform every day – it’s a one-time-per-device process – so adding a digit or two to the PIN and randomizing it won’t scare off any potential users (see the quick arithmetic after this list).
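To put numbers on that last point (my arithmetic, assuming an attacker who simply has to try every PIN): a fixed default PIN gives the attacker exactly one guess, while each randomized digit multiplies the brute-force work by ten:

```python
# PIN keyspace arithmetic: a fixed default PIN means 1 possibility;
# each random digit multiplies the attacker's work by 10.
for digits in (4, 6, 8):
    print(f"{digits}-digit random PIN: {10**digits:,} possible PINs")

# Output:
# 4-digit random PIN: 10,000 possible PINs
# 6-digit random PIN: 1,000,000 possible PINs
# 8-digit random PIN: 100,000,000 possible PINs
```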

I can understand that developers have time-to-market and other constraints, and that security is the easiest thing to give away, but sometimes you just need to put in the extra effort, or your customers will be eavesdropped on. I don’t think the car developers really understood what the stakes were.
