Hacking TiVO, PS2, Palm, GPRS, or riding bikes


Keyless Entry Using Your Phone.

1) I keep telling people, the next security risk is the next technology that is there solely for “convenience.”

2) So, your credit cards are going to be in your cell, your bank access is going to be in your cell, your car keys are going to be in your cell, your house keys are going to be in your cell …  All your eggs in one basket–one that gets dropped in the toilet, left in coats, falls between couch cushions, gets picked up in bars …

3) You can even unlock it remotely, so social engineering is on the table (“Hey, Mr. iPhone User, we’re from the gas company, and your neighbours are reporting a strong smell from your place, any way you could come back here from your conference on the other coast we found out about from your Facebook account and let us in?”)

4) You could use Wifi at close range, but for remote use it probably has to have a unit that hooks up to your phone.  (I suppose another option is to have the locking device be a cellular device, but that seems excessive.)  So, as was mentioned, you have to worry about power outages.  There is also interference from other Wifi devices, portable phones, cell phones, microwave ovens …
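If the lock and phone do end up talking over a network, replay of a sniffed "unlock" message becomes one more thing to worry about. A minimal sketch of how such a protocol could resist replay (a fresh challenge per attempt, answered with an HMAC over a pre-shared key); all names here are illustrative, not any real product's design:

```python
import hmac
import hashlib
import secrets

class DoorLock:
    """Hypothetical lock unit: issues one-time challenges, checks answers."""

    def __init__(self, shared_key: bytes):
        self._key = shared_key
        self._pending = None  # challenge awaiting an answer

    def challenge(self) -> bytes:
        # Fresh random nonce per attempt, so a sniffed response
        # cannot be replayed later.
        self._pending = secrets.token_bytes(16)
        return self._pending

    def unlock(self, response: bytes) -> bool:
        if self._pending is None:
            return False
        expected = hmac.new(self._key, self._pending, hashlib.sha256).digest()
        self._pending = None  # single use, even on failure
        return hmac.compare_digest(expected, response)

def phone_answer(shared_key: bytes, challenge: bytes) -> bytes:
    # What the phone (or its add-on unit) computes and sends back.
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()
```

Note that none of this helps with the failure modes above: a lock with no power, or a radio channel drowned out by the microwave oven, simply does not answer.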

Hiring droids – “Would you like coffee breaks with that?”

What is true of teachers is also true for recruiters.

I am old enough to have gone through group interviews, hostile interviews, video interviews, multi-part phone interviews, questionnaire interviews, weird question interviews, “what do you want to be when you grow up” interviews, and all the other “latest and greatest” ideas that swept through HR-land at one time or another.  I understand the intents of the various processes, and what they will and won’t tell you.  (When I do recruiting myself, I use the “prepared” interview model–know what it is you want, and how to find out if the candidate has it.)

So, apparently the next big thing in recruiting is to use technology.  Use robots.  (Well, actually just avatars and virtual game worlds.)  Use computerized questionnaires.  (They work just as well, and as badly, as paper ones.)  Use video.  (Wait.  We did that already.  Oh, I see, use videotape.)

It doesn’t take too long to see what the intent is here.  To save time and money.

And, doing it cheaper will work out just as well as doing it cheaper always has.

“There is hardly anything in the world that some man cannot make a little worse and sell a little cheaper, and the people who consider price only are this man’s lawful prey.”        – John Ruskin

Quick way to find out if your account has been hacked?

In the wake of the recent account “hacks,” and fueled by the Yahoo (and, this morning, Android) breaches, an outfit called Avalanche (which seems to have ties to, or be the parent company of, the AVG antivirus) has launched a site that will check whether your email address appears in known breaches.

They are getting lots of press.

“If you don’t know, a website called will
tell you. Just enter your email—they won’t store your address unless
you ask them to—and click the button that says, “Check it.” If your
email has been associated with any of a large and ever-growing list
of known password breaches, including the latest Yahoo hack, the
site will let you know, and advise you to change it right away.”

Well, I tried it out, with an account that gets lots of spam anyway.  Lo and behold, that account was hacked!  Well, maybe.

(I should point out that, possibly given the popularity of the site, it is pig slow at the moment.)

The address I used is one I tend to give to sites, like recruiters and “register to get our free [fillintheblank]” outfits, that demand one.  It is for a local community site that used to be a “Free-net.”  I use a standard, low value password for registering on remote sites since I probably won’t be revisiting that site.  So I wasn’t completely surprised to see the address had been hacked.  I do get email through it, but, as noted, I also get (and analyse) a lot of spam.

When you get the notification, it tells you almost nothing.  Only that your account has been hacked, and when.  However, you can find a list of breaches, if you dig around on the site.  This list has dates.  The only breach that corresponded to the date I was given was the Strategic Forecasting breach.
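The detective work described above (you get only a date, and have to cross-reference it against the site’s breach list yourself) amounts to a trivial lookup. A sketch, using illustrative placeholder breach names and dates rather than the site’s actual data:

```python
from datetime import date

# Placeholder breach list: (name, published breach date).
# These entries are illustrative, not the site's real records.
breach_list = [
    ("Strategic Forecasting", date(2011, 12, 24)),
    ("Yahoo Voices",          date(2012, 7, 11)),
    ("SomeOtherSite",         date(2012, 6, 6)),
]

def matching_breaches(notified: date, breaches):
    """Return names of breaches whose date matches the notification date."""
    return [name for name, d in breaches if d == notified]
```

With only a date as the join key, of course, two breaches published the same day would be indistinguishable, which is part of why the notification tells you so little.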

I have, in the past, subscribed to Strategic Forecasting.  But only on the free list.  (Nothing on the free list ever convinced me that the paid version was worth it.)  So, my email address was listed in the Strategic Forecasting list.  But only my email address.  It never had a password or credit card number associated with it.

It may be worth it as a quick check.  However, there are obviously going to be so many false positives (like mine) and false negatives (LinkedIn isn’t in the list) that it is hard to say what the value is.

Apple and “identity pollution”

Apple has obtained a patent for “identity pollution,” according to the Atlantic.

I am of not just two, but a great many minds about this.  (OK, admit it: you always knew I was schizophrenic.)

First off, I wonder how in the world they got a patent for this.  OK, maybe there isn’t much in the way of prior art, but the idea can’t possibly be called “non-obvious.”  Even before the rise of “social networking” I was prompting friends to use my “loyalty” shopping cards, even the ones that just gave discounts and didn’t get you points.  I have no idea what those stores think I buy, and I don’t much care, but I do know that they have very little about my actual shopping patterns.

In our advice to the general population in regard to Internet and online safety in general, we have frequently suggested a) don’t say too much about yourself, and b) lie.  Isn’t this (the lying part) exactly what Apple is doing?

In similar fashion, I have created numerous socmed accounts which I never intended to use.  A number of them are simply unpopulated, but some contain false information.  I haven’t yet gone to the point of automating the process, but many others have.  So, yet another example of the US patent office being asleep (Rip-Van-Winkle-level asleep) at the technological switch.

Then there is the utility of the process.  Yes, OK, we can see that this might (we’ll come back to the “might”) help protect your confidentiality.  How can people find the “you” in all the garbage?  But what is true for advertisers, spammers, phishers, and APTers is also true for your friends.  How will the people who you actually *want* to find you, find the true you among all the false positives?

(Here is yet another example of the three “legs” of the security triad fighting with each other.  We have endless examples of confidentiality and availability working against each other: now we have confidentiality and integrity at war.  How do you feel, in general, about Apple recommending that we create even more garbage on the Internet than is already there?)

(Or is the fact that it is Apple that is doing this somehow appropriate?)

OK, then, will this work?  Can you protect the confidentiality of your real information with automated false information?  I can see this becoming yet another spam/anti-spam, CAPTCHA/CAPTCHA recognition, virus/anti-virus arms race.  An automated process will have identifiable signs, and those will be detected and used to ferret out the trash.  And then the “identity pollution” (a new kind of “IP”?) will be modified, and then the detection will be modified …
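The “identifiable signs” side of that arms race can be illustrated with a toy heuristic. This is purely a sketch of the idea (a real detector would be statistical, and the field names are my own invention): automatically generated decoy accounts tend to share mechanical regularities, which is precisely what makes them fingerprintable.

```python
import re

def looks_generated(profile: dict) -> bool:
    """Toy check for two mechanical signs of a generated decoy account.
    Field names ('username', 'bio') are assumptions for the sketch."""
    name = profile.get("username", "")
    bio = profile.get("bio", "")
    # Sign 1: username is word-plus-digit-run, a common generator template.
    templated = bool(re.fullmatch(r"[a-z]+\d{4,}", name))
    # Sign 2: bio is empty or a stock phrase reused across many accounts.
    stock_bio = bio.strip().lower() in {"", "i love life", "just here to browse"}
    return templated and stock_bio
```

And then, as noted, the generators would randomize usernames and vary the bios, the detectors would move to timing and graph analysis, and around we go.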

In the meantime, masses of bandwidth and storage will be consumed.  Socnet sites will be filled with meaningless accounts.  Users of socmed sites will be forced to spend even more time winnowing out those accounts not worth following.  Socnet companies will be forced to spend more on storage and determination of false accounts.  Also, their revenues will be cut as advertisers realize that “targeted” ads will be less targeted.

Of course, Apple will be free to create a social networking site.  They already have created pieces of such.  And Apple can guarantee that Apple product users can use the site without the impediment of identity pollution.  And, since Apple owns the patent, nobody else will be able to pollute identities on the Apple socnet site.

(And if Apple believes that, I have a bridge to sell them …)

Phecal photo phorensics

I suppose I really can’t let this one … pass …

Last weekend a young woman fell to her death while on a tandem hang glider ride with an experienced pilot.  The pilot, owner of a company that takes people on hang gliding rides for kicks, promises video of the event: the hang glider is equipped with some kind of boom-mounted camera pointed at the riders.

Somehow the police investigating the incident suspected that the pilot had swallowed the memory card from the video camera.  (Presumably the video was running, and presumably the pilot knew it would show something unfortunate.)  This was later confirmed by x-rays.

So, this week we have all been on “memory card movement” watch.

And it has cr… I mean, come out all right.

Who is responsible?

Galina Pildush ended her LTE presentation with a very good question: “Who is responsible for LTE security?  Is it the users? UE (User Equipment, handsets and devices) manufacturers and vendors?  Network providers, operators and telcos?”

It’s a great question, and one that needs to be applied to every area of security.

In the SOHO (Small Office/Home Office) and personal sphere, it has long been assumed that it’s the user who is responsible.  Long assumed, but possibly changing.  Apple, particularly with the iOS/iPhone/iPad lines, has moved toward a model where the vendor (Apple) locks down the device, and only allows you certain options for software and services.  Not all of them are produced or provided by Apple, but Apple gets vetting responsibilities and rights.

The original “user” responsibility model has not worked particularly well.  Most people don’t know how to protect themselves in regard to information security.  Malware and botnets are rampant.  In the “each man for himself” situation, many users do not protect themselves, with significant consequences for the computing environment as a whole.  (For years I have been telling corporations that they should support free, public security awareness training.  Not as advertising or for goodwill, but as a matter of self defence.  Reducing the number of infected users out there will reduce the level of risk in computing and communication as a whole.)

The “vendor” model, in Apple’s case (and Microsoft seems to be trying to move in that direction) has generated a reputation, at least, for better security.  Certainly infection and botnet membership rates appear to be lower in Macs than in Windows machines, and lower still in the iOS world.  (This, of course, does nothing to protect the user from phishing and other forms of fraud.  In fact, it would be interesting to see if users in a “walled garden” world were slightly more susceptible to fraud, since they were protected from other threats and had less need to be paranoid.)  The model also has significant advantages as a business model, where you can lock in users (and providers, as well), so it is obviously going to be popular with the vendors.

Of course, there are drawbacks, for the vendors, in this model.  As has been amply demonstrated in current mobile network situations, providers are very late in rolling out security patches.  This is because of the perception that the entire responsibility rests with the provider, and they want to test every patch to death before releasing it.  If that role falls to the vendors, they too will have to take more care, probably much more care, to ensure software is secure.  And that will delay both patch cycles and version cycles.

Which, of course, brings us to the providers.  As noted, there is already a problem here with patch releases.  But, after all, most attacks these days are network based.  Proper filtering would not only deal with intrusions and malware, but also issues like spam and fraud.  After all, if the phishing message never reaches the user, the user can’t be defrauded.

So, in theory, we can make a good case that the provider would be the most effective locus for responsibility for security.  They have the ability to address the broadest range of security issues.  In reality, of course, it wouldn’t work.

In the first place, all kinds of users wouldn’t stand for it.  Absent a monopoly market, any provider who tried to provide total security protection would a) incur prohibitively heavy costs (putting pressure on their competitive rates), and b) lose a bunch of users who would resent restrictions and limitations.  (At present, of course, we know that many providers can get away with being pretty cavalier about security.)  The providers would also, as now, have to deal with a large range of devices.  And, if responsibility is lifted from the vendors, the situation will only get worse: vendors will be able to roll out new releases and take even less care with testing than they do now.

In practical terms, we probably can’t, and shouldn’t decide this question.  All parties should take some responsibility, and all parties should take more than they do currently.  That way, everybody will be better off.  But, as Bruce Schneier notes, there are always going to be those who try and shirk their responsibility, relying on the fact that others will not.

LTE Cloud Security

LTE.  Even the name is complex: Long-Term Evolution of the Evolved Universal Terrestrial Radio Access Network.

All LTE phones (UE, User Equipment) are running servers.  Multiple servers.  (And almost all are unsecured at the moment.)

Because of the proliferation of protocols (GSM, GPRS, CDMA, additional 3G and 4G variants, and now LTE), the overall complexity of the mobile/cell cloud is growing.

LTE itself is fairly complex.  The Protocol Reference Model contains at least the GERAN User Plane, UTRAN User Plane, and E-UTRAN User Plane (all with multiple components) as well as the control plane.  A simplified model of a connection request involves at least nine messages involving six entities, with two more sitting on the sides.  The transport layer, SCTP, has a four-way, rather than two-way, handshake.  (Hence the need for all those servers.)  Basically, though, LTE is IP, with a fairly complex set of additional protocols, as opposed to the old PSTN.  The old public telephone network was a walled garden which few understood.  Just about all the active blackhats today understand IP, and it’s open.  It’s protected by Diameter, but even the Diameter implementation has loopholes.  It has a tunnelling protocol, GTP (GPRS Tunnelling Protocol), but, like very many tunnelling protocols, GTP does not provide confidentiality or integrity protection.
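The SCTP four-way handshake mentioned above runs INIT, INIT-ACK (carrying a state cookie), COOKIE-ECHO, COOKIE-ACK.  The point of the extra round trip is that the server stays stateless until the cookie comes back, which defends against SYN-flood-style resource exhaustion.  A simplified simulation of just the message order and cookie check (wire formats and all real SCTP fields omitted):

```python
import hmac
import hashlib
import os

SECRET = os.urandom(32)  # server-side secret protecting cookie integrity

def init_ack(init_tag: int) -> bytes:
    """Server: answer INIT with a MAC'd state cookie; allocate no state yet."""
    blob = init_tag.to_bytes(4, "big")
    return blob + hmac.new(SECRET, blob, hashlib.sha256).digest()

def cookie_echo(cookie: bytes) -> bytes:
    """Client: echo the cookie back unchanged (COOKIE-ECHO)."""
    return cookie

def cookie_ack(echoed: bytes) -> bool:
    """Server: verify the cookie; only on success is the
    association actually set up (COOKIE-ACK)."""
    blob, mac = echoed[:4], echoed[4:]
    return hmac.compare_digest(mac, hmac.new(SECRET, blob, hashlib.sha256).digest())
```

A forged or tampered cookie fails the MAC check, so an attacker spraying INITs never costs the server any per-connection state.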

Everybody wants the extra speed, functions, interconnection abilities, and apps.  But all the functionality means a much larger attack surface.  The total infrastructure involved in LTE is more complex.  Maybe nobody can know it all.  But they can know enough to start messing with it.  From a simple DoS to DDoS, false billing, disclosure of data, malware, botnets of the UEs, spam, SMS trojans, even running down batteries, you name it.

As with VoIP before it, we are rolling our known data vulnerabilities, and known voice/telco/PBX vulnerabilities, into one big insecurity.

Hacking Displays Made Easy

Displays are monitors, right?  Strictly output, right?


DVI and HDMI both support DDC, which allows for display identification and “capability advertisement.”  In other words, the display is sending information to the computer.  HDMI also has capabilities for “content protection,” and even has an Ethernet channel.

And, of course, all these capabilities provide for neat ways to create trouble …  Lots of data comes from the display, and it has to be parsed.  And any time you are parsing data, you are, in a way, following instructions from outside the machine.

(Does anyone’s display programming take care not to trust the data coming from “the display”?  Care to take a guess, based on past experience?)

The data flying back and forth has a definite format: EDID.  There are standards.  Care to guess what can happen when you mess with the EDID data?  (And there are lots of ways it can get messed up unintentionally, starting with a simple KVM switch.)
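To make the parsing point concrete: an EDID base block is 128 bytes, starting with a fixed 8-byte header, and the final byte is a checksum that makes the whole block sum to 0 mod 256.  A minimal sketch of the sanity checks a defensive parser should do before trusting anything further (real parsers go much deeper into the structure, and the deeper they go, the more display-supplied data they trust):

```python
# Fixed EDID base-block header per the VESA EDID specification.
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def edid_block_valid(block: bytes) -> bool:
    """Minimal sanity check for a 128-byte EDID base block:
    correct length, fixed header, and a checksum byte that
    makes all 128 bytes sum to 0 mod 256."""
    if len(block) != 128:
        return False
    if block[:8] != EDID_HEADER:
        return False
    return sum(block) % 256 == 0
```

Even this trivial check would reject much casually corrupted EDID data; the trouble described below comes from parsers that plough on past it into the variable-length descriptors.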

In one case, experimenters were able to shut off system logging.  In another, EDID fuzzing was able to cause instability in the kernel.

(I’ve seen one in my own machine: on Win 7 and this hardware, plugging and unplugging USB devices can shut off the video feed to the display.  In two cases, attempting to recover the display crashed the system, hard.)