Networking

Who is responsible?

Galina Pildush ended her LTE presentation with a very good question: “Who is responsible for LTE security?  Is it the users?  UE (User Equipment, handsets and devices) manufacturers and vendors?  Network providers, operators and telcos?”

It’s a great question, and one that needs to be applied to every area of security.

In the SOHO (Small Office/Home Office) and personal sphere, it has long been assumed that it’s the user who is responsible.  Long assumed, but possibly changing.  Apple, particularly with the iOS/iPhone/iPad lines, has moved toward a model where the vendor (Apple) locks down the device, and only allows you certain options for software and services.  Not all of them are produced or provided by Apple, but Apple gets vetting responsibilities and rights.

The original “user” responsibility model has not worked particularly well.  Most people don’t know how to protect themselves in regard to information security.  Malware and botnets are rampant.  In the “each man for himself” situation, many users do not protect themselves, with significant consequences for the computing environment as a whole.  (For years I have been telling corporations that they should support free, public security awareness training.  Not as advertising or for goodwill, but as a matter of self defence.  Reducing the number of infected users out there will reduce the level of risk in computing and communication as a whole.)

The “vendor” model, in Apple’s case (and Microsoft seems to be trying to move in that direction), has generated a reputation, at least, for better security.  Certainly infection and botnet membership rates appear to be lower on Macs than on Windows machines, and lower still in the iOS world.  (This, of course, does nothing to protect the user from phishing and other forms of fraud.  In fact, it would be interesting to see whether users in a “walled garden” world are slightly more susceptible to fraud, since they are protected from other threats and have less need to be paranoid.)  The model also has significant advantages as a business model, since you can lock in users (and providers as well), so it is obviously going to be popular with the vendors.

Of course, there are drawbacks, for the vendors, in this model.  As has been amply demonstrated in current mobile network situations, providers are very late in rolling out security patches.  This is because of the perception that the entire responsibility rests with the provider, and they want to test every patch to death before releasing it.  If that role falls to the vendors, they too will have to take more care, probably much more care, to ensure software is secure.  And that will delay both patch cycles and version cycles.

Which, of course, brings us to the providers.  As noted, there is already a problem here with patch releases.  But, after all, most attacks these days are network based.  Proper filtering would not only deal with intrusions and malware, but also issues like spam and fraud.  After all, if the phishing message never reaches the user, the user can’t be defrauded.

So, in theory, we can make a good case that the provider would be the most effective locus for responsibility for security.  They have the ability to address the broadest range of security issues.  In reality, of course, it wouldn’t work.

In the first place, all kinds of users wouldn’t stand for it.  Absent a monopoly market, any provider who tried to provide total security protection would a) incur prohibitively heavy costs (putting pressure on their competitive rates), and b) lose a bunch of users who would resent restrictions and limitations.  (At present, of course, we know that many providers can get away with being pretty cavalier about security.)  The providers would also, as now, have to deal with a large range of devices.  And, if responsibility is lifted from the vendors, the situation will only get worse: vendors will be able to roll out new releases and take even less care with testing than they do now.

In practical terms, we probably can’t, and shouldn’t, decide this question.  All parties should take some responsibility, and all parties should take more than they do currently.  That way, everybody will be better off.  But, as Bruce Schneier notes, there are always going to be those who try to shirk their responsibility, relying on the fact that others will not.

LTE Cloud Security

LTE.  Even the name is complex: Long-Term Evolution of Evolved Universal Terrestrial Radio Access Network.

All LTE phones (UE, User Equipment) are running servers.  Multiple servers.  (And almost all are unsecured at the moment.)

Because of the proliferation of protocols (GSM, GPRS, CDMA, additional 3G and 4G variants, and now LTE), the overall complexity of the mobile/cell cloud is growing.

LTE itself is fairly complex.  The Protocol Reference Model contains at least the GERAN User Plane, UTRAN User Plane, and E-UTRAN User Plane (all with multiple components) as well as the control plane.  A simplified model of a connection request involves at least nine messages passing among six entities, with two more sitting on the sides.  The transport layer, SCTP, has a four-way, rather than three-way, handshake.  (Hence the need for all those servers.)  Basically, though, LTE is IP, plus a fairly complex set of additional protocols, as opposed to the old PSTN.  The old public telephone network was a walled garden which few understood.  Just about all the active blackhats today understand IP, and it’s open.  It’s protected by Diameter, but even the Diameter implementation has loopholes.  It has a tunnelling protocol, GTP (GPRS Tunnelling Protocol), but, like very many tunnelling protocols, GTP does not provide confidentiality or integrity protection.
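
To make the GTP point concrete, here is a minimal sketch (in Python, standard library only, with invented values) of the mandatory GTPv1-U header as it goes onto the wire.  Note what isn’t there: no field anywhere for encryption or an integrity check, so any protection has to come from a separate layer such as IPsec.

    import struct

    def build_gtpv1u_header(teid, payload):
        """Build a minimal GTPv1-U message (mandatory header only).

        Layout: flags (1 byte), message type (1 byte), length (2 bytes),
        TEID (4 bytes).  Nothing here provides confidentiality or
        integrity protection for the tunnelled packet.
        """
        flags = 0x30           # version 1, protocol type GTP, no optional fields
        msg_type = 0xFF        # G-PDU: the payload is a tunnelled user packet
        length = len(payload)  # bytes following the mandatory 8-byte header
        return struct.pack("!BBHI", flags, msg_type, length, teid) + payload

    # Invented TEID and payload, purely for illustration
    print(build_gtpv1u_header(0x1234, b"tunnelled user IP packet").hex())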

Everybody wants the extra speed, functions, interconnection abilities, and apps.  But all the functionality means a much larger attack surface.  The total infrastructure involved in LTE is more complex.  Maybe nobody can know it all.  But they can know enough to start messing with it.  From a simple DoS to DDoS, false billing, disclosure of data, malware, botnets of the UEs, spam, SMS trojans, even running down batteries, you name it.

As with VoIP before it, we are rolling our known data vulnerabilities, and known voice/telco/PBX vulnerabilities, into one big insecurity.

Probing mobile (cell) networks

Mobile networks have many disparate types of devices.  You can probably guess what some of them are, or even go to the provider’s store or kiosk and get a list.  But there are going to be more devices out there.  So why not scan the IP addresses on your subnet?

Well, the access points for mobile networks generally don’t allow promiscuous access.  So you may have to go to ARIN and other lists in order to start getting some ranges to check.  You can also check access logs of a Website to find visitors with mobile devices.  (Of course, there is always the NATting that the providers do, not to mention DHCP, and the fact that most mobile devices don’t run servers or services.)
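
As a rough sketch of the access-log approach: assuming a combined-format log at a hypothetical path, and treating the user-agent markers below as nothing more than examples, something like this (Python) will pull out addresses that identified themselves as mobile devices.

    import re
    from collections import Counter

    LOG_FILE = "access.log"   # hypothetical path to a combined-format log
    MOBILE_MARKERS = ("iPhone", "iPad", "Android", "BlackBerry", "Windows Phone")

    # combined format: ip ident user [date] "request" status size "referer" "user-agent"
    LINE_RE = re.compile(
        r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

    mobile_clients = Counter()
    with open(LOG_FILE) as fh:
        for line in fh:
            m = LINE_RE.match(line)
            if m and any(marker in m.group(2) for marker in MOBILE_MARKERS):
                mobile_clients[m.group(1)] += 1

    # The most frequently seen addresses are a starting point for guessing
    # provider ranges -- remembering that NAT and DHCP will muddy the picture.
    for ip, hits in mobile_clients.most_common(10):
        print(ip, hits)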

Colin Mulliner, of the Berlin Institute of Technology, did manage to find a fair amount of interesting stuff.  Windows Mobile tended to be a useful source of open ports and services (usually open FTP services on mobile devices).  He was also able to identify a number of specialized devices from their responses to probes.  Some of the most interesting were mobile access points: connecting to the mobile networks and then providing local wifi for computers.  Others were HTTP servers for surveillance cameras.  Others were GPS tracking devices which, oddly, had no security against “guest” login  :-)  Some were smart meters.  (With smart meters rolling out here in BC, let’s hope they are more secure …)
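
The kind of probing that turns up those open FTP and HTTP services can be as simple as a TCP connect attempt plus a banner grab.  Here is a minimal sketch (Python; the addresses are placeholder TEST-NET values and the port list is only a sample), not Mulliner’s actual tooling.

    import socket

    HOSTS = ["192.0.2.10", "192.0.2.11"]              # placeholder addresses only
    PORTS = {21: "ftp", 80: "http", 8080: "http-alt"}

    def probe(host, port, timeout=2.0):
        """Try a TCP connect; return any banner the service volunteers,
        an empty string if it is open but silent, or None if unreachable."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)
                try:
                    return s.recv(128).decode(errors="replace").strip()
                except socket.timeout:
                    return ""      # open, but waiting for us to talk first (e.g. HTTP)
        except OSError:
            return None            # closed, filtered, or no such host

    for host in HOSTS:
        for port, name in PORTS.items():
            banner = probe(host, port)
            if banner is not None:
                print(f"{host}:{port} ({name}) open, banner: {banner!r}")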

Possibly of concern was the large number of jailbroken iOS devices.  Many of them still had the default “alpine” password.  (If you hack your own device, you’d better be prepared to secure it.)  This could form the basis of a fair-sized worm and/or botnet.  Then again, iOS users aren’t alone here.  An awful lot of people seem to think nothing of creating mobile devices and hooking them up to mobile networks with very little in the way of security.

Smartphone vulnerabilities

Scott Kelly, platform architect at Netflix, gets to look at a lot of devices.  In depth.  He’s got some interesting things to say about smartphones.  (At CanSecWest.)

First of all, with a computer, you are the “tenant.”  You own the machine, and you can modify it any way you want.

On a smartphone, you are not the only tenant, and, in fact, you are the second tenant.  The provider is the first.  And where you may want to modify and customize it, the provider may not want you to.  They’d like to lock you in.  At the very least, they want to maintain some control because you are constantly on their network.

Now, you can root or jailbreak your phone.  Basically, that means hacking your phone.  Whether you do that or not, it does mean that your device is hackable.

(Incidentally, the system architectures for smartphones can be hugely complex.)

Sometimes you can simply replace the firmware.  Providers try to prevent that, sometimes looking at a secure boot system.  This is usually the same as the “trusted computing” (digital signatures that verify back to a key that is embedded in the hardware) or “trusted execution” (operation restriction) systems.  (Both types were used way back in AV days of old.)  Sometimes the providers ask manufacturers to lock the bootloader.  Attackers can get around this, sometimes letting a check succeed and then doing a swap, or attacking write protection, or messing with the verification process as it is occurring.  However, you can usually find easier implementation errors.  Sometimes providers/vendors use symmetric encryption: once a key is known, every device of that model is accessible.  You can also look at the attack surface, and with the complex architectures in smartphones the surface is enormous.
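
As a rough illustration of that class of check, and of the symmetric-key weakness, here is a toy Python sketch of a boot stage being verified against a key “embedded” in the device before it is run.  The key, image, and function names are all invented; no vendor’s actual implementation looks like this.  The comment marks where a check-then-swap attack of the kind just described would land.

    import hashlib
    import hmac

    # Invented value.  A real device burns a key (or the hash of a public key)
    # into hardware; a symmetric scheme like this one means every unit of the
    # model shares the same secret -- extract it once and all are accessible.
    EMBEDDED_KEY = b"per-model-shared-secret"

    def boot(image):
        print("booting %d-byte image" % len(image))

    def verify_and_boot(image, tag):
        """Check the image's MAC against the embedded key, then run it."""
        expected = hmac.new(EMBEDDED_KEY, image, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            raise RuntimeError("verification failed: refusing to boot")
        # An attacker who can rewrite `image` at this point -- after the check
        # succeeds but before it is used -- defeats the whole scheme.
        boot(image)

    firmware = b"\x7fELF...next-stage bootloader..."
    good_tag = hmac.new(EMBEDDED_KEY, firmware, hashlib.sha256).digest()
    verify_and_boot(firmware, good_tag)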

Vendors and providers are working towards trusted modules and trustzones in mobile devices.  Sometimes this is virtual, sometimes it actually involves hardware.  (Personally, I saw attempts at this in the history of malware.  Hardware tended to have inherent advantages, but every system I saw had some vulnerability somewhere.)

Patching has been a problem with mobile devices.  Again, the providers are going to be seen as responsible for ongoing operation.  Any problems are going to be seen as their fault.  Therefore, they really have to be sure that any patch they create is absolutely bulletproof.  It can’t create any problems.  So there is always going to be a long window for any exploit that is found.  And there are going to be vulnerabilities to exploit in a system this complex.  Providers and vendors are going to keep trying to lock systems.

(Again, personally, I suspect that hacks will keep on occurring, and that the locking systems will turn out to be less secure than the designers think.)

Scott is definitely a good speaker, and his slides and flow are decent.  However, most of the material he has presented is fairly generic.  CanSecWest audiences have come to expect revelations of real attacks.