Security unawareness

I really don’t understand the people who keep yelling that security awareness is no good.  Here’s the latest rant.

The argument is always the same: security awareness is not 100% foolproof protection against all possible attacks, so you shouldn’t (it is morally wrong to?) even try to teach security awareness in your company.

This guy works for a security consultancy.  He says that instead of teaching awareness, you should concentrate on audit, monitoring, protecting critical data, segmenting the network, access creep, incident response, and strong security leadership.  (If we looked into their catalogue of seminars, I wonder what we would find them selling?)

Security awareness training isn’t guaranteed to be 100% effective protection.  Neither is AV, audit, monitoring, incident response, etc.  You still use those things even though they don’t guarantee 100% protection.  You should at least try (seriously) to teach security awareness.  Maybe more than just a single four-hour session.  (It’s called “defence in depth.”)

Tell you what: I’ll teach security awareness in my company, and you try a social engineering attack.  You may hit some of my people: people aren’t perfect.  But I’ll bet that at least some of my people will detect and report your social engineering attack.  And your data isolation won’t.


Citizen cyber-protectors?

Marc Goodman (who I believe is FutureCrimes on Twitter and the Web) gave a recent TED talk on trends in the use of high technology in crime.

The 20-minute talk is frightening, with very little in the way of comfort for the protection or security side.  He ends with a call for crowdsourcing of protection.

Now as a transparent society/open source/full disclosure kind of guy, I like the general idea.  But, as someone who has been involved in education, security awareness, and professional security training for some time, I see a few problems.  For crowdsourcing to work, you need a critical mass of at least minimally capable people.  When you are talking about a weather reporting app, that minimal capability isn’t much. When you are talking about detecting cyberwar or bioweapons, the capability levels are a bit different.

Just yesterday the PNWER (Pacific NorthWest Economic Region) conference became the latest to bemoan the lack of trained employees.  I’m rather suspicious of these constant complaints, since I see lots of people out of work.  But the people who are whining about employees are just looking for network admins and such.  We need people with more depth and more breadth in their backgrounds.  I get CISSP candidates in my seminars who are network admins who simply want to know a few ACLs for firewalls.  I have to keep telling them that security professionals need to know more than that.

Yes, I am privileged to be able to meet a number who *are* interested in learning everything possible in order to meet any need or problem.  But, relatively speaking, those are few.  And my sample set tends to be abnormal, in that these are people who have already shown some interest in training (even if only job related).  What Goodman is talking about is the general public.  And those of us who have actually tried security awareness know how little conceptual awareness we have to build on, let alone advanced technical knowledge.

I think awareness, self-protection, and crowdsourcing is probably the only good way to approach the problems Goodman outlines.  I just worry that we have a long way to go.


Quick way to find out if your account has been hacked?

In the wake of the recent account “hacks,” and fueled by the Yahoo (and, this morning, Android) breaches, an outfit called Avalanche (which seems to have ties to, or be the parent company of, the AVG antivirus) has launched https://shouldichangemypassword.com/

They are getting lots of press.

“If you don’t know, a website called ShouldIChangeMyPassword.com will tell you. Just enter your email—they won’t store your address unless you ask them to—and click the button that says, “Check it.” If your email has been associated with any of a large and ever-growing list of known password breaches, including the latest Yahoo hack, the site will let you know, and advise you to change it right away.”

Well, I tried it out, with an account that gets lots of spam anyway.  Lo and behold, that account was hacked!  Well, maybe.

(I should point out that, possibly given the popularity of the site, it is pig slow at the moment.)

The address I used is one I tend to give to sites, like recruiters and “register to get our free [fillintheblank]” outfits, that demand one.  It is for a local community site that used to be a “Free-net.”  I use a standard, low value password for registering on remote sites since I probably won’t be revisiting that site.  So I wasn’t completely surprised to see the address had been hacked.  I do get email through it, but, as noted, I also get (and analyse) a lot of spam.

When you get the notification, it tells you almost nothing.  Only that your account has been hacked, and when.  However, you can find a list of breaches, if you dig around on the site.  This list has dates.  The only breach that corresponded to the date I was given was the Strategic Forecasting breach.

I have, in the past, subscribed to Strategic Forecasting.  But only on the free list.  (Nothing on the free list ever convinced me that the paid version was worth it.)  So, my email address was listed in the Strategic Forecasting list.  But only my email address.  It never had a password or credit card number associated with it.

It may be worth it as a quick check.  However, there are obviously going to be so many false positives (like mine) and false negatives (LinkedIn isn’t in the list) that it is hard to say what the value is.


LinkeDin!

No!  I’m *not* asking for validation to join a security group on LinkedIn!

Apparently several million passwords have been leaked in an unsalted file, and multiple entities are working on cracking them, even as we speak.  (Type?)
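
For anyone wondering why “unsalted” is the key word, here is a minimal sketch of a dictionary attack against such a file, assuming the hashes are unsalted SHA-1 as was widely reported (the “leaked” hashes and wordlist below are made up for illustration).

    import hashlib

    # Hypothetical stand-ins: a few "leaked" unsalted SHA-1 hashes,
    # and a small dictionary of candidate passwords.
    leaked_hashes = {
        hashlib.sha1(b"password1").hexdigest(),
        hashlib.sha1(b"linkedin").hexdigest(),
    }
    wordlist = ["123456", "password1", "letmein", "linkedin"]

    # With no salt, one hash computation per candidate tests it against
    # *every* leaked account at once -- that is what salting would prevent.
    for candidate in wordlist:
        digest = hashlib.sha1(candidate.encode()).hexdigest()
        if digest in leaked_hashes:
            print("cracked:", candidate, "->", digest)

Salting would force the attacker to redo that work for each account; omitting it is why “multiple entities” can make such quick progress.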

So, odds are “low but significant” that your LinkedIn account password may have been cracked.  (Assuming you have a LinkedIn account.)  So you’d better change it.

And you might think about changing the password on any other accounts you have that use the same password.  (But you’re all security people, right?  You’d *never* use the same password on multiple accounts …)


Flaming certs

Today is Tuesday for me, but it’s not “second Tuesday,” so it shouldn’t be patch Tuesday.  But today my little netbook, which is set just to inform me when updates are available, informed me that it had updated, but I needed to reboot to complete the task, and, if I didn’t do anything in the next little while it was going to reboot anyway.

Yesterday, of course, wasn’t patch Tuesday, but all my machines set to “go ahead and update” wanted to update on shutdown last night.

This is, of course, because Flame (aka Flamer, aka sKyWIper) has an “infection” module that messes with Windows/Microsoft Update.  As I understand it, there is some weakness in the update process itself, but the major problem is that Flame “contains” and uses a fake Microsoft digital certificate.

You can get some, but not very much, information about this from Microsoft’s Security Response Center blog.  (An early mention, and a later update.)

You can get more detailed information from F-Secure.

It’s easy to see that Microsoft is extremely concerned about this situation.  Not necessarily because of Flame: Flame uses pretty old technology, only targets a select subset of systems, and doesn’t even run on Win7 64-bit.  But the fake cert could be a major issue.  Once that cert is out in the open it can be used not only for Windows Update, but for “validating” all kinds of malware.  And, even though Flame only targets certain systems, and seems to be limited in geographic extent, I have pretty much no confidence at all that the blackhat community hasn’t already got copies of it.  (The cert doesn’t necessarily have to be contained in the Flame codebase, but the structure of the attack seems to imply that it is.)  So, the only safe bet is that the cert is “in the wild,” and can be used at any time.

(Just before I go on with this, I might say that the authors of Flame, whoever they may be, did no particularly bad thing in packaging up a bunch of old trojans into one massive kit.  But putting that fake cert out there was simply asking for trouble, and it’s kind of amazing that it hasn’t been used in an attack before now.)

The first thing Microsoft is doing is patching MS software so that it doesn’t trust that particular cert.  They aren’t giving away a lot of detail, but I imagine that much midnight oil is being burned in Redmond redoing the validation process so that a fake cert is harder to use.  Stay tuned to your Windows Update channel for further developments.
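
(As a sketch of the general idea only: this is not Microsoft’s actual mechanism, and the thumbprint below is a placeholder, but “distrust by thumbprint” is conceptually what moving a certificate into an untrusted store does.)

    import hashlib

    # Placeholder blocklist: thumbprints (hashes of the DER encoding)
    # of certs that are explicitly distrusted.  The value here is fake.
    UNTRUSTED = {"0" * 40}

    def thumbprint(cert_der: bytes) -> str:
        # A certificate "thumbprint" is just a hash over the DER bytes.
        return hashlib.sha1(cert_der).hexdigest()

    def may_trust(cert_der: bytes) -> bool:
        # Reject a blocklisted cert before any chain validation is tried.
        return thumbprint(cert_der) not in UNTRUSTED

    print(may_trust(b"\x30\x82hypothetical DER bytes"))  # True here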

However, in all of this, one has to wonder where the fake cert came from.  It is, of course, always possible to simply brute force a digital signature, particularly if you have a ton of validated MS software, and a supercomputer (or a huge botnet), and mount a birthday (collision) attack.  (And everyone is assuming that the authors of Flame have access to the resources of a nation-state.  Or two …)  Now the easier way is simply to walk into the cert authority and ask for a couple of Microsoft certs.  (Which someone did one time.  And got away with it.)
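
For a sense of scale on the brute force option, the usual birthday bound applies (a standard result, nothing specific to Flame):

    P(\text{collision}) \approx 1 - e^{-k^{2}/(2N)}

where N is the number of possible digests and k the number of attempts: a collision becomes likely around k ≈ √N = 2^{n/2} trials for an n-bit hash, i.e. roughly 2^{64} for a 128-bit hash like MD5, as against 2^{128} to forge a specific signature outright.  Still an enormous job, but “nation-state enormous” rather than “impossible.”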

But then, I was thinking.  In the not too distant past, we had a whole bunch of APT attacks (APT being an acronym standing for “we were lazy about our security, but it really isn’t our fault because these attackers didn’t play fair!”) on cert authorities.  And the attackers got away with a bunch of valid certs.

OK, we think Flame has possibly been as much as five years in the wild, and almost certainly two.  But it is also likely that there were updates during that period, so it’s hard to say, right off the top, which parts of it were out there for how long.

And I just kind of wonder …


Flame on!

I have been reading about the new Flame (aka Flamer, aka sKyWIper) “supervirus.”

[AAaaaarrrrrrggggghhhh!!!!!!!!  Sorry.  I will try and keep the screaming, in my "outside voice," to a minimum.]

From the Telegraph:

This “virus” [1] is “20 times more powerful” than any other!  [Why?  Because it has 20 times more code?  Because it is running on 20 times more computers?  (It isn't.  If you aren't a sysadmin in the Middle East you basically don't have to worry.)  Because the computers it is running on are 20 times more powerful?  This claim is pointless and ridiculous.]

[I had it right the first time.  The file that is being examined is 20 megabytes.  Sorry, I'm from the old days.  Anybody who needs 20 megs to build a piece of malware isn't a genius.  Tight code is *much* more impressive.  This is just sloppy.]

It “could only have been created by a state.”  [What have you got against those of us who live in provinces?]

“Flame can gather data files, remotely change settings on computers, turn on computer microphones to record conversations, take screen shots and copy instant messaging chats.”  [So?  We had RATs that could do that at least a decade ago.]

“… a Russian security firm that specialises in targeting malicious computer code … made the 20 megabyte virus available to other researchers yesterday claiming it did not fully understand its scope and said its code was 100 times the size of the most malicious software.”  [I rather doubt they made the claim that they didn't understand it.  It would take time to plow through 20 megs of code, so it makes sense to send it around the AV community.  But I still say these "size of code" and "most malicious" statements are useless, to say the least.]

It was “released five years ago and had infected machines in Iran, Israel, Sudan, Syria, Lebanon, Saudi Arabia and Egypt.”  [Five years?  Good grief!  This thing is a pretty wimpy virus!  (Or self-limiting in some way.)  Even in the days of BSIs and sneakernet you could spread something around the world in half a year at most.]

“If Flame went on undiscovered for five years, the only logical conclusion is that there are other operations ongoing that we don’t know about.”  [Yeah.  Like "not reproducing."]

“The file, which infects Microsoft Windows computers, has five encryption algorithms,”  [Gosh!  The best we could do before was a couple of dozen!]  “exotic data storage formats”  [Like "not plain text."]  “and the ability to steal documents, spy on computer users and more.”  [Yawn.]

“Components enable those behind it, who use a network of rapidly-shifting “command and control” servers to direct the virus …”  [Gee!  You mean like a botnet or something?]

 

Sorry.  Yes, I do know that this is supposed to be (and probably is) state-sponsored, and purposefully written to attack specific targets and evade detection.  I get it.  It will be (marginally) interesting to see what they pull out of the code over the next few years.  It’s even kind of impressive that someone built a RAT that went undetected for that long, even though it was specifically built to hide and move slowly.

But all this “supervirus” nonsense is giving me pains.

 

[1] First off, everybody is calling it a “virus.”  But many reports say they don’t know how it got where it was found.  Duh!  If it’s a virus, that’s kind of the first issue, isn’t it?


Words to leak by …

The Department of Homeland Security has been forced to release a list of keywords and phrases it uses to monitor social networking sites and online media.  (Like this one?)

This wasn’t “smart.”  Obviously some “pork” barrel project dreamed up by the DHS “authorities” “team” (“Hail” to them!) who are now “sick”ly sorry they looked into “cloud” computing “response.”  They are going to learn more than they ever wanted to know about “exercise” fanatics going through the “drill.”

Hopefully this message won’t “spillover” and “crash” their “collapse”d parsing app, possibly “strain”ing a data “leak.”  You can probably “plot” the failures at the NSA as the terms “flood” in.  They should have asked us for “help,” or at least “aid.”

Excuse me, according to the time on my “watch,” I have to leave off working on this message, “wave” bye-bye, get some “gas” in the car, and then get a “Subway” for the “nuclear” family’s dinner.  Afterwards, we’re playing “Twister”!

(“Dedicated denial of service”?  Really?)


Ad-Aware

I’ve used Ad-Aware in the past, and had it installed on my machine.  Today it popped up and told me it was out of date.  So, at their suggestion, I updated to the free version, which is now, apparently, called Ad-Aware Free Antivirus+.  It provides for real-time scanning, Web browsing protection, download protection, email protection, and other functions.  Including “superfast” antivirus scanning.  I installed it.

And almost immediately removed it from the machine.

First off, my machine bogged down to an unusable state.  The keyboard and mouse froze frequently, and many programs (including Ad-Aware) were unresponsive for much of the time.  Web browsing became ludicrous.

There are some settings in the application.  For my purposes (as a malware researcher) they were inadequate.  There is an “ignore” list, but I was completely unable to get the program to “ignore” my malware zoo, even after repeated efforts.  (The interface for that function is also bizarrely complex.)  Then again, I’m kind of an atypical user.  The other options, however, would be of little use to anyone.  For the most part they are of the “on or off” level, and provide almost no granularity.  That makes them simple to use, but useless.

I’ve never used Ad-Aware much, but it’s disappointing to see yet another relatively decent tool “improved” into non-utility.


REVIEW: “Dark Market: CyberThieves, CyberCops, and You”, Misha Glenny

BKDRKMKT.RVW 20120201

“Dark Market: CyberThieves, CyberCops, and You”, Misha Glenny, 2011,
978-0-88784-239-9, C$29.95
%A   Misha Glenny
%C   Suite 801, 110 Spadina Ave, Toronto, ON Canada  M5V 2K4
%D   2011
%G   978-0-88784-239-9 0-88784-239-9
%I   House of Anansi Press Ltd.
%O   C$29.95 416-363-4343 fax 416-363-1017 www.anansi.ca
%O  http://www.amazon.com/exec/obidos/ASIN/0887842399/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/0887842399/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0887842399/robsladesin03-20
%O   Audience n Tech 1 Writing 2 (see revfaq.htm for explanation)
%P   296 p.
%T   “Dark Market: CyberThieves, CyberCops, and You”

There is no particular purpose stated for this book, other than the vague promise of the subtitle that this has something to do with bad guys and good guys in cyberspace.  In the prologue, Glenny admits that his “attempts to assess when an interviewee was lying, embellishing or fantasising and when an interviewee was earnestly telling the truth were only partially successful.”  Bear in mind that all good little blackhats know that, if you really want to get in, the easiest thing to attack is the person.  Social engineering (which is simply a fancy way of saying “lying”) is always the most effective tactic.

It’s hard to have confidence in the author’s assessment of security on the Internet when he knows so little of the technology.  A VPN (Virtual Private Network) is said to be a system whereby a group of computers share a single address.  That’s not a VPN (which is a system of network management, and possibly encryption): it’s a description of NAT (Network Address Translation).  True, a VPN can, and fairly often does, use NAT in its operations, but the carelessness is concerning.

This may seem to be pedantic, but it leads to other errors.  For example, Glenny asserts that running a VPN is very difficult, but that encryption is easy, since encryption software is available on the Internet.  While it is true that the software is available, that availability is only part of the battle.  As I keep pointing out to my students, for effective protection with encryption you need to agree on what key to use, and doing that negotiation is a non-trivial task.  Yes, there is asymmetric encryption, but that requires a public key infrastructure (PKI) which is an enormously difficult proposition to get right.  Of the two, I’d rather run a VPN any day.
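
(To illustrate what “agreeing on a key” involves, here is a toy Diffie-Hellman exchange with textbook-sized numbers; this sketch is mine, not the book’s.  The arithmetic is the easy part: nothing in it tells either party *whom* they have just agreed with, which is exactly where the non-trivial work, and the PKI, comes in.)

    import secrets

    # Textbook parameters; real ones are thousands of bits long.
    p, g = 23, 5

    a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

    A = pow(g, a, p)                    # sent in the clear
    B = pow(g, b, p)                    # sent in the clear

    # Both sides now compute the same shared value -- but neither knows
    # who is actually on the other end of the exchange.
    assert pow(B, a, p) == pow(A, b, p)
    print("shared key:", pow(B, a, p))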

It is, therefore, not particularly surprising that the author finds that the best way to describe the capabilities of one group of carders was to compare them to the fictional “hacking” crew from “The Girl with the Dragon Tattoo.”  The activities in the novel are not impossible, but the ability to perform them on demand is highly unlikely.

This lack of background colours his ability to ascertain what is possible or not (in the technical areas), and what is likely (out of what he has been told).  Sticking strictly with media reports and indictment documents, Glenny does a good job, and those parts of the book are interesting and enjoyable.  The author does let his taste for mystery get the better of him: even the straight reportage parts of the book are often confusing in terms of who did what, and who actually is what.

Like Dan Verton (cf BKHCKDRY.RVW) and Suelette Dreyfus (cf. BKNDRGND.RVW) before him, Glenny is trying to give us the “inside story” of the blackhat community.  He should have read Taylor’s “Hackers” (cf BKHAKERS.RVW) first, to get a better idea of the territory.  He does a somewhat better job than Dreyfus and Verton did, since he is wise enough to seek out law enforcement accounts (possibly after reading Stiennon’s “Surviving Cyberwar,” cf. BKSRCYWR.RVW).

Overall, this work is a fairly reasonable updating of Levy’s “Hackers” (cf. BKHACKRS.RVW) of almost three decades ago.  The rise of the financial motivation and the specialization of modern fraudulent blackhat activity are well presented.  There is something of a holdover in still portraying these crooks as evil genii, but, in the main, it is a decent picture of reality, although it provides nothing new.

copyright, Robert M. Slade   2012    BKDRKMKT.RVW 20120201


The speed of “social” …

I made a posting on the blog.

Then I moved on to checking news, which I do via Twitter.  And, suddenly, there in my stream was a “tweet” that, fairly obviously, referred to my posting.  By someone I didn’t know, and had never heard of.  From Indonesia.

This blog now has an RSS feed.  Apparently a few people are following that feed.  And, seemingly, every time something gets posted here, it gets copied onto their blogs.

And, in at least one case, that post gets automatically (and programmatically) posted on Twitter.

I would never have known any of this, except that the posting I had made was in reference to something I had found via those stalwarts at the Annals of Improbable Research.  I had made reference to that fact in the first line.  The application used to generate the Twitter posting copies roughly the first hundred characters of the blog post, so the Improbable Research account (pretty much automatically) retweeted the programmed tweet of the blog posting that copied my original blog posting.  I follow Improbable Research on Twitter, so I got the retweet.

This set me to a little exploration.  I found, checking trackbacks, that every one of my postings was being copied to seven different blogs.  Blogs run by people of whom I’d never heard.  (Most of whom don’t seem to have any particular interest in infosec, which is rather odd.)

Well, this blog is public, and my postings are public, so I really can’t complain when the material goes public, even if in a rather larger way than I originally thought.  But it does underline the fact that, once posted on the Internet, it is very unsafe to assume that any information is confidential.  You can’t delete data once it has passed to machines beyond your control.

And it passes very, very fast.


Who is responsible?

Galina Pildush ended her LTE presentation with a very good question: “Who is responsible for LTE security?  Is it the users?  UE (User Equipment, handsets and devices) manufacturers and vendors?  Network providers, operators and telcos?”

It’s a great question, and one that needs to be applied to every area of security.

In the SOHO (Small Office/Home Office) and personal sphere, it has long been assumed that it’s the user who is responsible.  Long assumed, but possibly changing.  Apple, particularly with the iOS/iPhone/iPad lines, has moved toward a model where the vendor (Apple) locks down the device, and only allows you certain options for software and services.  Not all of them are produced or provided by Apple, but Apple gets vetting responsibilities and rights.

The original “user” responsibility model has not worked particularly well.  Most people don’t know how to protect themselves in regard to information security.  Malware and botnets are rampant.  In the “each man for himself” situation, many users do not protect themselves, with significant consequences for the computing environment as a whole.  (For years I have been telling corporations that they should support free, public security awareness training.  Not as advertising or for goodwill, but as a matter of self defence.  Reducing the number of infected users out there will reduce the level of risk in computing and communication as a whole.)

The “vendor” model, in Apple’s case (and Microsoft seems to be trying to move in that direction) has generated a reputation, at least, for better security.  Certainly infection and botnet membership rates appear to be lower in Macs than in Windows machines, and lower still in the iOS world.  (This, of course, does nothing to protect the user from phishing and other forms of fraud.  In fact, it would be interesting to see if users in a “walled garden” world were slightly more susceptible to fraud, since they were protected from other threats and had less need to be paranoid.)  The model also has significant advantages as a business model, where you can lock in users (and providers, as well), so it is obviously going to be popular with the vendors.

Of course, there are drawbacks, for the vendors, in this model.  As has been amply demonstrated in current mobile network situations, providers are very late in rolling out security patches.  This is because of the perception that the entire responsibility rests with the provider, and they want to test every patch to death before releasing it.  If that role falls to the vendors, they too will have to take more care, probably much more care, to ensure software is secure.  And that will delay both patch cycles and version cycles.

Which, of course, brings us to the providers.  As noted, there is already a problem here with patch releases.  But, after all, most attacks these days are network based.  Proper filtering would not only deal with intrusions and malware, but also issues like spam and fraud.  After all, if the phishing message never reaches the user, the user can’t be defrauded.

So, in theory, we can make a good case that the provider would be the most effective locus for responsibility for security.  They have the ability to address the broadest range of security issues.  In reality, of course, it wouldn’t work.

In the first place, all kinds of users wouldn’t stand for it.  Absent a monopoly market, any provider who tried to provide total security protection would a) incur prohibitively heavy costs (putting pressure on their competitive rates), and b) lose a bunch of users who would resent restrictions and limitations.  (At present, of course, we know that many providers can get away with being pretty cavalier about security.)  The providers would also, as now, have to deal with a large range of devices.  And, if responsibility is lifted from the vendors, the situation will only get worse: vendors will be able to roll out new releases and take even less care with testing than they do now.

In practical terms, we probably can’t, and shouldn’t, decide this question.  All parties should take some responsibility, and all parties should take more than they do currently.  That way, everybody will be better off.  But, as Bruce Schneier notes, there are always going to be those who try and shirk their responsibility, relying on the fact that others will not.


LTE Cloud Security

LTE.  Even the name is complex: Long-Term Evolution of the Evolved Universal Terrestrial Radio Access Network.

All LTE phones (UE, User Equipment) are running servers.  Multiple servers.  (And almost all are unsecured at the moment.)

Because of the proliferation of protocols (GSM, GPRS, CDMA, additional 3G and 4G variants, and now LTE), the overall complexity of the mobile/cell cloud is growing.

LTE itself is fairly complex.  The Protocol Reference Model contains at least the GERAN User Plane, UTRAN User Plane, and E-UTRAN User Plane (all with multiple components) as well as the control plane.  A simplified model of a connection request involves at least nine messages involving six entities, with two more sitting on the sides.  The transport layer, SCTP, has a four-way, rather than two-way, handshake.  (Hence the need for all those servers.)  Basically, though, LTE is IP, with a fairly complex set of additional protocols, as opposed to the old PSTN.  The old public telephone network was a walled garden which few understood.  Just about all the active blackhats today understand IP, and it’s open.  It’s protected by Diameter, but even the Diameter implementation has loopholes.  It has a tunnelling protocol, GTP (GPRS Tunnelling Protocol), but, like very many tunnelling protocols, GTP does not provide confidentiality or integrity protection.
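
To make the GTP point concrete, here is a hedged sketch of the mandatory GTPv1-U header (the field layout follows the spec; the sample packet is invented).  Note what the header does not contain: no authentication tag, no integrity check, nothing cryptographic.

    import struct

    def parse_gtpv1u(packet: bytes) -> dict:
        """Parse the mandatory 8-byte GTPv1-U header (simplified)."""
        flags, msg_type, length, teid = struct.unpack("!BBHI", packet[:8])
        return {
            "version": flags >> 5,   # top three bits of the flags octet
            "msg_type": msg_type,    # 0xFF is a G-PDU carrying user data
            "length": length,        # payload length past this header
            "teid": teid,            # tunnel endpoint id, sent in the clear
            "payload": packet[8:],   # user traffic, also in the clear
        }

    # An invented G-PDU: anyone who can see (or inject) these bytes can
    # read or spoof them -- confidentiality has to come from elsewhere.
    pkt = struct.pack("!BBHI", 0x30, 0xFF, 4, 0x12345678) + b"data"
    print(parse_gtpv1u(pkt))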

Everybody wants the extra speed, functions, interconnection abilities, and apps.  But all the functionality means a much larger attack surface.  The total infrastructure involved in LTE is more complex.  Maybe nobody can know it all.  But they can know enough to start messing with it.  From a simple DoS to DDoS, false billing, disclosure of data, malware, botnets of the UEs, spam, SMS trojans, even running down batteries, you name it.

As with VoIP before it, we are rolling our known data vulnerabilities, and known voice/telco/PBX vulnerabilities, into one big insecurity.


Hacking Displays Made Easy

Displays are monitors, right?  Strictly output, right?

Wrong.

DVI and HDMI both support DDC, which allows for display identification and “capability advertisement.”  In other words, the display is sending information to the computer.  HDMI also has capabilities for “content protection,” and even has an ethernet channel.

And, of course, all these capabilities provide for neat ways to create trouble …  Lots of data comes from the display, and it has to be parsed.  And any time you are parsing data, you are, in a way, following instructions from outside the machine.

(Does anyone’s display programming take care not to trust the data coming from “the display”?  Care to take a guess, based on past experience?)

The data flying back and forth has a definite format: EDID.  There are standards.  Care to guess what can happen when you mess with the EDID data?  (And there are lots of ways it can get messed up unintentionally, starting with a simple KVM switch.)
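
As an illustration of what minimal defensive parsing could look like, here is a sketch of the two sanity checks the EDID 1.x standard actually gives you, the fixed 8-byte header and the mod-256 checksum; everything beyond that, a parser takes on faith.

    EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

    def edid_looks_sane(block: bytes) -> bool:
        """Minimal sanity checks on a 128-byte EDID 1.x base block."""
        if len(block) != 128:
            return False                  # truncated or oversized block
        if block[:8] != EDID_HEADER:
            return False                  # fixed header per EDID 1.x
        return sum(block) % 256 == 0      # all 128 bytes sum to 0 mod 256

    # A hypothetical all-zero "block" passes the checksum but not the
    # header check -- and a block that passes both can still be hostile.
    print(edid_looks_sane(bytes(128)))    # False

Of course, a block that passes these checks can still carry malformed descriptors, which is exactly where the fuzzing results below come in.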

In one case, experimenters were able to shut off system logging.  In another, EDID fuzzing was able to cause instability in the kernel.

(I’ve seen one in my own machine: on Win 7 and this hardware, plugging and unplugging USBs can shut off video feed to the display.  In two cases, attempting to recover the display crashed the system, hard.)


Smartphone vulnerabilities

Scott Kelly, platform architect at Netflix, gets to look at a lot of devices.  In depth.  He’s got some interesting things to say about smartphones.  (At CanSecWest.)

First of all, with a computer, you are the “tenant.”  You own the machine, and you can modify it any way you want.

On a smartphone, you are not the only tenant, and, in fact, you are the second tenant.  The provider is the first.  And where you may want to modify and customize it, the provider may not want you to.  They’d like to lock you in.  At the very least, they want to maintain some control because you are constantly on their network.

Now, you can root or jailbreak your phone.  Basically, that means hacking your phone.  Whether you do that or not, it does mean that your device is hackable.

(Incidentally, the system architectures for smartphones can be hugely complex.)

Sometimes you can simply replace the firmware.  Providers try to prevent that, sometimes by using a secure boot system.  This is usually the same as the “trusted computing” (digital signatures that verify back to a key that is embedded in the hardware) or “trusted execution” (operation restriction) systems.  (Both types were used way back in AV days of old.)  Sometimes the providers ask manufacturers to lock the bootloader.  Attackers can get around this, sometimes letting a check succeed and then doing a swap, or attacking write protection, or messing with the verification process as it is occurring.  However, you can usually find easier implementation errors.  Sometimes providers/vendors use symmetric encryption: once a key is known, every device of that model is accessible.  You can also look at the attack surface, and with the complex architectures in smartphones the surface is enormous.
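
A minimal sketch of the check-then-swap point (with assumptions stated loudly: real secure boot verifies a signature chained to a hardware-embedded key, and “execute” below is a stand-in for jumping into the image; a bare hash comparison keeps the sketch short).  The defence is to read the image once, verify that copy, and run only the verified bytes.

    import hashlib
    import hmac

    # Hypothetical embedded digest of the one firmware image we trust.
    TRUSTED_SHA256 = hashlib.sha256(b"known good firmware").hexdigest()

    def execute(image: bytes) -> None:
        print("booting", len(image), "bytes")   # stand-in for the real jump

    def boot(read_image) -> None:
        image = read_image()      # read from storage exactly once
        digest = hashlib.sha256(image).hexdigest()
        if not hmac.compare_digest(digest, TRUSTED_SHA256):
            raise RuntimeError("verification failed; refusing to boot")
        execute(image)            # run the bytes we verified, never a
                                  # second read an attacker could swap

    boot(lambda: b"known good firmware")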

Vendors and providers are working towards trusted modules and trustzones in mobile devices.  Sometimes this is virtual, sometimes it actually involves hardware.  (Personally, I saw attempts at this in the history of malware.  Hardware tended to have inherent advantages, but every system I saw had some vulnerability somewhere.)

Patching has been a problem with mobile devices.  Again, the providers are going to be seen as responsible for ongoing operation.  Any problems are going to be seen as their fault.  Therefore, they really have to be sure that any patch they create is absolutely bulletproof.  It can’t create any problems.  So there is always going to be a long window for any exploit that is found.  And there are going to be vulnerabilities to exploit in a system this complex.  Providers and vendors are going to keep trying to lock systems.

(Again, personally, I suspect that hacks will keep on occurring, and that the locking systems will turn out to be less secure than the designers think.)

Scott is definitely a good speaker, and his slides and flow are decent.  However, most of the material he has presented is fairly generic.  CanSecWest audiences have come to expect revelations of real attacks.


Social authentication and solar storms

Well, I thought it was ironic that the biggest solar storm in years is hitting the earth tonight … while CanSecWest is on …

So far today we have had talks on security (and vulnerabilities) during the boot process, a talk on pen testing (and the presenter seemed to be alternately talking about how to choose a pen tester, and how to do pen testing), and social authentication.

The social authentication talk was by Alex Rice from Facebook.  He noted that, even though Facebook only challenges a small fraction of a percent of logins, given the user base that means more than a million every day.  When a login is challenged, a standard response has been the good old “security questions”: mother’s maiden name, birthdate, and other pieces of information that might not be too hard for someone intent on breaking into your account to find out.
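
(The arithmetic is easy to check with assumed figures; neither number below is from the talk.  At, say, a billion logins a day and a challenge rate of a tenth of one percent:

    10^{9} \times 0.001 = 10^{6}

that is a million challenged logins daily.)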

Alex went through the limitations of security questions, and then moved to other possibilities.  Security questions come under the heading of “things you know,” so they looked at “things you have.”  For example, you have to have an email address, so there is the possibility of a challenge sent to your email.  (Google, of course, figures that everyone in the world has a cell phone that can receive text messages.)

Recently, Facebook has started to use the photos that people post on their pages, particularly those that have been tagged.  Basically, if your login gets challenged, you will be shown a series of pictures, and you should be able to identify who is, or is not, in the picture, out of your list of friends.  This is the subject of a blog post noting that it isn’t perfect.

There are additional problems.  As the post notes, the situation is less than ideal if you have a huge number of “friends.”  (As Bruce Schneier’s new book notes, if you have more than 150 friends, you probably aren’t friends with many of them.)  Even if you do know your “friends,” there is nothing to say that any given picture of them will be recognizable.  In fact, since the system relies on tagging, there are going to be pictures of weird objects that people have deliberately tagged as themselves, in joking fashion.

Therefore, this system is definitely not perfect, as the questions at the end pointed out.  Unfortunately, Alex had passed, rather quickly, over an important point.  The intent of the system, in Facebook’s opinion, was to reduce the amount of account spam sent via accounts that had been compromised.  In that regard, the system probably works very well.  False logins get challenged.  Some of the challenges are false positives.  The photo system is a means of allowing a portion (a fairly large portion, probably) of users to recover their accounts quickly.  For the remaining accounts, there are other means to recover the account, even though these are more time-consuming for both Facebook and the user.  This system does reduce the total amount of time spent by both users (in the aggregate, even if individual users may feel hard done by) and Facebook.


REVIEW: “Liars and Outliers: Enabling the Trust that Society Needs to Thrive”, Bruce Schneier

BKLRSOTL.RVW   20120104

“Liars and Outliers: Enabling the Trust that Society Needs to Thrive”,
Bruce Schneier, 2012, 978-1-118-14330-8, U$24.95/C$29.95
%A   Bruce Schneier www.Schneier.com
%C   5353 Dundas Street West, 4th Floor, Etobicoke, ON   M9B 6H8
%D   2012
%G   978-1-118-14330-8 1-118-14330-2
%I   John Wiley & Sons, Inc.
%O   U$24.95/C$29.95 416-236-4433 fax: 416-236-4448 www.wiley.com
%O  http://www.amazon.com/exec/obidos/ASIN/1118143302/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/1118143302/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/1118143302/robsladesin03-20
%O   Audience n+ Tech 2 Writing 3 (see revfaq.htm for explanation)
%P   365 p.
%T   “Liars and Outliers: Enabling the Trust that Society Needs to
Thrive”

Chapter one is what would ordinarily constitute an introduction or preface to the book.  Schneier states that the book is about trust: the trust that we need to operate as a society.  In these terms, trust is the confidence we can have that other people will reliably behave in certain ways, and not in others.  In any group, there is a desire to have people cooperate and act in the interest of all the members of the group.  In all individuals, there is a possibility that they will defect and act against the interests of the group, either for their own competing interest, or simply in opposition to the group.  (The author notes that defection is not always negative: positive social change is generally driven by defectors.)  Actually, the text may be more about social engineering, because Schneier does a very comprehensive job of exploring how confident we can be about trust, and the ways we can increase (and sometimes inadvertently decrease) that reliability.

Part I explores the background of trust, in both the hard and soft sciences.  Chapter two looks at biology and game theory for the basics.  Chapter three will be familiar to those who have studied sociobiology, or other evolutionary perspectives on behaviour.  A historical view of sociology and scaling makes up chapter four.  Chapter five returns to game theory to examine conflict and societal dilemmas.

Schneier says that part II develops a model of trust.  This may not be evident at a cursory reading: the model consists of moral pressures, reputational pressures, institutional pressures, and security systems, and the author is very careful to explain each part in chapters seven through ten: so careful that it is sometimes hard to follow the structure of the arguments.

Part III applies the model to the real world, examining competing interests, organizations, corporations, and institutions.  The relative utility of the four parts of the model is analyzed in respect to different scales (sizes and complexities) of society.  The author also notes, in a number of places, that distrust, and therefore excessive institutional pressures or security systems, is very expensive for individuals and society as a whole.

Part IV reviews the ways societal pressures fail, with particular emphasis on technology, and information technology.  Schneier discusses situations where carelessly chosen institutional pressures can create the opposite of the effect intended.

The author lists, and proposes, a number of additional models.  There are Ostrom’s rules for managing commons (a model for self-regulating societies), Dunbar’s numbers, and other existing structures.  But Schneier has also created a categorization of reasons for defection, a new set of security control types, a set of principles for designing effective societal pressures, and an array of the relation between these control types and his trust model.  Not all of them are perfect.  His list of control types has gaps and ambiguities (but then, so does the existing military/governmental catalogue).  In his figure of the feedback loops in societal pressures, it is difficult to find a distinction between “side effects” and “unintended consequences.”  However, despite minor problems, all of these paradigms can be useful in reviewing both the human factors in security systems, and in public policy.

Schneier writes as well as he always does, and his research is extensive.  In part one, possibly too extensive.  A great many studies and results are mentioned, but few are examined in any depth.  This does not help the central thrust of the book.  After all, eventually Schneier wants to talk about the technology of trust, what works, and what doesn’t.  In laying the basic foundation, the question of the far historical origin of altruism may be of academic philosophical interest, but that does not necessarily translate into an understanding of current moral mechanisms.  It may be that God intended us to be altruistic, and therefore gave us an ethical code to shape our behaviour.  Or, it may be that random mutation produced entities that acted altruistically and more of them survived than did others, so the population created expectations and laws to encourage that behaviour, and God to explain and enforce it.  But trying to explore which of those (and many other variant) options might be right only muddies the understanding of what options actually help us form a secure society today.

Schneier has, as with “Beyond Fear” (cf. BKBYNDFR.RVW) and “Secrets and Lies” (cf. BKSECLIE.RVW), not only made a useful addition to the security literature, but created something of value to those involved with public policy, and a fascinating philosophical tome for the general public.  Security professionals can use a number of the models to assess controls in security systems, with a view to what will work, what won’t (and what areas are just too expensive to protect).  Public policy will benefit from examination of which formal structures are likely to have a desired effect.  (As I am finishing this review the debate over SOPA and PIPA is going on: measures unlikely to protect intellectual property in any meaningful way, and guaranteed to have enormous adverse effects.)  And Schneier has brought together a wealth of ideas and research in the fields of trust and society, with his usual clarity and readability.

copyright, Robert M. Slade   2011     BKLRSOTL.RVW   20120104
