Trust me, I didn’t look right as I typed this …

‘Lying eyes’ are a myth – looking to the right DOESN’T mean you are fibbing.

“Many psychologists believe that when a person looks up to their right they are
likely to be telling a lie.  Glancing up to the left, on the other hand, is said to
indicate honesty.

“Co-author Dr Caroline Watt, from the University of Edinburgh, said: ‘A large
percentage of the public believes that certain eye movements are a sign of lying,
and this idea is even taught in organisational training courses. … The claimed link
between lying and eye movements is a key element of neuro-linguistic
programming.

“According to the theory, when right-handed people look up to their right they
are likely to be visualising a ‘constructed’ or imagined event.  In contrast when
they look to their left they are likely to be visualising a ‘remembered’ memory.
For this reason, when liars are constructing their own version of the truth, they
tend to look to the right.”

“Psychologist Prof Wiseman, from the University of Hertfordshire, said: ‘The
results of the first study revealed no relationship between lying and eye
movements, and the second showed that telling people about the claims made by
NLP practitioners did not improve their lie detection skills.’

However, this study raises a much more serious question.  These types of “skills” are being extensively taught (and sought) by law enforcement and other agencies.  How many investigations are being misdirected and delayed by false suppositions based on NLP “techniques”?  More disturbingly, how many people are being falsely accused, dismissed, or charged due to the same questionable “information”?  (As I keep telling my seminars, when you get sidetracked into pursuing the wrong suspect, the real culprit is getting away free.)

(I guess we’ll have to stop watching “The Mentalist” now …)

Apple and “identity pollution”

Apple has obtained a patent for “identity pollution,” according to the Atlantic.

I am of not just two, but a great many minds about this.  (OK, admit it: you always knew I was schizophrenic.)

First off, I wonder how in the world they got a patent for this.  OK, maybe there isn’t much in the way of prior art, but the idea can’t possibly be called “non-obvious.”  Even before the rise of “social networking” I was prompting friends to use my “loyalty” shopping cards, even the ones that just gave discounts and didn’t get you points.  I have no idea what those stores think I buy, and I don’t much care, but I do know that they have very little accurate information about my actual shopping patterns.

In our advice to the general population in regard to Internet and online safety in general, we have frequently suggested a) don’t say too much about yourself, and b) lie.  Isn’t this (the lying part) exactly what Apple is doing?

In similar fashion, I have created numerous socmed accounts which I never intended to use.  A number of them are simply unpopulated, but some contain false information.  I haven’t yet gone to the point of automating the process, but many others have.  So, yet another example of the US patent office being asleep (Rip-Van-Winkle-level asleep) at the technological switch.

Then there is the utility of the process.  Yes, OK, we can see that this might (we’ll come back to the “might”) help protect your confidentiality.  How can people find the “you” in all the garbage?  But what is true for advertisers, spammers, phishers, and APTers is also true for your friends.  How will the people who you actually *want* to find you, find the true you among all the false positives?

(Here is yet another example of the three “legs” of the security triad fighting with each other.  We have endless examples of confidentiality and availability working against each other: now we have confidentiality and integrity at war.  How do you feel, in general, about Apple recommending that we create even more garbage on the Internet than is already there?)

(Or is the fact that it is Apple that is doing this somehow appropriate?)

OK, then, will this work?  Can you protect the confidentiality of your real information with automated false information?  I can see this becoming yet another spam/anti-spam, CAPTCHA/CAPTCHA recognition, virus/anti-virus arms race.  An automated process will have identifiable signs, and those will be detected and used to ferret out the trash.  And then the “identity pollution” (a new kind of “IP”?) will be modified, and then the detection will be modified …
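To make the arms-race point concrete, here is a toy sketch of the kind of signal-based scoring a socnet site might use to ferret out the trash.  Every feature and threshold here is invented for illustration; real detectors are (somewhat) more subtle:

# Toy sketch (illustrative only): signal-based scoring to flag
# automated "identity pollution" accounts.  All features and
# thresholds are invented for this example.
def pollution_score(account: dict) -> float:
    """Crude heuristic score: higher means more likely auto-generated."""
    score = 0.0
    if account.get("posts", 0) == 0:
        score += 0.4                      # empty accounts are suspicious
    if account.get("friends", 0) < 2:
        score += 0.3                      # no social graph to speak of
    if account.get("profile_fields_filled", 0) < 3:
        score += 0.2                      # sparse, templated profiles
    if account.get("created_in_bulk_window", False):
        score += 0.5                      # registered alongside many others
    return min(score, 1.0)

suspect = {"posts": 0, "friends": 1, "profile_fields_filled": 2,
           "created_in_bulk_window": True}
print(pollution_score(suspect))           # 1.0 -- flagged for review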

In the meantime, masses of bandwidth and storage will be consumed.  Socnet sites will be filled with meaningless accounts.  Users of socmed sites will be forced to spend even more time winnowing out those accounts not worth following.  Socnet companies will be forced to spend more on storage and on the detection of false accounts.  Also, their revenues will be cut as advertisers realize that “targeted” ads will be less targeted.

Of course, Apple will be free to create a social networking site.  They have already created pieces of one.  And Apple can guarantee that Apple product users can use the site without the impediment of identity pollution.  And, since Apple owns the patent, nobody else will be able to pollute identities on the Apple socnet site.

(And if Apple believes that, I have a bridge to sell them …)

LinkeDin!

No!  I’m *not* asking for validation to join a security group on LinkedIn!

Apparently several million password hashes have been leaked in an unsalted file, and multiple entities are working on cracking them, even as we speak.  (Type?)
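For the non-initiates: “unsalted” means one hash computation tests a guess against every leaked account at once.  A minimal sketch of the dictionary attack (the dump was reportedly unsalted SHA-1; the wordlist here is purely illustrative):

# Minimal sketch of a dictionary attack on unsalted SHA-1 hashes.
# With no salt, each hashed guess can be checked against the whole
# leaked set at once.
import hashlib

leaked = {
    "5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8",  # sha1("password")
    "7c4a8d09ca3762af61e59520943dc26494f8941b",  # sha1("123456")
}

wordlist = ["letmein", "password", "123456", "linkedin"]

for guess in wordlist:
    digest = hashlib.sha1(guess.encode()).hexdigest()
    if digest in leaked:
        print(f"cracked: {guess!r} -> {digest}")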

So, odds are “low but significant” that your LinkedIn account password may have been cracked.  (Assuming you have a LinkedIn account.)  So you’d better change it.

And you might think about changing the password on any other accounts you have that use the same password.  (But you’re all security people, right?  You’d *never* use the same password on multiple accounts …)

Flaming certs

Today is Tuesday for me, but it’s not “second Tuesday,” so it shouldn’t be patch Tuesday.  But today my little netbook, which is set just to inform me when updates are available, informed me that it had updated, that I needed to reboot to complete the task, and that, if I didn’t do anything in the next little while, it was going to reboot anyway.

Yesterday, of course, wasn’t patch Tuesday, but all my machines set to “go ahead and update” all wanted to update on shutdown last night.

This is, of course, because Flame (aka Flamer, aka sKyWIper) has an “infection” module that messes with Windows/Microsoft Update.  As I understand it, there is some weakness in the update process itself, but the major problem is that Flame “contains” and uses a fake Microsoft digital certificate.

You can get some, but not very much, information about this from Microsoft’s Security Response Center blog.  (Early mention; later update.)

You can get more detailed information from F-Secure.

It’s easy to see that Microsoft is extremely concerned about this situation.  Not necessarily because of Flame: Flame uses pretty old technology, only targets a select subset of systems, and doesn’t even run on Win7 64-bit.  But the fake cert could be a major issue.  Once that cert is out in the open it can be used not only for Windows Update, but for “validating” all kinds of malware.  And, even though Flame only targets certain systems, and seems to be limited in geographic extent, I have pretty much no confidence at all that the blackhat community hasn’t already got copies of it.  (The cert doesn’t necessarily have to be contained in the Flame codebase, but the structure of the attack seems to imply that it is.)  So, the only safe bet is that the cert is “in the wild,” and can be used at any time.

(Just before I go on with this, I might say that the authors of Flame, whoever they may be, did nothing particularly bad in packaging up a bunch of old trojans into one massive kit.  But putting that fake cert out there was simply asking for trouble, and it’s kind of amazing that it hasn’t been used in an attack before now.)

The first thing Microsoft is doing is patching MS software so that it doesn’t trust that particular cert.  They aren’t giving away a lot of detail, but I imagine that much midnight oil is being burned in Redmond redoing the validation process so that a fake cert is harder to use.  Stay tuned to your Windows Update channel for further developments.
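Mechanically, “doesn’t trust that particular cert” presumably boils down to something like a fingerprint blocklist.  A minimal sketch, with a placeholder fingerprint (not the real Flame cert):

# Sketch of fingerprint-based certificate blocklisting, the general
# mechanism behind "stop trusting this specific cert".  The entry
# below is a made-up placeholder, not the actual Flame certificate.
import hashlib

UNTRUSTED_SHA1_FINGERPRINTS = {
    "0000000000000000000000000000000000000000",  # placeholder entry
}

def cert_is_blocklisted(der_bytes: bytes) -> bool:
    """Hash the DER-encoded certificate and check the blocklist."""
    fingerprint = hashlib.sha1(der_bytes).hexdigest()
    return fingerprint in UNTRUSTED_SHA1_FINGERPRINTS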

However, in all of this, one has to wonder where the fake cert came from.  It is, of course, always possible to simply brute force a digital signature, particularly if you have a ton of validated MS software, and a supercomputer (or a huge botnet), and mount a birthday (collision) attack.  (And everyone is assuming that the authors of Flame have access to the resources of a nation-state.  Or two …)  The easier way is simply to walk into a cert authority and ask for a couple of Microsoft certs.  (Which someone did one time.  And got away with it.)
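For anyone who hasn’t run the numbers, the reason collision attacks are attractive is the birthday bound: for an n-bit hash you expect a collision after roughly 2^(n/2) attempts, not 2^n.  A quick back-of-envelope, using MD5 purely as a 128-bit example:

# Back-of-envelope birthday bound: for an n-bit hash, expect a
# collision after roughly 2**(n/2) attempts, far fewer than 2**n.
import math

def collision_probability(attempts: float, bits: int) -> float:
    """Approximate P(collision) after `attempts` random hashes."""
    space = 2.0 ** bits
    return 1.0 - math.exp(-attempts * (attempts - 1) / (2.0 * space))

# MD5 (128-bit): near-even odds around 2**64 attempts
print(collision_probability(2.0**64, 128))   # ~0.39
print(collision_probability(2.0**65, 128))   # ~0.86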

But then, I was thinking.  In the not too distant past, we had a whole bunch of APT attacks (APT being an acronym standing for “we were lazy about our security, but it really isn’t our fault because these attackers didn’t play fair!”) on cert authorities.  And the attackers got away with a bunch of valid certs.

OK, we think Flame has possibly been in the wild as much as five years, and almost certainly two.  But it is also likely that there were updates during that period, so it’s hard to say, right off the top, which parts of it were out there for how long.

And I just kind of wonder …

Flame on!

I have been reading about the new Flame (aka Flamer, aka sKyWIper) “supervirus.”

[AAaaaarrrrrrggggghhhh!!!!!!!!  Sorry.  I will try and keep the screaming, in my "outside voice," to a minimum.]

From the Telegraph:

This “virus” [1] is “20 times more powerful” than any other!  [Why?  Because it has 20 times more code?  Because it is running on 20 times more computers?  (It isn't.  If you aren't a sysadmin in the Middle East you basically don't have to worry.)  Because the computers it is running on are 20 times more powerful?  This claim is pointless and ridiculous.]

[I had it right the first time.  The file that is being examined is 20 megabytes.  Sorry, I'm from the old days.  Anybody who needs 20 megs to build a piece of malware isn't a genius.  Tight code is *much* more impressive.  This is just sloppy.]

It “could only have been created by a state.”  [What have you got against those of us who live in provinces?]

“Flame can gather data files, remotely change settings on computers, turn on computer microphones to record conversations, take screen shots and copy instant messaging chats.”  [So?  We had RATs that could do that at least a decade ago.]

“… a Russian security firm that specialises in targeting malicious computer code … made the 20 megabyte virus available to other researchers yesterday claiming it did not fully understand its scope and said its code was 100 times the size of the most malicious software.”  [I rather doubt they made the claim that they didn't understand it.  It would take time to plow through 20 megs of code, so it makes sense to send it around the AV community.  But I still say these "size of code" and "most malicious" statements are useless, to say the least.]

It was “released five years ago and had infected machines in Iran, Israel, Sudan, Syria, Lebanon, Saudi Arabia and Egypt.”  [Five years?  Good grief!  This thing is a pretty wimpy virus!  (Or self-limiting in some way.)  Even in the days of BSIs and sneakernet you could spread something around the world in half a year at most.]

“If Flame went on undiscovered for five years, the only logical conclusion is that there are other operations ongoing that we don’t know about.”  [Yeah.  Like "not reproducing."]

“The file, which infects Microsoft Windows computers, has five encryption algorithms,”  [Gosh!  The best we could do before was a couple of dozen!]  “exotic data storage formats”  [Like "not plain text."]  “and the ability to steal documents, spy on computer users and more.”  [Yawn.]

“Components enable those behind it, who use a network of rapidly-shifting “command and control” servers to direct the virus …”  [Gee!  You mean like a botnet or something?]

 

Sorry.  Yes, I do know that this is supposed to be (and probably is) state-sponsored, and purposefully written to attack specific targets and evade detection.  I get it.  It will be (marginally) interesting to see what they pull out of the code over the next few years.  It’s even kind of impressive that someone built a RAT that went undetected for that long, even though it was specifically built to hide and move slowly.

But all this “supervirus” nonsense is giving me pains.

 

[1] First off, everybody is calling it a “virus.”  But many reports say they don’t know how it got where it was found.  Duh!  If it’s a virus, that’s kind of the first issue, isn’t it?

Howto: Phish HSBC credit card numbers

Like many other people, I try helping developing countries when I can. So to help boost GDP in Eastern Europe and Africa (or ‘redistribute the wealth’ if you will) here’s a quick tutorial that will help scammers get HSBC customers’ credit card numbers. All the steps below are done by the real HSBC, so you don’t even need to “fool” anyone.

An HSBC customer who has gone through this process before won’t be able to distinguish between you and the real HSBC.  A customer who has not been through this process certainly won’t know any better anyway.  In fact, you can do it to HSBC employees and they won’t know.

All you need is a toll-free number for them to call (feel free to forward it to Nigeria). The nice thing about HSBC is that the process below is identical to how the real HSBC asks customers for information. In other words: HSBC is training their customers to follow this path. I propose a new term for HSBC’s method of breeding phish: spowning (spawn+p0wn).

Step 1:

Prepare an email that looks like:

Dear :

As a service to our customers and in an effort to protect their HSBC Premier  MasterCard  account, we are attempting to confirm recent charge activity or changes to the account.

Please contact the HSBC Premier Fraud Servicing Center to validate the activity at 1-888-206-5963 within the Continental United States. If you are calling from outside the United States, please call us collect at 716-841-7755.

If the activity is unauthorized, we will be able to close the account and reissue both a new account number and cards. Please use the Subject Reference Number below, when calling.

At HSBC, the security of our customer’s accounts has always been, and will continue to be a high priority. We appreciate your business and regret any inconvenience this may have caused you.

Sincerely,

Security & Fraud Risk HSBC USA

Alert ID Number :  10917558

Note:  Emails sent to this repository will go unmonitored.  Please do not reply to this email. —————————————– ************************************************************** This e-mail is confidential. It may also be legally privileged. If you are not the addressee you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return e-mail. Internet communications cannot be guaranteed to be timely, secure, error or virus-free. The sender does not accept liability for any errors or omissions. ************************************************************** “SAVE PAPER – THINK BEFORE YOU PRINT!”

Step 2:

Replace the phone numbers with your own. The above are HSBC’s.

Don’t worry about the ‘alert ID’. Just make something up. Unlike other credit cards, the caller (me, in this case) can’t use the alert ID to confirm this is really HSBC.

Step 3:

Blast this email. You’re bound to reach plenty of HSBC card holders. The rest you don’t care about anyway.

Main perk: Before the customer gets to speak to a human they need to enter their full credit card number and the last four digits of their SSN.  So even the laziest scammer can at least get those.

For the overachieving scammers, have a human answer and ask for the card expiration and the full name on the card before agreeing to answer any other questions from the customer.  This is all standard procedure at HSBC, so customers shouldn’t be suspicious.

Oh, and if the customer who happens to be a security blogger tries to authenticate you back, tell them to hang up and call the number on the back of their card. That will shut them up.

At HSBC, the security of our customer’s accounts has always been, and will continue to be a high priority.

If it really was, you wouldn’t make me such an easy target for scammers. But thanks for playing.

 

REVIEW: “Dark Market: CyberThieves, CyberCops, and You”, Misha Glenny

BKDRKMKT.RVW 20120201

“Dark Market: CyberThieves, CyberCops, and You”, Misha Glenny, 2011,
978-0-88784-239-9, C$29.95
%A   Misha Glenny
%C   Suite 801, 110 Spadina Ave, Toronto, ON Canada  M5V 2K4
%D   2011
%G   978-0-88784-239-9 0-88784-239-9
%I   House of Anansi Press Ltd.
%O   C$29.95 416-363-4343 fax 416-363-1017 www.anansi.ca
%O  http://www.amazon.com/exec/obidos/ASIN/0887842399/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/0887842399/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0887842399/robsladesin03-20
%O   Audience n Tech 1 Writing 2 (see revfaq.htm for explanation)
%P   296 p.
%T   “Dark Market: CyberThieves, CyberCops, and You”

There is no particular purpose stated for this book, other than the vague promise of the subtitle that this has something to do with bad guys and good guys in cyberspace.  In the prologue, Glenny admits that his “attempts to assess when an interviewee was lying, embellishing or fantasising and when an interviewee was earnestly telling the truth were only partially successful.”  Bear in mind that all good little blackhats know that, if you really want to get in, the easiest thing to attack is the person.  Social engineering (which is simply a fancy way of saying “lying”) is always the most effective tactic.

It’s hard to have confidence in the author’s assessment of security on the Internet when he knows so little of the technology.  A VPN (Virtual Private Network) is said to be a system whereby a group of computers share a single address.  That’s not a VPN (which is a system of network management, and possibly encryption): it’s a description of NAT (Network Address Translation).  True, a VPN can, and fairly often does, use NAT in its operations, but the carelessness is concerning.

This may seem to be pedantic, but it leads to other errors.  For example, Glenny asserts that running a VPN is very difficult, but that encryption is easy, since encryption software is available on the Internet.  While it is true that the software is available, that availability is only part of the battle.  As I keep pointing out to my students, for effective protection with encryption you need to agree on what key to use, and doing that negotiation is a non-trivial task.  Yes, there is asymmetric encryption, but that requires a public key infrastructure (PKI), which is an enormously difficult proposition to get right.  Of the two, I’d rather run a VPN any day.
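What “agreeing on what key to use” involves can be illustrated with a toy Diffie-Hellman exchange.  A minimal sketch: the prime is toy-sized for readability, and a real exchange also needs vetted parameters and, crucially, authentication, or a man-in-the-middle owns you:

# Toy Diffie-Hellman key agreement.  Parameters are for demonstration
# only: real use needs large vetted groups plus authentication.
import secrets

p = 2**127 - 1                       # a Mersenne prime; toy-sized
g = 3

a = secrets.randbelow(p - 2) + 2     # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2     # Bob's secret exponent

A = pow(g, a, p)                     # Alice sends A over the open channel
B = pow(g, b, p)                     # Bob sends B over the open channel

k_alice = pow(B, a, p)               # Alice derives the shared key
k_bob = pow(A, b, p)                 # Bob derives the same key
assert k_alice == k_bob              # agreed, without transmitting the key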

It is, therefore, not particularly surprising that the author finds that the best way to describe the capabilities of one group of carders was to compare them to the fictional “hacking” crew from “The Girl with the Dragon Tattoo.”  The activities in the novel are not impossible, but the ability to perform them on demand is highly unlikely.

This lack of background colours his ability to ascertain what is possible or not (in the technical areas), and what is likely (out of what he has been told).  Sticking strictly with media reports and indictment documents, Glenny does a good job, and those parts of the book are interesting and enjoyable.  The author does let his taste for mystery get the better of him: even the straight reportage parts of the book are often confusing in terms of who did what, and who actually is what.

Like Dan Verton (cf BKHCKDRY.RVW) and Suelette Dreyfus (cf. BKNDRGND.RVW) before him, Glenny is trying to give us the “inside story” of the blackhat community.  He should have read Taylor’s “Hackers” (cf BKHAKERS.RVW) first, to get a better idea of the territory.  He does a somewhat better job than Dreyfus and Verton did, since he is wise enough to seek out law enforcement accounts (possibly after reading Stiennon’s “Surviving Cyberwar,” cf. BKSRCYWR.RVW).

Overall, this work is a fairly reasonable updating of Levy’s “Hackers” (cf. BKHACKRS.RVW) of almost three decades ago.  The rise of the financial motivation and the specialization of modern fraudulent blackhat activity are well presented.  There is something of a holdover in still portraying these crooks as evil genii, but, in the main, it is a decent picture of reality, although it provides nothing new.

copyright, Robert M. Slade   2012    BKDRKMKT.RVW 20120201

Who is responsible?

Galina Pildush ended her LTE presentation with a very good question: “Who is responsible for LTE security?  Is it the users?  UE (User Equipment, handsets and devices) manufacturers and vendors?  Network providers, operators and telcos?”

It’s a great question, and one that needs to be applied to every area of security.

In the SOHO (Small Office/Home Office) and personal sphere, it has long been assumed that it’s the user who is responsible.  Long assumed, but possibly changing.  Apple, particularly with the iOS/iPhone/iPad lines, has moved toward a model where the vendor (Apple) locks down the device, and only allows you certain options for software and services.  Not all of them are produced or provided by Apple, but Apple gets vetting responsibilities and rights.

The original “user” responsibility model has not worked particularly well.  Most people don’t know how to protect themselves in regard to information security.  Malware and botnets are rampant.  In the “each man for himself” situation, many users do not protect themselves, with significant consequences for the computing environment as a whole.  (For years I have been telling corporations that they should support free, public security awareness training.  Not as advertising or for goodwill, but as a matter of self defence.  Reducing the number of infected users out there will reduce the level of risk in computing and communication as a whole.)

The “vendor” model, in Apple’s case (and Microsoft seems to be trying to move in that direction) has generated a reputation, at least, for better security.  Certainly infection and botnet membership rates appear to be lower in Macs than in Windows machines, and lower still in the iOS world.  (This, of course, does nothing to protect the user from phishing and other forms of fraud.  In fact, it would be interesting to see if users in a “walled garden” world were slightly more susceptible to fraud, since they were protected from other threats and had less need to be paranoid.)  The model also has significant advantages as a business model, where you can lock in users (and providers, as well), so it is obviously going to be popular with the vendors.

Of course, there are drawbacks, for the vendors, in this model.  As has been amply demonstrated in current mobile network situations, providers are very late in rolling out security patches.  This is because of the perception that the entire responsibility rests with the provider, and they want to test every patch to death before releasing it.  If that role falls to the vendors, they too will have to take more care, probably much more care, to ensure software is secure.  And that will delay both patch cycles and version cycles.

Which, of course, brings us to the providers.  As noted, there is already a problem here with patch releases.  But, after all, most attacks these days are network based.  Proper filtering would not only deal with intrusions and malware, but also issues like spam and fraud.  After all, if the phishing message never reaches the user, the user can’t be defrauded.

So, in theory, we can make a good case that the provider would be the most effective locus for responsibility for security.  They have the ability to address the broadest range of security issues.  In reality, of course, it wouldn’t work.

In the first place, all kinds of users wouldn’t stand for it.  Absent a monopoly market, any provider who tried to provide total security protection would a) incur prohibitively heavy costs (putting pressure on their competitive rates), and b) lose a bunch of users who would resent the restrictions and limitations.  (At present, of course, we know that many providers can get away with being pretty cavalier about security.)  The providers would also, as now, have to deal with a large range of devices.  And, if responsibility is lifted from the vendors, the situation will only get worse: vendors will be able to roll out new releases and take even less care with testing than they do now.

In practical terms, we probably can’t, and shouldn’t decide this question.  All parties should take some responsibility, and all parties should take more than they do currently.  That way, everybody will be better off.  But, as Bruce Schneier notes, there are always going to be those who try and shirk their responsibility, relying on the fact that others will not.

LTE Cloud Security

LTE.  Even the name is complex: Long-Term Evolution of the Evolved Universal Terrestrial Radio Access Network.

All LTE phones (UE, User Equipment) are running servers.  Multiple servers.  (And almost all are unsecured at the moment.)

Because of the proliferation of protocols (GSM, GPRS, CDMA, additional 3G and 4G variants, and now LTE), the overall complexity of the mobile/cell cloud is growing.

LTE itself is fairly complex.  The Protocol Reference Model contains at least the GERAN User Plane, UTRAN User Plane, and E-UTRAN User Plane (all with multiple components) as well as the control plane.  A simplified model of a connection request involves at least nine messages involving six entities, with two more sitting on the sides.  The transport layer, SCTP, has a four-way, rather than two-way, handshake.  (Hence the need for all those servers.)  Basically, though, LTE is IP, plus a fairly complex set of additional protocols, as opposed to the old PSTN.  The old public telephone network was a walled garden which few understood.  Just about all the active blackhats today understand IP, and it’s open.  It’s protected by Diameter, but even the Diameter implementation has loopholes.  It has a tunnelling protocol, GTP (GPRS Tunnelling Protocol), but, like very many tunnelling protocols, GTP does not provide confidentiality or integrity protection.
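The GTP point is easy to make concrete.  Here is a simplified sketch of the mandatory GTPv1-U header: eight bytes of flags, message type, length, and tunnel ID.  There is simply no field anywhere for a MAC or for ciphertext, so the user payload rides in the clear unless something else (IPsec, say) protects the link:

# Simplified GTPv1-U header (the mandatory eight bytes).  Note what
# is *not* here: no integrity check, no encryption.  The user payload
# simply follows the header in the clear.
import struct

def gtpu_packet(teid: int, payload: bytes) -> bytes:
    flags = 0x30                # version 1, protocol type GTP
    msg_type = 0xFF             # G-PDU: this packet carries user data
    header = struct.pack("!BBHI", flags, msg_type, len(payload), teid)
    return header + payload

pkt = gtpu_packet(teid=0x1234, payload=b"user IP packet, in cleartext")
print(pkt.hex())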

Everybody wants the extra speed, functions, interconnection abilities, and apps.  But all that functionality means a much larger attack surface.  The total infrastructure involved in LTE is more complex.  Maybe nobody can know it all.  But attackers can know enough to start messing with it.  From simple DoS to DDoS, false billing, disclosure of data, malware, botnets of the UEs, spam, SMS trojans, even running down batteries: you name it.

As with VoIP before it, we are rolling our known data vulnerabilities, and known voice/telco/PBX vulnerabilities, into one big insecurity.

Smartphone vulnerabilities

Scott Kelly, platform architect at Netflix, gets to look at a lot of devices.  In depth.  He’s got some interesting things to say about smartphones.  (At CanSecWest.)

First of all, with a computer, you are the “tenant.”  You own the machine, and you can modify it any way you want.

On a smartphone, you are not the only tenant, and, in fact, you are the second tenant.  The provider is the first.  And where you may want to modify and customize it, the provider may not want you to.  They’d like to lock you in.  At the very least, they want to maintain some control because you are constantly on their network.

Now, you can root or jailbreak your phone.  Basically, that means hacking your phone.  Whether you do that or not, it does mean that your device is hackable.

(Incidentally, the system architectures for smartphones can be hugely complex.)

Sometimes you can simply replace the firmware.  Providers try to avoid doing that, sometimes looking at a secure boot system.  This is usually the same as the “trusted computing” (digital signatures that verify back to a key that is embedded in the hardware) or “trusted execution” (operation restriction) systems.  (Both types were used way back in the AV days of old.)  Sometimes the providers ask manufacturers to lock the bootloader.  Attackers can get around this, sometimes letting a check succeed and then doing a swap, or attacking write protection, or messing with the verification process as it is occurring.  However, you can usually find easier implementation errors.  Sometimes providers/vendors use symmetric encryption: once a key is known, every device of that model is accessible.  You can also look at the attack surface, and with the complex architectures in smartphones the surface is enormous.
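That “letting a check succeed and then doing a swap” is a classic time-of-check/time-of-use race.  A self-contained toy version (the “signature” here is just a hash, standing in for the vendor’s real verification routines):

# Toy check-then-swap (TOCTOU) race: verify the image, then re-read
# it from mutable storage before executing.  The "signature" is a
# plain hash, a stand-in for real signature verification.
import hashlib

GOOD = b"good firmware"
EXPECTED = hashlib.sha256(GOOD).hexdigest()

class Flash:
    """Mutable storage an attacker can race against."""
    def __init__(self):
        self.blob = GOOD
    def read(self) -> bytes:
        return self.blob

def verify(image: bytes) -> bool:
    return hashlib.sha256(image).hexdigest() == EXPECTED

def vulnerable_boot(flash: Flash, attacker_swaps: bool) -> bytes:
    if not verify(flash.read()):            # check ...
        raise SystemExit("bad signature")
    if attacker_swaps:                      # ... the race window ...
        flash.blob = b"evil firmware"
    return flash.read()                     # ... then use: re-read, swapped

print(vulnerable_boot(Flash(), attacker_swaps=True))  # b'evil firmware'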

Vendors and providers are working towards trusted modules and trustzones in mobile devices.  Sometimes this is virtual, sometimes it actually involves hardware.  (Personally, I saw attempts at this in the history of malware.  Hardware tended to have inherent advantages, but every system I saw had some vulnerability somewhere.)

Patching has been a problem with mobile devices.  Again, the providers are going to be seen as responsible for ongoing operation.  Any problems are going to be seen as their fault.  Therefore, they really have to be sure that any patch they create is absolutely bulletproof.  It can’t create any problems.  So there is always going to be a long window for any exploit that is found.  And there are going to be vulnerabilities to exploit in a system this complex.  Providers and vendors are going to keep trying to lock systems.

(Again, personally, I suspect that hacks will keep on occurring, and that the locking systems will turn out to be less secure than the designers think.)

Scott is definitely a good speaker, and his slides and flow are decent.  However, most of the material he has presented is fairly generic.  CanSecWest audiences have come to expect revelations of real attacks.

REVIEW: “Liars and Outliers: Enabling the Trust that Society Needs to Thrive”, Bruce Schneier

BKLRSOTL.RVW   20120104

“Liars and Outliers: Enabling the Trust that Society Needs to Thrive”,
Bruce Schneier, 2012, 978-1-118-14330-8, U$24.95/C$29.95
%A   Bruce Schneier www.Schneier.com
%C   5353 Dundas Street West, 4th Floor, Etobicoke, ON   M9B 6H8
%D   2012
%G   978-1-118-14330-8 1-118-14330-2
%I   John Wiley & Sons, Inc.
%O   U$24.95/C$29.95 416-236-4433 fax: 416-236-4448 www.wiley.com
%O  http://www.amazon.com/exec/obidos/ASIN/1118143302/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/1118143302/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/1118143302/robsladesin03-20
%O   Audience n+ Tech 2 Writing 3 (see revfaq.htm for explanation)
%P   365 p.
%T   “Liars and Outliers: Enabling the Trust that Society Needs to
Thrive”

Chapter one is what would ordinarily constitute an introduction or preface to the book.  Schneier states that the book is about trust: the trust that we need to operate as a society.  In these terms, trust is the confidence we can have that other people will reliably behave in certain ways, and not in others.  In any group, there is a desire to have people cooperate and act in the interest of all the members of the group.  In all individuals, there is a possibility that they will defect and act against the interests of the group, either for their own competing interest, or simply in opposition to the group.  (The author notes that defection is not always negative: positive social change is generally driven by defectors.)  Actually, the text may be more about social engineering, because Schneier does a very comprehensive job of exploring how confident we can be about trust, and the ways we can increase (and sometimes inadvertently decrease) that reliability.

Part I explores the background of trust, in both the hard and soft sciences.  Chapter two looks at biology and game theory for the basics.  Chapter three will be familiar to those who have studied sociobiology, or other evolutionary perspectives on behaviour.  A historical view of sociology and scaling makes up chapter four.  Chapter five returns to game theory to examine conflict and societal dilemmas.
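For those who have not met the game theory, the standard illustration of the cooperation/defection tension is the prisoner’s dilemma.  A toy rendering (generic textbook payoffs, not Schneier’s numbers):

# Toy prisoner's dilemma: whatever the other player does, defecting
# pays better individually, yet mutual cooperation beats mutual
# defection for the group.  Payoffs are the generic textbook values.
PAYOFF = {  # (my move, their move) -> (my payoff, their payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

for mine in ("cooperate", "defect"):
    for theirs in ("cooperate", "defect"):
        print(mine, theirs, PAYOFF[(mine, theirs)])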

Schneier says that part II develops a model of trust.  This may not be evident at a cursory reading: the model consists of moral pressures, reputational pressures, institutional pressures, and security systems, and the author is very careful to explain each part in chapters seven through ten: so careful that it is sometimes hard to follow the structure of the arguments.

Part III applies the model to the real world, examining competing interests, organizations, corporations, and institutions.  The relative utility of the four parts of the model is analyzed in respect to different scales (sizes and complexities) of society.  The author also notes, in a number of places, that distrust, and therefore excessive institutional pressures or security systems, is very expensive for individuals and society as a whole.

Part IV reviews the ways societal pressures fail, with particular emphasis on technology, and information technology.  Schneier discusses situations where carelessly chosen institutional pressures can create the opposite of the effect intended.

The author lists, and proposes, a number of additional models.  There are Ostrom’s rules for managing commons (a model for self-regulating societies), Dunbar’s numbers, and other existing structures.  But Schneier has also created a categorization of reasons for defection, a new set of security control types, a set of principles for designing effective societal pressures, and an array of the relation between these control types and his trust model.  Not all of them are perfect.  His list of control types has gaps and ambiguities (but then, so does the existing military/governmental catalogue).  In his figure of the feedback loops in societal pressures, it is difficult to find a distinction between “side effects” and “unintended consequences.”  However, despite minor problems, all of these paradigms can be useful in reviewing both the human factors in security systems, and in public policy.

Schneier writes as well as he always does, and his research is extensive.  In part one, possibly too extensive.  A great many studies and results are mentioned, but few are examined in any depth.  This does not help the central thrust of the book.  After all, eventually Schneier wants to talk about the technology of trust, what works, and what doesn’t.  In laying the basic foundation, the question of the far historical origin of altruism may be of academic philosophical interest, but that does not necessarily translate into an understanding of current moral mechanisms.  It may be that God intended us to be altruistic, and therefore gave us an ethical code to shape our behaviour.  Or, it may be that random mutation produced entities that acted altruistically and more of them survived than did others, so the population created expectations and laws to encourage that behaviour, and God to explain and enforce it.  But trying to explore which of those (and many other variant) options might be right only muddies the understanding of what options actually help us form a secure society today.

Schneier has, as with “Beyond Fear” (cf. BKBYNDFR.RVW) and “Secrets and Lies” (cf. BKSECLIE.RVW), not only made a useful addition to the security literature, but created something of value to those involved with public policy, and a fascinating philosophical tome for the general public.  Security professionals can use a number of the models to assess controls in security systems, with a view to what will work, what won’t (and what areas are just too expensive to protect).  Public policy will benefit from examination of which formal structures are likely to have a desired effect.  (As I am finishing this review the debate over SOPA and PIPA is going on: measures unlikely to protect intellectual property in any meaningful way, and guaranteed to have enormous adverse effects.)  And Schneier has brought together a wealth of ideas and research in the fields of trust and society, with his usual clarity and readability.

copyright, Robert M. Slade   2011     BKLRSOTL.RVW   20120104

Forcing your users to write down their passwords

This sums up everything that is wrong with the “password policy” theme.  From the T-Mobile web site:

T-Mobile Password Policy

There is no way any reasonable person can choose a password that fits this policy AND can be remembered.  (Note how they are telling you that you CANNOT use special characters.  So users now have to bend according to the lowest common denominator of their bad back-end database routine and their bad password policy.)

I’m sure some high-paid consultant convinced the T-MO CSO that a stricter password policy is the answer to all their security problems.  Reminds me of a story about an air-force security chief who claimed a 25% increase in security by making the mandatory password length 10 characters instead of 8, but I digress.
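Since we’re doing arithmetic: the brute-force search space is charset size raised to the password length, so banning special characters shrinks the base, while added length raises the exponent.  A back-of-envelope sketch (generic numbers, not T-Mobile’s actual rules):

# Back-of-envelope search-space arithmetic: charset**length is what
# an attacker must cover.  Numbers are generic examples.
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """log2 of the brute-force search space."""
    return length * math.log2(charset_size)

print(entropy_bits(62, 8))    # letters+digits, 8 chars:  ~47.6 bits
print(entropy_bits(94, 8))    # add specials, 8 chars:    ~52.4 bits
print(entropy_bits(62, 10))   # letters+digits, 10 chars: ~59.5 bits
# 10 vs 8 characters is 25% more *bits*, but each bit doubles the
# attacker's work: roughly 4,000 times harder, not "25% more secure".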

Yes, I know my habitat.  No security executive ever got fired for making the user’s experience more difficult.  All in the name of security.  Except it’s both bad security and bad usability (which, incidentally, correlate more often than not, despite what lazy security ‘experts’ might have you believe).

I’ve ranted about this before.

REVIEW: “Identity Management: Concepts, Technologies, and Systems”, Elisa Bertino/Kenji Takahashi

BKIMCTAS.RVW   20110326

“Identity Management: Concepts, Technologies, and Systems”, Elisa
Bertino/Kenji Takahashi, 2011, 978-1-60807-039-8
%A   Elisa Bertino
%A   Kenji Takahashi
%C   685 Canton St., Norwood, MA   02062
%D   2011
%G   978-1-60807-039-8 1-60807-039-5
%I   Artech House/Horizon
%O   800-225-9977 fax: +1-617-769-6334 artech@artech-house.com
%O  http://www.amazon.com/exec/obidos/ASIN/1608070395/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/1608070395/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/1608070395/robsladesin03-20
%O   Audience i- Tech 2 Writing 1 (see revfaq.htm for explanation)
%P   196 p.
%T   “Identity Management: Concepts, Technologies, and Systems”

Chapter one, the introduction, is a review of general identity related issues.  The definition of identity management, in chapter two, is thorough and detailed, covering the broad range of different types and uses of identities, the various loci of control, the identity lifecycle (in depth), and a very effective technical definition of privacy.  (The transactional attribute is perhaps defined too narrowly, as it could relate to non-commercial activities.)

“Fundamental technologies and processes” addresses credentials, PKI (Public Key Infrastructure), single sign-on, Kerberos, privacy, and anonymous systems in chapter three.  The level of detail varies: most of the material is specific with limited examples, while attribute federation is handled quite abstractly.  Chapter four turns to standards and systems, reviewing SAML (Security Assertion Markup Language), Web Services Framework, OpenID, Information Card-Based Identity Management (IC-IDM), interoperability, other prototypes, examples, and projects, with an odd digression into the fundamental confidentiality, integrity, and availability concepts.  Challenges are noted in chapter five, briefly examining usability, access control, privacy, trust management, interoperability (from the human, rather than machine, perspective, particularly expectations, experience, and jargon), and finally biometrics.

This book raises a number of important questions, and mentions many new areas of work and development.  For experienced security professionals needing to move into this area as a new field, it can serve as an introduction to the topics which need to be discussed.  Those looking for assistance with an identity management project will probably need to look elsewhere.

copyright, Robert M. Slade   2011     BKIMCTAS.RVW   20110326

Publish and/or perish

A new study notes that “scholarly” academic journals are forcing the people who want to publish in them (the journals) to add useless citations to the published articles.  OK, this may sound like more academic infighting.  (Q: Why are academic fights so bitter? A: Because the stakes are so small.)  But it actually has some fairly important implications.  These journals are, in many eyes, the elite of the publishing world.  These articles are peer-reviewed, which means they are tested by other experts before they are even published.  Therefore, many assume that if you see it in one of these journals, it’s so.

(The system isn’t perfect.  Ralph Merkle couldn’t get his paper on asymmetric encryption published because a reviewer felt it “wasn’t interesting.”  The greatest advance in crypto in 4,000 years, and it wasn’t interesting?)

These are, of course, the same journals that are lobbying to have their monopoly business protected by the “Research Works Act,” among other things.  (The “Research Works Act” is a whole different kettle of anti-[open access|public domain|open source] intellectual property irrationality.)

I was, initially, a bit surprised by the study on forced citations.  After all, these are, supposedly, the guardians of truth.  Yes, OK, that’s naive.  I’ve published in magazines myself.  Not the refereed journals, perhaps: I’m not important enough for that.  But I’ve been asked for articles by many periodicals.  They’ve had all kinds of demands.  The one that I find most consistently annoying is that I provide graphics and images.  I’m a researcher, not a designer: I don’t do graphics.  But, I recall one time that I was asked to do an article on a subject dear to my heart.  Because I felt strongly about it, I put a lot of work into it.  I was even willing to give them some graphics.  And, in the end, they rejected it.

Not enough quotes from vendors.

This is, of course, the same motivation as the forced citations.  In any periodical, you make money by selling advertising.  In trade rags, the ease of selling advertising to vendors is determined by how much space you’ve given them in the supposed editorial content.  In the academic journals, the advertising rates are determined by the number of citations to articles you’ve previously published.  Hence, in both cases, the companies with the advertising budgets get to determine what actually gets published.

(As long as we’re here, I have one more story, somewhat loosely related to publishing, citation, open access, and intellectual property.  On another occasion, I was asked to do a major article cluster on the history of computer viruses.  This topic is very dear to my heart, and I put in lots of time, lots of work, and even lots of graphics.  This group of articles got turned down as well.  The reason given in that case was that they had used a Web-based plagiarism detector on the stuff, and found that it was probably based on materials already on the net.  Well, of course it was.  I wrote most of the stuff on that topic that is already on the Web …)

Certified security awareness

A vendor speaking at a conference (is there any other kind of presentation at conferences these days?) has made a call for a new standard for information security awareness training.

” … the way to do this is via a new infosecurity standard that solely focuses on training and awareness and is delivered in the work environment”

Now, I’m all for security awareness.  I’m all for more security awareness.  I’m all for better security awareness.  I’m all for infosec departments actually TRYING security awareness (since they often say, “well, if it was gonna have worked, it woulda worked by now,” and never try it).

But, come on.  A new “standard”?

As the man[1] said, the wonderful thing about computer “standards” is that there are so many to choose from.

What are we going to certify?  Users?  “Sorry, you have been found to be too stupid to use a computer at work.  You are hereby issued this non-jailbroken iPad.”

No, undoubtedly he thinks we are going to “certify” the awareness materials themselves.  Good luck with that.

I’ve been a teacher for a lot of years.  I’ve also been a book reviewer for a lot of years.  And I’ve published books.  Trust me on this: a variant of Gresham’s Law is very active in the textbook and educational materials field.  Bad textbooks drive out good.  As a matter of fact, it’s even closer to Gresham: money drives out good textbooks and materials.  Publishers know there is a lot of money to be made in textbooks and training materials.  Publishers with a lot of money are going to use that money to advertise, create “exclusive” contracts, and otherwise ensure that they have the biggest share of the market.  The easiest way to do that is to publish as many titles as you can, as cheaply as you can.  “Cheaply” means you use contract writers, who can turn out 200-300 pages on anything, whether they know about it or not.

So, do you really think that, if someone starts making noise about a security awareness standard, the publishers won’t make absolutely certain that they’ve got control of the certification process?  That if someone comes up with an independent standard that they can withstand the financial pressures that large publishers can bring to bear?  That if someone creates an independent cert, and firmly holds to principles and standards, that the publishers won’t just create a competing cert, and advertise it much more than the independent cert can ever hope to?

After all, none of us can possibly think of any lousy security product with a lot of money behind it that can command a larger market share than a good, but independent, product, now can we?

[1] Well, maybe it was Andrew Tanenbaum, but maybe it was Grace Hopper.  Or Patricia Seybold.  Or Ken Olsen.

Corporate social media rules

An item for discussion:

I’ve seen this stuff in some recent reports of lawsuits.  First people started using social media, for social things.  Then corps decided that socmed was a great way to spam people without being accused of spamming.  Then corps suddenly realized, to their horror, that, on socmed, people can talk back.  And maybe alert other people to the fact that you a) don’t fulfill your promises, b) make lousy products, c) provide lousy service, and d) so on.

Gloria ran into this today and asked me about the legalities of it.  I imagine that it has all the legality of any waiver: you can’t sign away your rights, and a waiver has slightly less value than the paper it’s printed on (or, slightly more, if a fraudster can copy your signature off it).  [Sorry, I’m a professional paranoid.  My brain just works that way.]

Anyway, here is what she ran into today (a Facebook page that was offering to let you in on a draw if you “liked” them; and don’t worry, we’ve already discussed the security problems of “likes”):

“We’re honoured that you’re a fan of [us], and we look forward to hearing what you have to say. To ensure a positive online experience for the entire community, we may monitor and remove certain postings. “Be kind and have fun” is the short version of our rules. What follows is the longer version of rules for posts, communications and general behaviour on [our] Facebook page:”

[fairly standard "we're nice people" marketing type bumpf - rms]

“The following should not be posted on [our] Facebook pages:”

Now, some of this is good:
“Unauthorized commercial communications (such as spam)
“Content meant to bully, intimidate or harass any user
“Content that is hateful, threatening, discriminatory, pornographic, or that
contains nudity or graphic or gratuitous violence
“Content that infringes or violates someone else’s rights or otherwise violates the law
“Personal, sensitive or financial information on this page (this includes but is not limited to email addresses, phone numbers, etc.)
“Unlawful or misleading posts”

Some of it is protecting their “brand”:
“Competitor material such as pictures, videos, or site links”

Some has to do with the fact that they are a franchise operation:
“Links to personal [agent] websites, or invitations from [agents] to connect with them privately”

But some of it limits freedom of expression:
“Unconstructive, negative or derogatory comments
“Repeat postings of unconstructive comments/statements”

And, of course, the kicker:
“[We] reserves the right to remove any postings deemed to be inappropriate or in violation of these rules.”

Now, it’s probably the case that they do have the right to manipulate the content on their site/page any way they want to.  But, how far can these “rules” go?
