The speed of “social” …

I made a posting on the blog.

Then I moved on to checking news, which I do via Twitter.  And, suddenly, there in my stream was a “tweet” that, fairly obviously, referred to my posting.  By someone I didn’t know, and had never heard of.  From Indonesia.

This blog now has an RSS feed.  Apparently a few people are following that feed.  And, seemingly, every time something gets posted here, it gets copied onto their blogs.

And, in at least one case, that post gets automatically (and programmatically) posted on Twitter.

I would never have known any of this, except that the posting I had made was in reference to something I had found via those stalwarts at the Annals of Improbable Research.  I had made reference to that fact in the first line.  The application used to generate the Twitter posting copies roughly the first hundred characters of the blog post, so the Improbable Research account (pretty much automatically) retweeted the programmed tweet of the blog posting that copied my original blog posting.  I follow Improbable Research on Twitter, so I got the retweet.
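The truncate-and-link behaviour described above can be sketched in a few lines.  This is purely illustrative: the function name, the 100-character limit, and the word-boundary rule are assumptions about what such auto-posting tools typically do, not the actual application involved.

```python
def tweet_from_post(body, url, limit=100):
    """Build a tweet by copying roughly the first `limit` characters
    of a blog post and appending a link back to it.  (Illustrative
    sketch only; names and the exact truncation rule are assumed.)"""
    excerpt = body.strip()
    if len(excerpt) > limit:
        # Cut at the last space before the limit so a word isn't split.
        excerpt = excerpt[:limit].rsplit(" ", 1)[0] + "..."
    return f"{excerpt} {url}"

# A short post fits entirely, so the whole line is kept:
print(tweet_from_post(
    "Via the stalwarts at the Annals of Improbable Research...",
    "http://example.com/post"))
```

Note that because the excerpt starts at the first character, whatever you put in the opening line of a post (here, the Improbable Research credit) is exactly what gets rebroadcast.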

This set me to a little exploration.  I found, checking trackbacks, that every one of my postings was being copied to seven different blogs.  Blogs run by people of whom I’d never heard.  (Most of whom don’t seem to have any particular interest in infosec, which is rather odd.)

Well, this blog is public, and my postings are public, so I really can’t complain when the material goes public, even if in a rather larger way than I originally thought.  But it does underline the fact that, once posted on the Internet, it is very unsafe to assume that any information is confidential.  You can’t delete data once it has passed to machines beyond your control.

And it passes very, very fast.


REVIEW: “Steve Jobs”, Walter Isaacson


“Steve Jobs”, Walter Isaacson, 2011, 978-1-4104-4522-3
%A   Walter Isaacson
%C   27500 Drake Road, Farmington Hills, MI   48331-3535
%D   2011
%G   978-1-4104-4522-3 1451648537
%I   Simon and Schuster/The Gale Group
%O   248-699-4253 800-877-4253 fax: 800-414-5043
%O   Audience n+ Tech 1 Writing 2 (see revfaq.htm for explanation)
%P   853 p.
%T   “Steve Jobs”

I have read many fictional works that start off with a list of the cast of characters, but this is the first biography I’ve ever read that started in this way.

It is fairly obvious that Isaacson has done extensive research, talked to many people, and worked very hard in preparation for this book.  At the same time, it is clear that many areas have not been carefully analyzed.  Many Silicon Valley myths (such as the precise formulation of Moore’s Law, or John Draper’s status with regard to the Cap’n Crunch whistle) are retailed without ascertaining the true facts.  The information collected is extensive in many ways, but, in places (particularly in regard to Jobs’ earlier years) the writing is scattered and disjointed.  We have Jobs living with his girlfriend in a cabin in the hills, and then suddenly he is in college.

Material is duplicated and reiterated in many places.  Quotes are frequently repeated word-for-word in relation to different situations or circumstances, so the reader really cannot know the original reference.  There are also contradictions: we are told that Jobs could not stand a certain staffer, but 18 pages later we are informed that the same person often enthralled Jobs.  (Initially, this staffer is introduced as having been encountered in 1979, but it is later mentioned that he worked for Jobs and Apple as early as 1976.)  At one point we learn that an outside firm designed the Mac mouse: four pages further on we ascertain that it was created internally by Apple.  The author seems to have accepted any and all input, perspectives, and stories without analysis or assessment of where the truth might lie.

It is possible to do a biography along a timeline.  It is possible to do it on a thematic basis.  Isaacson follows a timeline, but generally only covers one subject during any “epoch.”  From the first time Jobs sees a personal computer until he is dismissed from Apple, this is less of a biography and more the story of the development of the company.  There is a short section covering the birth of Jobs’ daughter, we hear of the reality distortion field, and terse mentions of vegan diets, motorcycles, stark housing, and occasional girlfriends, but almost nothing of Jobs away from work.  (Even in covering Apple there are large gaps: the Lisa model is noted as an important development, but then is never really described.)

In fact, it is hard to see this book as a biography.  It reads more like a history of Apple, although with particular emphasis on Jobs.  There are sidetrips to his first girlfriend and daughter, NeXT, Pixar, miscellaneous girlfriends, his wife and kids, Pixar again, and then cancer, but by far the bulk of the book concentrates on Apple.

The “reality distortion field” is famous, and mentioned often.  Equally frequently we are told of a focused and unblinking stare, which Jobs learned from someone, and practiced as a means to intimidate and influence people.  Most people believe that the person who “doesn’t blink” is the dominant personality, and therefore the one in charge.  It is rather ironic that research actually refutes this.  Studies have shown that, when two people meet for the first time, it is actually the dominant personality that “blinks first” and looks away, almost as a signal that they are about to dominate the conversation or interaction.  Both “the field” and “the stare” seem to tell the same story: they are tricks of social engineering which can have a powerful influence, but which are based on an imperfect understanding of reality and people, don’t work with everyone, and can have very negative consequences.

(The chapters on Jobs’ fight with cancer are possibly the most telling.  For anyone who has the slightest background in medicine it will be apparent that Jobs didn’t know much in that field, and that he made very foolish and dangerous decisions, flying in the face of all advice and any understanding of nutrition and biology.)

Those seeking insight into the character that built a major corporation may be disappointed.  Like anybody else, Jobs is a study in contradictions: the seduction with charm and vision, then belittlement and screaming at people; the perfectionist who obsessed on details, but was supposedly a visionary at the intersection of the arts and technology who made major decisions based on intuitive gut feelings with little or no information or analysis; the anti-materialist ascetic who made a fortune selling consumer electronics and was willing to con people to make money; the Zen meditator who never seemed to achieve any calm or patience; the man who insisted that “honesty” compelled him to abuse friends and colleagues, but who was almost pathological in his secrecy about himself and the company; and the creative free-thinker who created the most closed and restricted systems extant.

There is no attempt to find the balance point for any of these dichotomies.  As a security architect I can readily agree with the need for high level design to drive all aspects of the construction of a system: a unified whole always works better and more reliably.  Unfortunately for that premise, there are endless examples of Jobs demanding, at very late points in the process, that radically new functions be included.  Then there is Jobs’ twin assertions that the item must be perfect, but that ship dates must be met.  One has to agree with Voltaire: the best is the enemy of the good, and anyone trying to be good, fast, *and* cheap may succeed a time or two, but is ultimately headed for failure.

Several times Isaacson repeats an assertion from Jobs that money is not important: it is merely recognition of achievements, or a resource that enables you to make great products.  The author does not seem to understand that an awful lot of money is also another resource, one that allows you to make mistakes.  He only vaguely admits that Jobs made some spectacular errors.

The book is not a hagiography.  Isaacson is at pains to point out that he notes Jobs’ weaknesses of character and action.  At the same time, Isaacson is obviously proud of being a personal friend, and, I suspect, does not realize that, while he may mention Jobs’ flaws, he also goes to great lengths to excuse them.

Was Steve Jobs a great man?  He was the driving force behind a company which had, for a time, the largest market capitalization of any publicly traded company.  He was also, by pretty much all accounts, an arrogant jerk.  He had a major influence on the design of personal electronics, although his contribution to personal computing was mostly derivative.  We are conventionally used to saying that people like Napoleon, Ford, and Edison are great, even though they might have been better at social engineering than the softer people skills.  By this measure Jobs can be considered great, although not by the standards by which we might judge Gandhi, Mother Teresa, and the Dalai Lama (which is rather ironic, considering Jobs’ personal philosophy).

Those who hold Jobs, Apple, or both, in awe will probably be delighted to find a mass of stories and trivia all in one place.  Those who want to know the secrets of building a business empire may find some interesting philosophies, but will probably be disappointed: the book tends to take all positions at once.  For those who have paid much attention to Apple, and Jobs’ career, there isn’t much here that is novel.  As Jobs himself stated to a journalist, “So, you’ve uncovered the fact that I’m an *sshole.  Why is that news?”

Having all of the material in one book does help to clarify certain issues.  Personally, I have always fought with the Macs I used, struggling against the lock step conformity they enforced.  It was only in reviewing this work that it occurred to me that Apple relies upon a closed system that makes Microsoft appear open by comparison.  So, I guess, yes, there is at least one insight to be gained from this volume.

copyright, Robert M. Slade   2011     BKSTVJBS.RVW 20111224


Webcast? No, thanks.

I had a call today inviting me to “attend” a Webcast.  The vendor makes security products.  I work in security.  I won’t be attending.

I never watch Webcasts.  In the early days I watched a couple.  I even presented on a couple of Webcasts, at the request of different parties.  I’ve subsequently made it a policy that I never do attend.

Webcasts are a waste of time.

Back before Webcasts we had podcasts.  I could partially see a reason for podcasts.  After all, as the name implies, you were supposed to download them and play them on your iPod or other MP3 player.  You could do this on your commute, or while out jogging, or any other time that you would spend plugged into your device.  So, on what would normally be mental downtime, you could be learning something.

For me, personally, there were a couple of problems with this.  The first was that I never bothered to get an MP3 player.  The second was that I always had books to read (and review) on my commute.

Yes, I know I could download the podcasts to my computer, and listen to them that way.  But a) when I’m at the computer, that’s not downtime, and b) I can read faster than you can talk.  So listening to a podcast is still a waste of time.  Sorry to my friends who do podcasts, and I know you are sincerely trying to help (and probably do), but even if you are podcasting on an interesting topic, somebody else has written about it.  And I can search and read faster than you can talk.

The same goes, in spades, for Webcasts.  In addition, whereas podcasts are generally done by people who have something to say, but no money or major resources to say it with, Webcasts are done by vendors.  And trade rags (who are, these days, desperately trying to find something to make themselves relevant again).  And erstwhile conference and event promoters, who see it as a cheaper way to get the (advertising) message out.

And that’s part of the trouble.  It is cheaper.  A Webcast, no matter how many frills you add (sometimes turning it into a “virtual trade show” or “virtual conference”) is going to be cheaper than renting a hotel facility, flying actual people in, laying on coffee (at hotel catering prices), and advertising your event to get people to come.  If a vendor or promoter has to do all that, they figure they might as well make sure someone is going to listen to the pitch.  So they are much more likely to make sure that a) the speaker knows how to speak, b) the speaker has something to say, and c) there is some actual useful content in addition to the straight sales pitch.

But a Webcast is cheap.  No rooms to rent, no people to move, no coffee to buy.  Even if you have to rent Webcast time, it’s a pittance compared to all of that.

And, hey! you can get people to attend more easily!  From the comfort of their own desk or computer!  Wherever they are (as long as they can get to a hotspot)!  All they have to do is register and log in!

(I’ll come back to that.)

So, if a Webcast is cheap and easy, why take any trouble with it?  Drag in anyone as a speaker.  There are probably any number of people who think they could make it big on the lecture circuit if only they got a little “exposure.”  Sorry, but I’ve run into too many people who thought I should be glad to write or speak for them just for the “exposure.”  The only people who are going to fall for that are those who don’t get asked because a) they have nothing to say, and b) they can’t say it anyway.  Even if you do find someone with something to say, why give them time (and possibly money) to research or prepare anything?  As a matter of fact, if you are a trade rag you’ve probably got lots of people who are willing to be experts on anything, at a moment’s notice.

Like I said, I attended a few.  It very quickly became apparent not only that I can read faster than Webcasters can speak, but that almost none of them had anything worth saying anyway.

(I’ll make an exception for TED.  Not even all of TED.  But definitely Cliff Stoll.)

So, I made it a policy never to attend Webcasts.  We are all busy.  My time is finite.  Webcasts are a waste of time.

I said I’d come back to this business of it being easy to get people to come.  Recently I’ve noticed that the Webcasts aren’t just being advertised.  Now there are bribes and come-ons.  You can win an iPod, or an iPad, if you register and attend.  You can get a USB drive if you attend.  You can get a Starbucks card or an Amazon gift card.  (I am somewhat reminded of the studies where people were offered chocolate bars or Starbucks cards if they would reveal their passwords.)  And not only am I getting multiple invites to the event, but now telemarketers are calling to “invite” me to attend.  They are starting to sound desperate.

Do you think it just vaguely possible that other people are starting  to think Webcasts are a waste of time?  Maybe a large number of other people?


Who is responsible?

Galina Pildush ended her LTE presentation with a very good question: “Who is responsible for LTE security?  Is it the users?  UE (User Equipment, handsets and devices) manufacturers and vendors?  Network providers, operators and telcos?”

It’s a great question, and one that needs to be applied to every area of security.

In the SOHO (Small Office/Home Office) and personal sphere, it has long been assumed that it’s the user who is responsible.  Long assumed, but possibly changing.  Apple, particularly with the iOS/iPhone/iPad lines, has moved toward a model where the vendor (Apple) locks down the device, and only allows you certain options for software and services.  Not all of them are produced or provided by Apple, but Apple gets vetting responsibilities and rights.

The original “user” responsibility model has not worked particularly well.  Most people don’t know how to protect themselves in regard to information security.  Malware and botnets are rampant.  In the “each man for himself” situation, many users do not protect themselves, with significant consequences for the computing environment as a whole.  (For years I have been telling corporations that they should support free, public security awareness training.  Not as advertising or for goodwill, but as a matter of self defence.  Reducing the number of infected users out there will reduce the level of risk in computing and communication as a whole.)

The “vendor” model, in Apple’s case (and Microsoft seems to be trying to move in that direction), has generated a reputation, at least, for better security.  Certainly infection and botnet membership rates appear to be lower in Macs than in Windows machines, and lower still in the iOS world.  (This, of course, does nothing to protect the user from phishing and other forms of fraud.  In fact, it would be interesting to see if users in a “walled garden” world were slightly more susceptible to fraud, since they were protected from other threats and had less need to be paranoid.)  The model also has significant advantages as a business model, where you can lock in users (and providers, as well), so it is obviously going to be popular with the vendors.

Of course, there are drawbacks, for the vendors, in this model.  As has been amply demonstrated in current mobile network situations, providers are very late in rolling out security patches.  This is because of the perception that the entire responsibility rests with the provider, and they want to test every patch to death before releasing it.  If that role falls to the vendors, they too will have to take more care, probably much more care, to ensure software is secure.  And that will delay both patch cycles and version cycles.

Which, of course, brings us to the providers.  As noted, there is already a problem here with patch releases.  But, after all, most attacks these days are network based.  Proper filtering would not only deal with intrusions and malware, but also issues like spam and fraud.  After all, if the phishing message never reaches the user, the user can’t be defrauded.

So, in theory, we can make a good case that the provider would be the most effective locus for responsibility for security.  They have the ability to address the broadest range of security issues.  In reality, of course, it wouldn’t work.

In the first place, all kinds of users wouldn’t stand for it.  Absent a monopoly market, any provider who tried to provide total security protection would a) incur prohibitively heavy costs (putting pressure on their competitive rates), and b) lose a bunch of users who would resent restrictions and limitations.  (At present, of course, we know that many providers can get away with being pretty cavalier about security.)  The providers would also, as now, have to deal with a large range of devices.  And, if responsibility is lifted from the vendors, the situation will only get worse: vendors will be able to roll out new releases and take even less care with testing than they do now.

In practical terms, we probably can’t, and shouldn’t decide this question.  All parties should take some responsibility, and all parties should take more than they do currently.  That way, everybody will be better off.  But, as Bruce Schneier notes, there are always going to be those who try and shirk their responsibility, relying on the fact that others will not.


CanSecWest evolving

Let me say, right off the top, that I love CanSecWest.  I am tired of “vendor” conferences, where you pay outrageous fees for the privilege of sitting through a bunch of sales pitches.  At least CanSecWest has real information, as opposed to virtual information.  (Virtual information: n. – marketing spiel dressed up as actual technical information.)

However, today I have had the same conversation half a dozen times, with half a dozen different people.  (And I didn’t initiate any of them.)  The conversation generally starts out the same way, with the question, “Don’t you think CanSecWest is getting … less technical?”

Now, it may simply be a one year glitch, or a random set of presentations.  But, yes, I have to agree that, so far, the presentations have not been as great as in the past.

Still good, don’t get me wrong.  But we started with a pres on the boot process, nicely technical, but nothing new.  Pen testing, which was also pretty generic, and nothing new.  The social authentication, yes, that was good.  Recent research, and some neat ideas to play with.  The piece on APT was mostly about finding bugs in Shockwave/Flash.  The piece on Duqu and Stuxnet was good, but I feel a bit used: Kaspersky obviously timed it to present the same thing at both CanSecWest and CeBit at the same time.  Good PR hack, but a bit of a cheat in terms of “unique” presentations that haven’t been done before.

The smartphone rooting had some interesting points, but didn’t demonstrate real exploits.  The probing of mobile networks had more real and technical data.  (Marcia Hoffman’s presentation was, last year, a personal disappointment to me, since I’m a legal and forensics guy, and expected more depth.  However, when I thought about it, I realized that she had nailed the target audience: these guys are geeks, and need the basic warnings about what they are doing.  She did just as well this year.)

The iOS exploitation pres was interesting but covered material that was covered quite well last year.  The piece on hardware-involved attacks boiled down to “if you don’t take care with your programming, hardware can do things you don’t expect: be a careful programmer.”  The Near Field Communications (NFC) item did raise some interesting points about the careless acceptance of chip codes, but most of it was little different from discussions about RFID or validating input in general.  (The HDMI was pretty cool.)

Like I said, I love CanSecWest, and I’m still going to come.  I may complain a bit about these presentations, but they are still far above anything you are likely to find at a vendor conference.  But I hope the program gets back to some solid, new technical stuff.

(By the way, if you want more details about the specific presentations, the slides are generally made available in an archive shortly after the event closes.  It’ll probably be this link, or something similar.)


Smartphone vulnerabilities

Scott Kelly, platform architect at Netflix, gets to look at a lot of devices.  In depth.  He’s got some interesting things to say about smartphones.  (At CanSecWest.)

First of all, with a computer, you are the “tenant.”  You own the machine, and you can modify it any way you want.

On a smartphone, you are not the only tenant, and, in fact, you are the second tenant.  The provider is the first.  And where you may want to modify and customize it, the provider may not want you to.  They’d like to lock you in.  At the very least, they want to maintain some control because you are constantly on their network.

Now, you can root or jailbreak your phone.  Basically, that means hacking your phone.  Whether you do that or not, it does mean that your device is hackable.

(Incidentally, the system architectures for smartphones can be hugely complex.)

Sometimes you can simply replace the firmware.  Providers try to prevent that, sometimes by using a secure boot system.  This is usually the same as the “trusted computing” (digital signatures that verify back to a key that is embedded in the hardware) or “trusted execution” (operation restriction) systems.  (Both types were used way back in AV days of old.)  Sometimes the providers ask manufacturers to lock the bootloader.  Attackers can get around this, sometimes letting a check succeed and then doing a swap, or attacking write protection, or messing with the verification process as it is occurring.  However, you can usually find easier implementation errors.  Sometimes providers/vendors use symmetric encryption: once a key is known, every device of that model is accessible.  You can also look at the attack surface, and with the complex architectures in smartphones the surface is enormous.
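The symmetric-key weakness is worth a sketch.  The toy below uses HMAC in place of the digital signatures real secure boot systems use (real systems verify asymmetric signatures back to a hardware-embedded public key); every name and value here is invented for illustration.  The point it demonstrates is the one above: with a shared per-model key, extracting the key from one device lets you forge valid firmware for every device of that model.

```python
import hashlib
import hmac

# Hypothetical per-model key "burned into" every unit of one model.
MODEL_KEY = b"burned-into-every-unit-of-this-model"

def sign(image: bytes) -> bytes:
    """MAC an image with the shared model key (symmetric scheme)."""
    return hmac.new(MODEL_KEY, image, hashlib.sha256).digest()

def verify_and_boot(stages):
    """stages: list of (image, mac) pairs.  Refuse to run any stage
    whose MAC does not check out against the embedded key."""
    for image, mac in stages:
        if not hmac.compare_digest(sign(image), mac):
            raise RuntimeError("boot halted: bad signature")
    return "booted"

bootloader = b"bootloader v1"
kernel = b"kernel v1"
chain = [(bootloader, sign(bootloader)), (kernel, sign(kernel))]
print(verify_and_boot(chain))  # "booted"

# An attacker who has recovered MODEL_KEY from any one device can
# forge a valid MAC for modified firmware on all of them:
evil_kernel = b"kernel v1 + rootkit"
chain[1] = (evil_kernel, sign(evil_kernel))
print(verify_and_boot(chain))  # also "booted" -- the shared-key flaw
```

With asymmetric signatures the signing key never leaves the vendor, so a compromised device yields only the (public) verification key; that is the design difference the symmetric shortcut gives up.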

Vendors and providers are working towards trusted modules and trustzones in mobile devices.  Sometimes this is virtual, sometimes it actually involves hardware.  (Personally, I saw attempts at this in the history of malware.  Hardware tended to have inherent advantages, but every system I saw had some vulnerability somewhere.)

Patching has been a problem with mobile devices.  Again, the providers are going to be seen as responsible for ongoing operation.  Any problems are going to be seen as their fault.  Therefore, they really have to be sure that any patch they create is absolutely bulletproof.  It can’t create any problems.  So there is always going to be a long window for any exploit that is found.  And there are going to be vulnerabilities to exploit in a system this complex.  Providers and vendors are going to keep trying to lock systems.

(Again, personally, I suspect that hacks will keep on occurring, and that the locking systems will turn out to be less secure than the designers think.)

Scott is definitely a good speaker, and his slides and flow are decent.  However, most of the material he has presented is fairly generic.  CanSecWest audiences have come to expect revelations of real attacks.


Social authentication and solar storms

Well, I thought it was ironic that the biggest solar storm in years is hitting the earth tonight … while CanSecWest is on …

So far today we have had talks on security (and vulnerabilities) during the boot process, a talk on pen testing (and the presenter seemed to be alternately talking about how to choose a pen tester, and how to do pen testing), and social authentication.

The social authentication talk was by Alex Rice from Facebook.  He noted that, even though Facebook only challenges a small fraction of a percent of logins, given the user base that means more than a million every day.  When a login is challenged, a standard response has been the good old “security questions”: mother’s maiden name, birthdate, and other pieces of information that might not be too hard for someone intent on breaking into your account to find out.

Alex went through the limitations of security questions, and then moved to other possibilities.  Security questions come under the heading of “things you know,” so they looked at “things you have.”  For example, you have to have an email address, so there is the possibility of a challenge sent to your email.  (Google, of course, figures that everyone in the world has a cell phone that can receive text messages.)

Recently, Facebook has started to use the photos that people post on their pages, particularly those that have been tagged.  Basically, if your login gets challenged, you will be shown a series of pictures, and you should be able to identify who is, or is not, in the picture, out of your list of friends.  This is the subject of a blog post noting that it isn’t perfect.
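The challenge described above can be sketched roughly as follows.  This is a guess at the general shape, not Facebook’s implementation: the data structures, round counts, and names are all invented for illustration.

```python
import random

def make_challenge(tagged_photos, rounds=3, choices=4, rng=random):
    """Build a photo challenge: for each round, show one tagged photo
    and ask which friend is in it, among multiple-choice options.
    tagged_photos: dict mapping photo_id -> the friend tagged in it.
    (Illustrative sketch; not an actual Facebook data model.)"""
    friends = sorted(set(tagged_photos.values()))
    photo_ids = rng.sample(sorted(tagged_photos), rounds)
    challenge = []
    for pid in photo_ids:
        answer = tagged_photos[pid]
        decoys = rng.sample([f for f in friends if f != answer],
                            min(choices - 1, len(friends) - 1))
        options = decoys + [answer]
        rng.shuffle(options)
        challenge.append((pid, options, answer))
    return challenge

def check(challenge, responses):
    """True only if every round was answered correctly."""
    return all(resp == answer
               for (_, _, answer), resp in zip(challenge, responses))
```

Note that the sketch makes the system’s weaknesses visible: it is only as good as the tagging data (a photo joke-tagged as a friend becomes an unanswerable round), and it assumes the account holder can actually recognize everyone in their friend list.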

There are additional problems.  As the post notes, the situation is less than ideal if you have a huge number of “friends.”  (As Bruce Schneier’s new book notes, if you have more than 150 friends, you probably aren’t friends with many of them.)  Even if you do know your “friends,” there is nothing to say that any given picture of them will be recognizable.  In fact, since the system relies on tagging, there are going to be pictures of weird objects that people have deliberately tagged as themselves, in joking fashion.

Therefore, this system is definitely not perfect, as the questions at the end pointed out.  Unfortunately, Alex had passed, rather quickly, over an important point.  The intent of the system, in Facebook’s opinion, was to reduce the amount of account spam sent via accounts that had been compromised.  In that regard, the system probably works very well.  False logins get challenged.  Some of the challenges are false positives.  The photo system is a means of allowing a portion (a fairly large portion, probably) of users to recover their accounts quickly.  For the remaining accounts, there are other means to recover the account, even though these are more time-consuming for both Facebook and the user.  This system does reduce the total amount of time spent by both users (in the aggregate, even if individual users may feel hard done by) and Facebook.



I always look forward to CanSecWest.  Usually cutting edge stuff.  Some of it incomprehensible, some of it interesting, some very entertaining.

Every year is a different program, of course, but every year has some changes to the setup, as well.  This year is the latest I can remember them opening the doors to the ballroom/theatre, but it was also one of the earliest in terms of starting the registrations.

Between getting registered and getting in to the room there’s some time to mingle.  It was nice to see old friends, including some whose presence surprised me.  Also nice to meet a few new people.

It’s always interesting who you run into at CanSecWest.  One friend, on his first time out, sat down next to a nice chap, and got to talking.  Said chap shortly asked my friend to mind his computer for the next little while, then walked up to the front and was introduced as the next speaker, Charlie Miller.  Miller is a bit of a fixture at the event, as he tends to win the Pwn2Own contest year after year.  You’ve probably heard of his escapades in other areas.

(As I say, lots of nice people here.  However, this is definitely a conference on the geek end of the spectrum, and you can often count on running into people whose “people skills” could use work.  It makes starting up conversations with strangers possibly more surprising than usual  :-)

Not as many vendors at CanSecWest as at other conferences.  Some interesting ones this year: one company doing managed security and reselling.  They are looking at the enterprise and government market, and I suspect they may be at the wrong conference.  Adobe is here: they seem to be trying to overcome the perception of them as the problem.  A number of companies appear to be primarily interested in recruiting.  (If they are really serious about it, they might have sent more technical people: a number of tables are staffed by sales people who are having difficulty talking to the geeks they are trying to recruit.)

As usual, getting connected to the CanSecWest network was a bit of a challenge, but I seem to be on now  :-)



Graham Cluley, of Sophos and Naked Security, posted some reminiscences of the Michelangelo virus.  It brought back some memories and he’s told the story well.

I hate to argue with Graham, but, first off, I have to note that the twentieth anniversary of Michelangelo is not tomorrow (March 6, 2012), but today, March 5.  That’s because 1992 was, as this year is, a leap year.  Yes, Michelangelo was timed to go off on March 6th every year, but, due to a shortcut in the code (and bugs in normal computer software), it neglected to factor in leap years.  Therefore, in 1992 many copies went off a day early, on March 5th.
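The off-by-one is easy to see with a little date arithmetic.  The snippet below is an illustration of the kind of shortcut described, not the actual virus code: if a date is decoded from a running day count using a fixed, non-leap month table, then in a leap year every date from March 1st onward is reported one day late, so the real March 5th looks like “March 6th” and the payload fires a day early.

```python
# Month-length tables for a common year and a leap year.
NON_LEAP = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
LEAP     = [31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def day_of_year(month, day, table):
    """Day number within the year (Jan 1 = 1) for a given table."""
    return sum(table[:month - 1]) + day

def date_from_day(doy, table):
    """Decode a day-of-year back into (month, day) using a table."""
    for month, length in enumerate(table, start=1):
        if doy <= length:
            return month, doy
        doy -= length

# The real date, March 5 of a leap year like 1992, is day 65 ...
doy = day_of_year(3, 5, LEAP)        # 31 + 29 + 5 = 65
# ... but a shortcut using the non-leap table decodes day 65 as:
print(date_from_day(doy, NON_LEAP))  # (3, 6) -- "March 6" a day early
```

So a trigger condition of “month == 3 and day == 6,” evaluated through the wrong table, is satisfied on the real March 5th of a leap year, which matches what was seen in 1992.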

March 5th, 1992, was a rather busy day for me.  I was attending a seminar, but kept getting called out to answer media enquiries.

And then there was the fact that, after all that work and information submitted to the media in advance, and creating copies of Michelangelo on a 3 1/2″ disk (it would normally only infect 5 1/4″s) so I could test it on a safe machine (and then having to recreate the disk when I accidentally triggered the virus), it wasn’t me who got my picture in the paper.  No, it was my baby brother, who a) didn’t believe in the virus, but b) finally, at literally the eleventh hour (11 pm on March 4th) decided to scan his own computer (with a scanner I had given to him), and, when he found he was infected, raised the alarm with his church, and scanned their computers as well.  (Must have been pretty close to midnight, and zero hour, by that time.)  That’s a nice human interest story so he got his picture in the paper.  (Not that I’m bitter, mind you.)

I don’t quite agree with Graham as to the infection rates.  I do know that, since this was the first time we (as the nascent antivirus community) managed to get the attention of the media in advance, there were a great many significant infections that were cleaned off in time, before the trigger date.  I recall notices of thousands of machines cleaned off in various institutions.  But, in a sense, we were victims of our own success.  Having got the word out in advance, by the trigger date most of the infections had been cleaned up.  So, yes, the media saw it as hype on our part.  And then there was the fact that a lot of people had no idea when they got hit.  I was told, by several people, “no, we didn’t get Michelangelo.  But, you know, it’s strange: our computer had a disk failure on that date …”  That was how Michelangelo appeared, when it triggered.

I note that one of the comments wished that we could find out who created the virus.  There is strong evidence that it was created in Taiwan.  And, in response to a posting that I did at the time, I received a message from someone, from Taiwan, who complained that it shouldn’t be called “Michelangelo,” since the real name was “Stoned 3.”  I’ve always felt that only the person who wrote that variant would have been that upset about the naming …


Grandparent scams are still around

No, I didn’t get hit.  Someone even older than I am (although he’s got fewer grandchildren) almost got hit.  Twice.

This is not a stupid guy.  He still runs his own investment company.  A few years ago he recounted a weird call that he thought came from one of his grandkids-in-law.  Everybody who heard the story recognized it for what it was, particularly when it was determined that the grandkid-in-law in question, who does travel a lot, had never made the call.  The scam was explained to the call recipient.

Well, today he sent his whole family into an uproar.  He’d got another call, and seems to have been one phone call away from wiring off $2500.  Fortunately, a couple of family members determined what was happening, in time, and explained the situation.  Again.

Let me try to explain a bit how this works.

The recipient gets a phone call.

Recipient: [answers phone] Hello?
Caller: Grandpa?
Recipient: Is that you, Mary?

OK, at this point the caller knows that whoever answered the phone has a grandchild named “Mary.”  Allow me to theorize why this is the grandparent scam.  Many (older) people may have more grandchildren than they have children, so the odds of hitting someone with a grandchild of the same gender as the caller increase.  Also, most people don’t know their grandchildren, and the doings of said grandchildren, as well as they know those of their own kids.
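To put a rough number on that: a back-of-envelope sketch (assuming, unrealistically, that grandchildren’s genders are independent 50/50 draws) shows how quickly the odds favour the caller as the number of grandchildren grows.

```python
def p_matching_grandchild(n_grandchildren, p_match=0.5):
    """Chance that at least one of n grandchildren matches the
    caller's apparent gender, assuming independent 50/50 odds."""
    return 1 - (1 - p_match) ** n_grandchildren

for n in (1, 2, 4, 8):
    print(n, p_matching_grandchild(n))
# 1 -> 0.5, 2 -> 0.75, 4 -> 0.9375, 8 -> ~0.996
```

With half a dozen grandchildren, nearly every cold call finds a plausible “Mary” or “Mark” on the other end.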

The fraudsters who make these calls may do it at random, or they may have bought calling lists of those with interests, demographic information, or medication purchasing patterns indicating that they are older.  These calls may also be targeted at geographic areas with a higher proportion of retired people.

Caller: Yeah.
Recipient: Gee, your voice sounds different/that doesn’t sound like you.
Caller: I’m not feeling well/have a cold.

This answer serves two purposes: it explains the differences in voice (although it might not explain an Asian, Russian, or south Asian accent), and also calls on the sympathy of the recipient.

R: That’s too bad.
C: Yeah.  Actually grandpa, [caller launches into story of woe, ending with a requirement for funds for a) medical services, b) legal fees or bail, c) documentation expenses, d) travel expenses, e) etc.]

This particular call added a few refinements.  The explanation ended with a plea that this situation was all very embarrassing, and so would grandpa please not let anyone know.  Grandpa apparently complied with this request: grandpa did do some checking with the family to try and find the grandchild, and, coyly, wouldn’t tell what was going on.  It wasn’t until a) a few family members had had frustrating attempts to find out what the calls were about, and b) the grandchild had been found (well, but busy with an event for one of the great-grandchildren) that the whole story came out.

Fortunately, there was a second refinement.  In an attempt to add verisimilitude to an otherwise bald and unconvincing narrative, the caller had finished with the statement that a lawyer would be calling to make arrangements for the money transfer.  Lawyers are trustworthy, of course (no laughing down there in the cheap seats), and the fact that you can no more authenticate the person who claims to be a lawyer than the person who claims to be your grandchild is probably lost on most people.

I say “fortunately,” because the calls grandpa made to the family probably blocked the second call, at least for a while.  It is quite possible that the scammer or scammers, hitting a busy signal a couple of times, suspected that calls were being made to family, and cut their losses rather than carry on with a now likely compromised scam.

This is not a new scam.  It’s a variation on 419s, which were, themselves, variations on the postal mail based “Nigerian” scam, which was a variation on the “Spanish prisoner” scam going back to the middle ages (which was probably based on a similar and even older scam).  But the scam is widespread, targets generosity rather than greed, and seems to be somewhat resistant to eradication.

Please raise this issue with, and explain it to, older friends and relatives.  The media reports on the scam tend to be minimal, and don’t explain how easy, and likely, it is to give away information in what you think is normal conversation.

Oh, and just to conclude, when you answer the phone and someone says “Grandpa?” or “Grandma?”, the correct answer is, “Who’s speaking, please?”


Paper safe

I first saw this, appropriately enough, on Improbable Research.  It’s appropriate, because, when you see it, first it makes you laugh.  Then it makes you think.

This guy has created a paper safe.  Yeah, you got that right.  A safe, made out of paper.  No, not special paper: plain, ordinary paper, the kind you have in your recycling bin.  He’s even posted a video on YouTube showing how it works.

Right, so everyone’s going to have a good laugh, yes?  Paper isn’t going to provide any protection, right?  It’s a useless oddity, of interest only to those with an interest in origami, and more free time on their hands than any security professional is likely to get.

Except, then you start thinking about it (if you are any kind of security pro).  First off, it’s a nice illustration of at least one form of combination lock.  And then you realize that the lock is going to be useless unless it’s obscured.  So that raises the question of whether security-by-obscurity does have a function sometimes.

Then you start thinking that maybe it isn’t great as a preventive control, but it sure works as a detective control.  Yeah, it’s easy to smash and get out whatever was in there.  But it’ll sure be obvious if you do.

So that brings up different types of controls, and the reasons you might want different controls in different situations, and whether some perfectly adequate controls may be a) overkill, or b) useless under certain conditions.

It’s not just a cute toy.  It’s pretty educational, too.  No, I’m not going to keep my money in it.  But it makes you think …



C. S. Lewis wrote some pretty good sci-fi, some excellent kids books (which Disney managed to ruin), and my favourite satire on the commercialization of Christmas.  Most people, though, would know him as a writer on Christianity.  So I wonder if Stephen Harper and Vic Toews have ever read him.  One of the things he wrote was, “It would be better to live under robber barons than under omnipotent moral busybodies.”

Bill C-30 (sometimes known as the Investigating and Preventing Criminal Electronic Communications Act, sometimes known as the Protecting Children from Internet Predators Act, and sometimes just known as “the online spy bill”) is heading for Committee of the Whole.  This means that some aspects of it may change.  But it’ll have to change an awful lot before it becomes even remotely acceptable.

It’s got interesting provisions.  Apparently, as it stands, it doesn’t allow law enforcement to actually demand access to information without a warrant.  But it allows them to request a “voluntary” disclosure of information.  Up until now, law enforcement could request voluntary disclosure, of course.  But then the ISP would refuse pretty much automatically, since to provide that information would breach PIPEDA.  So now that automatic protection seems to be lost.

(Speaking of PIPEDA, there is this guy who is being tracked by who-knows-who.  The tracking is being done by an American company, so they can’t be forced by Canadian authorities to say who planted the bug.  But the data is being passed by a Canadian company, Kore Wireless.  And, one would think, they are in breach of PIPEDA, since they are passing personal information to a jurisdiction [the United States] which basically has no legal privacy protection at all.)

It doesn’t have to be law enforcement, either.  The Minister would have the right to authorize anyone his (or her) little heart desires to request the information.

Then there is good old Section 14, which allows the government to make ISPs install any kind of surveillance equipment the government wants, impose confidentiality on anything (like telling people they are being surveilled), or impose any other operational requirements they want.

Now, our Minister of Public Safety (doesn’t that name just make you feel all warm and 1984ish?), Vic Toews, has been promoting the heck out of the bill, even though he actually doesn’t know what it says or what’s in it.  He does know that if you oppose C-30 you are on the side of child pornographers.  This has led a large number of Canadians to cry out #DontToewsMeBro and to suggest that it might be best to #TellVicEverything.  Rick Mercer, Canada’s answer to Jon Stewart and famous for his “rants,” has weighed in on the matter.

As far as Toews and friends are concerned, the information that they are after, your IP address and connections, are just like a phone book.  Right.  Well, a few years back Google made their “phone book” available.  Given the huge volume of information, even though it was anonymized, researchers were able to aggregate information, and determine locations, names, interests, political views, you name it.  Hey, Google themselves admit that they can tell how you’re feeling.
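A toy illustration of that kind of aggregation (the pseudonym and place names below are invented, not taken from any real data set): even with no name anywhere in an “anonymized” log, behaviour patterns leak location and routine.

```python
from collections import Counter

# "Anonymized" log rows: (pseudonym, hour_of_day, coarse_location).
log = [
    ("u1093", 23, "Maple Ridge"), ("u1093", 2, "Maple Ridge"),
    ("u1093", 14, "Downtown"),    ("u1093", 1, "Maple Ridge"),
    ("u1093", 15, "Downtown"),
]

def modal_location(rows, hours):
    """Most frequent location seen during the given hours."""
    return Counter(loc for _, h, loc in rows if h in hours).most_common(1)[0][0]

# The modal late-night location is almost certainly home;
# the modal working-hours location is almost certainly work.
home = modal_location(log, set(range(22, 24)) | set(range(0, 6)))
work = modal_location(log, set(range(9, 18)))
print(home, work)  # Maple Ridge Downtown
```

Once you know roughly where someone sleeps and works, names, interests, and politics follow quickly, which is the point the researchers made.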

But, hey, maybe I’m biased.  Ask a lawyer.  Michael Geist knows about these things, and he’s concerned.  (Check out his notes on the new copyright bill, too.)

The thing is, it’s not going to do what the government says it’s going to do.  This will not automatically stop child pornography, or terrorism, or online fraudsters.  Hard working, diligent law enforcement officers are going to do that.  There are a lot of those diligent law enforcement officers out there, and they are doing a sometimes amazing job.  And I’d like to help.  But providing this sort of unfiltered data dump for them isn’t going to help.  It’s going to hurt.  The really diligent ones are going to be crowded out by lazy yahoos who will want to waltz into ISP offices and demand data.  And then won’t be able to understand it.

How do I know this?  It’s simple.  Anyone who knows about the technology can tell you that this kind of access is 1) an invasion of privacy, and 2) not going to help.  But this government is going after it anyway.  In spite of the fact that the Minister responsible doesn’t know what is in the bill.  (Or so he says.)  Why is that?  Is it because they are wilfully evil?  (Oh, the temptation.)  Well, no.  These situations tend to be governed by Hanlon’s Razor which, somewhat modified, states that you should never attribute to malicious intent that which can be adequately explained by assuming pure, blind, pig-ignorant stupidity.



REVIEW: “Liars and Outliers: Enabling the Trust that Society Needs to Thrive”, Bruce Schneier

BKLRSOTL.RVW   20120104

“Liars and Outliers: Enabling the Trust that Society Needs to Thrive”,
Bruce Schneier, 2012, 978-1-118-14330-8, U$24.95/C$29.95
%A   Bruce Schneier
%C   5353 Dundas Street West, 4th Floor, Etobicoke, ON   M9B 6H8
%D   2012
%G   978-1-118-14330-8 1-118-14330-2
%I   John Wiley & Sons, Inc.
%O   U$24.95/C$29.95 416-236-4433 fax: 416-236-4448
%O   Audience n+ Tech 2 Writing 3 (see revfaq.htm for explanation)
%P   365 p.
%T   “Liars and Outliers: Enabling the Trust that Society Needs to Thrive”

Chapter one is what would ordinarily constitute an introduction or preface to the book.  Schneier states that the book is about trust: the trust that we need to operate as a society.  In these terms, trust is the confidence we can have that other people will reliably behave in certain ways, and not in others.  In any group, there is a desire to have people cooperate and act in the interest of all the members of the group.  In all individuals, there is a possibility that they will defect and act against the interests of the group, either for their own competing interest, or simply in opposition to the group.  (The author notes that defection is not always negative: positive social change is generally driven by defectors.)  Actually, the text may be more about social engineering, because Schneier does a very comprehensive job of exploring how confident we can be about trust, and the ways we can increase (and sometimes inadvertently decrease) that reliability.
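That cooperate/defect tension is the standard societal-dilemma payoff structure from game theory.  A minimal sketch, with purely illustrative payoff numbers, shows why defection tempts the individual even though cooperation serves the group:

```python
# Classic two-player dilemma payoffs for "me", keyed by
# (my choice, other player's choice).  Numbers are illustrative only.
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # everyone does reasonably well
    ("cooperate", "defect"):    0,  # the sucker's payoff
    ("defect",    "cooperate"): 5,  # the temptation to defect
    ("defect",    "defect"):    1,  # mutual defection hurts all
}

# Whatever the other player does, defecting pays me more...
for other in ("cooperate", "defect"):
    assert PAYOFF[("defect", other)] > PAYOFF[("cooperate", other)]

# ...yet mutual cooperation beats mutual defection for the group.
group_coop = 2 * PAYOFF[("cooperate", "cooperate")]   # 6
group_defect = 2 * PAYOFF[("defect", "defect")]       # 2
print(group_coop > group_defect)  # True
```

Schneier’s four pressures (moral, reputational, institutional, and security systems) are, in these terms, ways of changing the payoffs so that cooperation stops being the sucker’s choice.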

Part I explores the background of trust, in both the hard and soft sciences.  Chapter two looks at biology and game theory for the basics.  Chapter three will be familiar to those who have studied sociobiology, or other evolutionary perspectives on behaviour.  A historical view of sociology and scaling makes up chapter four.  Chapter five returns to game theory to examine conflict and societal dilemmas.

Schneier says that part II develops a model of trust.  This may not be evident at a cursory reading: the model consists of moral pressures, reputational pressures, institutional pressures, and security systems, and the author is very careful to explain each part in chapters seven through ten: so careful that it is sometimes hard to follow the structure of the arguments.

Part III applies the model to the real world, examining competing interests, organizations, corporations, and institutions.  The relative utility of the four parts of the model is analyzed in respect to different scales (sizes and complexities) of society.  The author also notes, in a number of places, that distrust, and therefore excessive institutional pressures or security systems, is very expensive for individuals and society as a whole.

Part IV reviews the ways societal pressures fail, with particular emphasis on technology, and information technology.  Schneier discusses situations where carelessly chosen institutional pressures can create the opposite of the effect intended.

The author lists, and proposes, a number of additional models.  There are Ostrom’s rules for managing commons (a model for self-regulating societies), Dunbar’s numbers, and other existing structures.  But Schneier has also created a categorization of reasons for defection, a new set of security control types, a set of principles for designing effective societal pressures, and an array of the relation between these control types and his trust model.  Not all of them are perfect.  His list of control types has gaps and ambiguities (but then, so does the existing military/governmental catalogue).  In his figure of the feedback loops in societal pressures, it is difficult to find a distinction between “side effects” and “unintended consequences.”  However, despite minor problems, all of these paradigms can be useful in reviewing both the human factors in security systems, and in public policy.

Schneier writes as well as he always does, and his research is extensive.  In part one, possibly too extensive.  A great many studies and results are mentioned, but few are examined in any depth.  This does not help the central thrust of the book.  After all, eventually Schneier wants to talk about the technology of trust, what works, and what doesn’t.  In laying the basic foundation, the question of the far historical origin of altruism may be of academic philosophical interest, but that does not necessarily translate into an understanding of current moral mechanisms.  It may be that God intended us to be altruistic, and therefore gave us an ethical code to shape our behaviour.  Or, it may be that random mutation produced entities that acted altruistically and more of them survived than did others, so the population created expectations and laws to encourage that behaviour, and God to explain and enforce it.  But trying to explore which of those (and many other variant) options might be right only muddies the understanding of what options actually help us form a secure society today.

Schneier has, as with “Beyond Fear” (cf. BKBYNDFR.RVW) and “Secrets and Lies” (cf. BKSECLIE.RVW), not only made a useful addition to the security literature, but created something of value to those involved with public policy, and a fascinating philosophical tome for the general public.  Security professionals can use a number of the models to assess controls in security systems, with a view to what will work, what won’t (and what areas are just too expensive to protect).  Public policy will benefit from examination of which formal structures are likely to have a desired effect.  (As I am finishing this review the debate over SOPA and PIPA is going on: measures unlikely to protect intellectual property in any meaningful way, and guaranteed to have enormous adverse effects.)  And Schneier has brought together a wealth of ideas and research in the fields of trust and society, with his usual clarity and readability.

copyright, Robert M. Slade   2011     BKLRSOTL.RVW   20120104


Forcing your users to write down their passwords

This sums up everything that is wrong with the “password policy” theme.  From the T-Mobile web site:

T-Mobile Password Policy

There is no way any reasonable person can choose a password that fits this policy AND can be remembered.  (Note how they are telling you that you CANNOT use special characters.  So users now have to bend according to the lowest common denominator of their bad back-end database routine and their bad password policy.)

I’m sure some high-paid consultant convinced the T-MO CSO that a stricter password policy is the answer to all their security problems.  Reminds me of a story about an air-force security chief who claimed a 25% increase in security by making the mandatory password length 10 characters instead of 8, but I digress.
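The arithmetic behind such claims is simple, assuming uniformly random passwords (which humans never actually pick):

```python
import math

def entropy_bits(alphabet_size, length):
    """Bits of search space for uniformly random passwords."""
    return length * math.log2(alphabet_size)

print(round(entropy_bits(62, 8), 1))   # letters+digits, 8 chars: ~47.6 bits
print(round(entropy_bits(62, 10), 1))  # two chars longer: ~59.5 bits
print(round(entropy_bits(94, 8), 1))   # allow specials instead: ~52.4 bits
```

Banning special characters throws away roughly five bits on an eight-character password, and the real cost of the extra rules is that the password ends up on a sticky note on the monitor.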

Yes, I know my habitat.  No security executive ever got fired for making the user’s experience more difficult.  All in the name of security.  Except it’s both bad security and bad usability (which, incidentally, correlate more often than not, despite what lazy security ‘experts’ might lead you to believe).

I’ve ranted about this before.


“The next big cyber attack will be worse than 9/11″

Except it won’t be.

I’m assuming the reporter who quoted the statement in the title as coming from the Davos “Global Shapers” group was trying to make his own headline. Hey, that works (I even used it myself). But this is not the first time we’ve been warned about the Armageddon that is cyber terror, and it’s time somebody called bullshit on it.

Now don’t get me wrong, I’m not mother Teresa. I work in IT security, and have been known to scare people now and then with the “this is what might happen to you if you won’t fix your security”.  Most times I’d like to think I was calling it the way I saw it, but I’m sure more than once people that were listening to me thought I was exaggerating. And probably much more than once, I was. But this is not an “exaggeration”. It’s something totally different.

Have you been terrorized? I bet you have. You don’t have to know someone who was killed by a suicide bomber; it’s enough if you think back to when the school bully tried to take your lunch. That was terrifying. And terrorizing. You thought bodily harm would come to you, and this is why “terror” works so well: it’s scary.

Is ‘cyber terror’ really that scary? Well, let’s compare. Many of us have been “victims” of cyber terror. You probably visited a web site that was defaced by political hacker wannabes. Were you terrorized?

We’ve all heard about the attacks in Estonia.  That was the most effective cyberwar to date.  But did anyone die?  Let’s compare it to the war (actual war) in Georgia.  Again Russia clashing with a neighbor, but this time people died, lost their homes, and were forced to move their lives elsewhere.  I’m sorry, but that’s not the equivalent of having to reformat your computer or losing facebook connectivity for 24 hours.

War is war: people die, suffer bodily harm, have their lives changed.  I’m not against the term “cyber-war” or “cyber-terror,” but can we put it in proportion please?

So no, the next ‘cyber war’ or ‘cyber terror’ attack won’t be worse than 9/11.  It won’t even be mildly comparable to 9/11.  Unless it kills thousands of people, in which case there will be nothing “cyber” about it.


Publish and/or perish

A new study notes that “scholarly” academic journals are forcing the people who want to publish in them (the journals) to add useless citations to the published articles.  OK, this may sound like more academic infighting.  (Q: Why are academic fights so bitter? A: Because the stakes are so small.)  But it actually has some fairly important implications.  These journals are, in many eyes, the elite of the publishing world.  These articles are peer-reviewed, which means they are tested by other experts before they are even published.  Therefore, many assume that if you see it in one of these journals, it’s so.

(The system isn’t perfect.  Ralph Merkle couldn’t get his paper on asymmetric encryption published because a reviewer felt it “wasn’t interesting.”  The greatest advance in crypto in 4,000 years and it wasn’t interesting?)

These are, of course, the same journals that are lobbying to have their monopoly business protected by the “Research Works Act,” among other things.  (The “Research Works Act” is a whole different kettle of anti-[open access|public domain|open source] intellectual property irrationality.)

I was, initially, a bit surprised by the study on forced citations.  After all, these are, supposedly, the guardians of truth.  Yes, OK, that’s naive.  I’ve published in magazines myself.  Not the refereed journals, perhaps: I’m not important enough for that.  But I’ve been asked for articles by many periodicals.  They’ve had all kinds of demands.  The one that I find most consistently annoying is that I provide graphics and images.  I’m a researcher, not a designer: I don’t do graphics.  But, I recall one time that I was asked to do an article on a subject dear to my heart.  Because I felt strongly about it, I put a lot of work into it.  I was even willing to give them some graphics.  And, in the end, they rejected it.

Not enough quotes from vendors.

This is, of course, the same motivation as the forced citations.  In any periodical, you make money by selling advertising.  In trade rags, the ease of selling advertising to vendors is determined by how much space you’ve given them in the supposed editorial content.  In the academic journals, the advertising rates are determined by the number of citations to articles you’ve previously published.  Hence, in both cases, the companies with the advertising budgets get to determine what actually gets published.

(As long as we’re here, I have one more story, somewhat loosely related to publishing, citation, open access, and intellectual property.  On another occasion, I was asked to do a major article cluster on the history of computer viruses.  This topic is very dear to my heart, and I put in lots of time, lots of work, and even lots of graphics.  This group of articles got turned down as well.  The reason given in that case was that they had used a Web-based plagiarism detector on the stuff, and found that it was probably based on materials already on the net.  Well, of course it was.  I wrote most of the stuff on that topic that is already on the Web …)