REVIEW: “The Florentine Deception”, Carey Nachenberg

BKFLODEC.RVW   20150609

“The Florentine Deception”, Carey Nachenberg, 2015, 978-1-5040-0924-9, U$13.49/C$18.91
%A   Carey Nachenberg
%C   345 Hudson Street, New York, NY   10014
%D   2015
%G   978-1-5040-0924-9 150400924X
%I   Open Road Distribution
%O   U$13.49/C$18.91
%O   Audience n+ Tech 3 Writing 2 (see revfaq.htm for explanation)
%P   321 p.
%T   “The Florentine Deception”

It gets depressing, after a while.  When you review a bunch of books on the basis of the quality of the technical information, books of fiction are disappointing.  No author seems interested in making sure that the technology is in any way realistic.  For every John Camp, who pays attention to the facts, there are a dozen Dan Browns who just make it up as they go along.  For every Toni Dwiggins, who knows what she is talking about, there are a hundred who don’t.

So, when someone like Carey Nachenberg, who actually works in malware research, decides to write a story using malicious software as a major plot device, you have to be interested.  (And besides, both Mikko Hypponen and Eugene Spafford, who know what they are talking about, say it is technically accurate.)

I will definitely grant that the overall “attack” is technically sound.  The forensics and anti-forensics make sense.  I can even see young geeks with more dollars than sense continuing to play “Nancy Drew” in the face of mounting odds and attackers.  That a vulnerability can continue to go undetected for more than a decade would ordinarily raise a red flag, but Nachenberg’s premise is realistic (especially since I know of a vulnerability at that very company that went unfixed for seven years after they had been warned about it).  That a geek goes rock-climbing with a supermodel we can put down to poetic licence (although it may increase the licence rates).  I can’t find any flaws in the denouement.

But.  I *cannot* believe that, in this day and age, *anyone* with a background in malware research would knowingly stick a thumb/jump/flash/USB drive labelled “Florentine Controller” into his, her, or its computer.  (This really isn’t an objection: it would only take a couple of pages to have someone run up a test to make sure the thing was safe, but …)

Other than that, it’s a joy to read.  It’s a decent thriller, with some breaks to make it relaxing rather than exhausting (too much “one damn thing after another” gets tiring), good dialogue, and sympathetic characters.  The fact that you can trust the technology aids in the “willing suspension of disbelief.”

While it doesn’t make any difference to the quality of the book, I should mention that Carey is donating all author profits from sales of the book to charity.

copyright, Robert M. Slade   2015   BKFLODEC.RVW   20150609

REVIEW: “The Social Life of Information”, John Seely Brown/Paul Duguid

BKSCLFIN.RVW   20130124

“The Social Life of Information”, John Seely Brown/Paul Duguid, 2000,
0-87584-762-5, U$24.95
%A   John Seely Brown
%A   Paul Duguid
%C   60 Harvard Way, Boston MA   02163
%D   2000
%G   0-87584-762-5
%I   Harvard Business School Press
%O   U$25.95 617-495-6947 617-495-6700 617-495-6117 800-545-7685
%O   Audience n+ Tech 2 Writing 2 (see revfaq.htm for explanation)
%P   320 p.
%T   “The Social Life of Information”

The introduction is vague, but basically notes that those who approach information in a strictly technical or business sense risk failure by ignoring the social context in which information resides.  Information does not exist of itself, but is produced and consumed by people, and thus is a construct and artifact of our social environment.

Chapter one talks about information overload.  Bots are discussed in chapter two: not the botnets (simple programs distributed over multiple computers) that everyone agrees should be eliminated, but the range of software agents that we use without thinking.  The authors note that the interactions between these bots are inherently impossible to control, and the material prophesies the recent problems in content blocking, such as those that affected the Hugo awards and Michelle Obama.  Chapter three examines various social issues of home (or non-office) -based work.  The difference between our processes, and the way people actually work, is addressed in chapter four.  A number of interesting ideas are raised, but it is (ironically) difficult to see how to put these into practice (rather than discussion of what we should do).  Chapter five turns to learning and knowledge management.  The authors assert that learning is primarily social, and note negative effects on business if this aspect is ignored, but actually say very little about learning or information.  Chapter six explores innovation with respect to the Internet and a global economy, noting that information is difficult to control in that it is both “sticky” (resistant to change) and “leaky” (incidental disclosures of “confidential” information abound).  The “background” of information is noted in chapter seven, with the authors examining the resilience of paper in the face of a determined effort to create the “paperless” office.  They note studies showing that “printing” out email seemed to automatically give the data greater weight.  (I wonder if this might have changed in today’s marketplace: sadly, a rather large proportion of people now seem to hold that *anything* found on the Internet, regardless of how silly, must be true.)  Chapter eight, entitled “Re-education,” discusses the changing nature of universities.

There is an afterword, “Beyond Information,” touching on miscellaneous points, particularly to do with copyright.

Despite a certain lack of structure or purpose to some of the sections, the writing is both clear and entertaining.  It also has that ineffable quality of readability, meaning that the reading is enjoyable even when the authors are not delivering specifically interesting information, or making a vital point in an argument.  It’s a joy simply to consume the text.

copyright, Robert M. Slade   2013   BKSCLFIN.RVW   20130124

AV is dead … again …

“Antivirus software only catches 45% of malware attacks and is ‘dead’, according to a senior manager at Symantec.”

85.4% of statistics can be interpreted in the opposite way, and AV has been declared dead regularly since 1987.

Symantec “invented commercial antivirus software in the 1980s”?  That must come as news to the many companies, like Sophos, that I was reviewing long before Symantec bought out their first AV company.

“Dye told the Wall Street Journal that hackers increasingly use novel methods and bugs in the software of computers to perform attacks.”

There were “novel attacks” in 1986, and they got caught.  There have been novel attacks every year or so since, and they’ve been caught.  At the same time, lots of people get attacked and fail to detect it.  There’s never a horse that couldn’t be rode, and there’s never a rider that couldn’t be throwed.

“Malware has become increasingly complex in a post-Stuxnet world.”

So have computers.  Even before Stuxnet.  I think it was Grace Hopper who said that the reason it is difficult to secure complex systems is because they are complex systems.  (And she died a while back.)

Big Government vs Big Corp – which is worse?

A programmer has been banned from Google for life.

This appears to be kind of like those Kafkaesque errors that big government sometimes makes [1] (and which reinforce the arguments against the “if you’re not doing anything wrong you don’t need privacy” position), with the added factor that there is absolutely nothing that can be done about it.

I suppose an individual programmer could bring civil suit against Google (and its undoubtedly huge population of lawyers) citing material damages for being forbidden from participating in the Google/Play/app store, but I wouldn’t be too sanguine about his chances of succeeding …


[1] – since the foreign workers program seems to be being used primarily to bring in workers for the oil and gas sector right now, do you think it would help if she offered to mount a production of “Grease”?

Disasters in BC

The auditor general has weighed in, and, surprise, surprise, we are not ready for an earthquake.

On the one hand, I’m not entirely sure that the auditor general completely understands disaster planning, and she hasn’t read Kenneth Myers and so doesn’t know that it can be counter-productive to produce plans for every single possibility.

On the other hand, I’m definitely with Vaughn Palmer in that we definitely need more public education.  We are seeing money diverted from disaster planning to other areas, despite a supposed five-fold increase in emergency budget.  In the past five years, the professional association has been defunded, training is very limited in local municipalities, and even recruitment and “thank you” events for volunteers have almost disappeared.  Emergency planning funds shouldn’t be used to pay for capital projects.

(And the province should have been prepared for an audit in this area, since they got a warning shot last year.)

So, once again, and even more importantly, I’d recommend you all get emergency training.  I’ve said it before, I keep saying it, and I will keep on saying it.

(Stephen Hume agrees with me, although he doesn’t know the half of it.)

Enhanced Nigerian scam – LinkedIn style

LinkedIn is a much better platform for Nigerian scammers: they now have my first and last name, information about me, etc. So they can craft the following letter (sent by this guy):

Hello Aviram Jenik,

I am Dr Sherif Akande, a citizen of Ghana, i work with Barclay’s Bank Ltd, Ghana. I have in my bank Existence of the Amount of money valued at $8.400,000.00, the big hurt Belongs to the customer, Peter B.Jenik, who Happen To Have The Same name as yours. The fund is now without any Claim Because, Peter B.Jenik, in a deadly earthquake in China in 2008. I want your cooperation so that bank will send you the fund as the beneficiary and located next of kin to the fund.

This transaction will be of a great mutual assistance to us. Send me your reply of interest so that i will give you the details. Strictly send it to my private email account {} or send me your email address to send you details of this transaction.

At the receipt of your reply, I will give you details of the transaction.I look forward to hear from you. I will send you a scan copy of the deposit certificate.

Send me an email to my private email account {}for more details of the transaction.

Best Regard’s
Dr Sherif Akande.
Here is my number +233548598269

CyberSec Tips: Follow the rules – and advice

A recent story (actually based on one from several years ago) has pointed out that, for years, the launch codes for nuclear missiles were all set to 00000000.  (Not quite true: a safety lock was set that way.)

Besides the thrill value of the headline, there is an important point buried in the story.  Security policies, rules, and procedures are usually developed for a reason.  In this case, given the importance of nuclear weapons, there is a very real risk from a disgruntled insider, or even simple error.  The safety lock was added to the system in order to reduce that risk.  And immediately circumvented by people who didn’t think it necessary.

I used to get asked, a lot, for help with malware infestations, by friends and family.  I don’t get asked much anymore.  I’ve given them simple advice on how to reduce the risk.  Some have taken that advice, and don’t get hit.  A large number of others don’t ask because they know I will ask if they’ve followed the advice, and they haven’t.

Security rules are usually developed for a reason, after a fair amount of thought.  This means you don’t have to know about security, you just have to follow the rules.  You may not know the reason, but the rules are actually there to keep you safe.  It’s a good idea to follow them.


(There is a second point to make here, addressed not to the general public but to the professional security crowd.  Put the thought in when you make the rules.  Don’t make stupid rules just for the sake of rules.  That encourages people to break the stupid rules.  And the necessity of breaking the stupid rules encourages people to break all the rules …)


BadBIOS

In recent days there has been much interest in the “BadBIOS” infection being reported by Dragos Ruiu.  (The best overview I’ve seen has been from Naked Security.)  But to someone who has lived through several viral myths and legends, parts of it sound strange.

  • It is said to infect the low-level system firmware of your computer, so it can’t be removed or disabled simply by rebooting.

These things, of course, have been around for a while, so that isn’t necessarily wrong.  However, BIOS infectors never became a major vector.

  • It is said to include components that work at the operating system level, so it affects the high-level operation of your computer, too.
  • It is said to be multi-platform, affecting at least Windows, OS X, and OpenBSD systems.

This sounds a bit odd, but we’ve had cross-platform stuff before.  But they never became major problems either.

  • It is said to prevent infected systems being booted from CD drives.

Possible: we’ve seen similar effects over the years, both intentionally and un.

  • It is said to spread itself to new victim computers using Software Defined Radio (SDR) program code, even with all wireless hardware removed.

OK, it’s dangerous to go out on a limb and say something can’t happen when you haven’t seen details, but I’m calling bullshit on this one.  Not that I think someone couldn’t create a communications channel without the hardware: anything the hardware guys can do the software guys can emulate, and vice versa.  However, I can’t see getting an infection channel this way, at least without some kind of minimal infection first.  (It is, of course, possible that the person doing the analysis may have made a mistake in what they observed, or in the reporting of it.)

  • It is said to spread itself to new victim computers using the speakers on an infected device to talk to the microphone on an uninfected one.

As above.
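
To be clear about what part of the claim is plausible: a speaker-to-microphone data channel really is straightforward to build in software alone.  A toy sketch (my own illustration, nothing to do with any actual BadBIOS internals; the tone frequencies, baud rate, and function names are all my assumptions) encodes bits as two near-ultrasonic tones and recovers them with the Goertzel algorithm:

```python
# A minimal sketch of an acoustic FSK channel: encode bits as two tones,
# decode by comparing signal power at each tone frequency (Goertzel).
# Illustrative only -- real acoustic channels also need framing, error
# correction, and tolerance for room noise and speaker/mic response.
import math

RATE = 44100           # samples per second
BAUD = 100             # bits per second (441 samples per bit)
F0, F1 = 18000, 19000  # near-ultrasonic tones for bit 0 and bit 1

def encode(bits):
    """Turn a bit string into raw audio samples, one tone burst per bit."""
    n = RATE // BAUD
    out = []
    for b in bits:
        f = F1 if b == "1" else F0
        out.extend(math.sin(2 * math.pi * f * i / RATE) for i in range(n))
    return out

def goertzel_power(samples, freq):
    """Signal power at a single frequency (standard Goertzel recurrence)."""
    coeff = 2 * math.cos(2 * math.pi * freq / RATE)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def decode(samples):
    """Recover the bit string: whichever tone carries more power wins."""
    n = RATE // BAUD
    bits = ""
    for i in range(0, len(samples) - n + 1, n):
        chunk = samples[i : i + n]
        bits += "1" if goertzel_power(chunk, F1) > goertzel_power(chunk, F0) else "0"
    return bits

if __name__ == "__main__":
    msg = "10110010"
    print(decode(encode(msg)) == msg)  # round trip through the "audio" samples
```

Feed the samples to a sound card instead of a list and you have the communications channel.  What this does *not* give you is an infection vector: the receiving machine has to already be running the decoder, which is exactly the “minimal infection first” objection above.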

  • It is said to infect simply by plugging in a USB key, with no other action required.

We’ve seen that before.

  • It is said to infect the firmware on USB sticks.

Well, a friend has built a device to blow off dangerous firmware on USB sticks, so I don’t see that this would present any problem.

  • It is said to render USB sticks unusable if they aren’t ejected cleanly; these sticks work properly again if inserted into an infected computer.

Reminds me somewhat of the old “fast infectors” of the early 90s.  They had unintended effects that actually made the infections easy to remove.

  • It is said to use TTF (font) files, apparently in large numbers, as a vector when spreading.

Don’t know details of the internals of TTF files, but they should certainly have enough space.

  • It is said to block access to Russian websites that deal with reflashing software.

Possible, and irrelevant unless we find out what is actually true.

  • It is said to render any hardware used in researching the threat useless for further testing.

Well, anything that gets reflashed is likely to become unreliable and untrustworthy …

  • It is said to have first been seen more than three years ago on a Macbook.

And it’s taken three years to get these details?  Or get a sample to competent researchers?  Or ask for help?  This I find most unbelievable.

In sum, then, I think this might be possible, but I strongly suspect that it is either a promotion for PacSec, or a promo for some presentation on social engineering.