Law

Law and legislation

Words to leak by …

The Department of Homeland Security has been forced to release a list of keywords and phrases it uses to monitor social networking sites and online media.  (Like this one?)

This wasn’t “smart.”  Obviously some “pork” barrel project dreamed up by the DHS “authorities” “team” (“Hail” to them!) who are now “sick”ly sorry they looked into “cloud” computing “response.”  They are going to learn more than they ever wanted to know about “exercise” fanatics going through the “drill.”

Hopefully this message won’t “spillover” and “crash” their “collapse”d parsing app, possibly “strain”ing a data “leak.”  You can probably “plot” the failures at the NSA as the terms “flood” in.  They should have asked us for “help,” or at least “aid.”

Excuse me; according to the time on my “watch,” I have to leave off working on this message, “wave” bye-bye, get some “gas” in the car, and then get a “Subway” for the “nuclear” family’s dinner.  Afterwards, we’re playing “Twister”!

(“Dedicated denial of service”?  Really?)

REVIEW: “Dark Market: CyberThieves, CyberCops, and You”, Misha Glenny

BKDRKMKT.RVW 20120201

“Dark Market: CyberThieves, CyberCops, and You”, Misha Glenny, 2011,
978-0-88784-239-9, C$29.95
%A   Misha Glenny
%C   Suite 801, 110 Spadina Ave, Toronto, ON Canada  M5V 2K4
%D   2011
%G   978-0-88784-239-9 0-88784-239-9
%I   House of Anansi Press Ltd.
%O   C$29.95 416-363-4343 fax 416-363-1017 www.anansi.ca
%O   http://www.amazon.com/exec/obidos/ASIN/0887842399/robsladesinterne
%O   http://www.amazon.co.uk/exec/obidos/ASIN/0887842399/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0887842399/robsladesin03-20
%O   Audience n Tech 1 Writing 2 (see revfaq.htm for explanation)
%P   296 p.
%T   “Dark Market: CyberThieves, CyberCops, and You”

There is no particular purpose stated for this book, other than the vague promise of the subtitle that this has something to do with bad guys and good guys in cyberspace.  In the prologue, Glenny admits that his “attempts to assess when an interviewee was lying, embellishing or fantasising and when an interviewee was earnestly telling the truth were only partially successful.”  Bear in mind that all good little blackhats know that, if you really want to get in, the easiest thing to attack is the person.  Social engineering (which is simply a fancy way of saying “lying”) is always the most effective tactic.

It’s hard to have confidence in the author’s assessment of security on the Internet when he knows so little of the technology.  A VPN (Virtual Private Network) is said to be a system whereby a group of computers share a single address.  That’s not a VPN (which is a system of network management, and possibly encryption): it’s a description of NAT (Network Address Translation).  True, a VPN can, and fairly often does, use NAT in its operations, but the carelessness is concerning.
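To make the distinction concrete, here is a minimal sketch of what the book actually described: NAT, where many inside machines hide behind one shared public address.  (The addresses and port numbers are invented for illustration; a VPN, by contrast, is about tunnelling and managing traffic, not about sharing an address.)

```python
# Minimal sketch of NAT: many inside hosts share one public address.
# Hypothetical addresses; this illustrates the behaviour the book
# describes, which is NAT rather than a VPN.
public_ip = "203.0.113.7"
nat_table = {}          # (inside_ip, inside_port) -> assigned public port
next_port = 40000

def translate_outbound(inside_ip, inside_port):
    """Rewrite an outgoing connection's source to the shared public address."""
    global next_port
    key = (inside_ip, inside_port)
    if key not in nat_table:        # first connection from this host/port
        nat_table[key] = next_port
        next_port += 1
    return public_ip, nat_table[key]

# Two different inside machines both appear to come from 203.0.113.7:
print(translate_outbound("192.168.1.10", 51515))  # ('203.0.113.7', 40000)
print(translate_outbound("192.168.1.11", 51515))  # ('203.0.113.7', 40001)
```

The router keeps the table so that replies to port 40000 can be sent back to the right inside machine; that table, not any tunnelling or encryption, is what makes a group of computers “share a single address.”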

This may seem to be pedantic, but it leads to other errors.  For example, Glenny asserts that running a VPN is very difficult, but that encryption is easy, since encryption software is available on the Internet.  While it is true that the software is available, that availability is only part of the battle.  As I keep pointing out to my students, for effective protection with encryption you need to agree on what key to use, and doing that negotiation is a non-trivial task.  Yes, there is asymmetric encryption, but that requires a public key infrastructure (PKI) which is an enormously difficult proposition to get right.  Of the two, I’d rather run a VPN any day.
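Key agreement really is the hard part.  As a rough illustration of the asymmetric approach, here is a toy Diffie-Hellman exchange.  This is a sketch only: the modulus is absurdly small, the parameters are invented, and a real deployment needs some way to authenticate the exchanged values, which is exactly where the PKI headache comes in.

```python
import secrets

# Toy Diffie-Hellman key agreement.  The group parameters are invented
# and far too small for real use; this only shows the mechanics.
p = 2**32 - 5   # a small prime standing in for a real 2048-bit group prime
g = 5           # generator

alice_secret = secrets.randbelow(p - 2) + 1   # Alice's private value
bob_secret = secrets.randbelow(p - 2) + 1     # Bob's private value

alice_public = pow(g, alice_secret, p)   # sent over the open network
bob_public = pow(g, bob_secret, p)       # sent over the open network

# Each side combines its own secret with the other's public value:
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)

assert alice_key == bob_key   # both sides now hold the same shared key
```

The catch: without some way of authenticating the public values, a man-in-the-middle can substitute his own, so the “easy” encryption still rests on the infrastructure problem.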

It is, therefore, not particularly surprising that the author finds that the best way to describe the capabilities of one group of carders was to compare them to the fictional “hacking” crew from “The Girl with the Dragon Tattoo.”  The activities in the novel are not impossible, but the ability to perform them on demand is highly unlikely.

This lack of background colours his ability to ascertain what is possible or not (in the technical areas), and what is likely (out of what he has been told).  Sticking strictly with media reports and indictment documents, Glenny does a good job, and those parts of the book are interesting and enjoyable.  The author does let his taste for mystery get the better of him: even the straight reportage parts of the book are often confusing in terms of who did what, and who actually is what.

Like Dan Verton (cf. BKHCKDRY.RVW) and Suelette Dreyfus (cf. BKNDRGND.RVW) before him, Glenny is trying to give us the “inside story” of the blackhat community.  He should have read Taylor’s “Hackers” (cf. BKHAKERS.RVW) first, to get a better idea of the territory.  He does a somewhat better job than Dreyfus and Verton did, since he is wise enough to seek out law enforcement accounts (possibly after reading Stiennon’s “Surviving Cyberwar,” cf. BKSRCYWR.RVW).

Overall, this work is a fairly reasonable updating of Levy’s “Hackers” (cf. BKHACKRS.RVW) of almost three decades ago.  The rise of the financial motivation and the specialization of modern fraudulent blackhat activity are well presented.  There is something of a holdover in still portraying these crooks as evil genii, but, in the main, it is a decent picture of reality, although it provides nothing new.

copyright, Robert M. Slade   2012    BKDRKMKT.RVW 20120201

C-30

C. S. Lewis wrote some pretty good sci-fi, some excellent kids books (which Disney managed to ruin), and my favourite satire on the commercialization of Christmas.  Most people, though, would know him as a writer on Christianity.  So I wonder if Stephen Harper and Vic Toews have ever read him.  One of the things he wrote was, “It would be better to live under robber barons than under omnipotent moral busybodies.”

Bill C-30 (sometimes known as the Investigating and Preventing Criminal Electronic Communications Act, sometimes known as the Protecting Children from Internet Predators Act, and sometimes just known as “the online spy bill”) is heading for Committee of the Whole.  This means that some aspects of it may change.  But it’ll have to change an awful lot before it becomes even remotely acceptable.

It’s got interesting provisions.  Apparently, as it stands, it doesn’t allow law enforcement to actually demand access to information without a warrant.  But it does allow them to request a “voluntary” disclosure of information.  Up until now, law enforcement could request voluntary disclosure, of course.  But then the ISP would refuse pretty much automatically, since providing that information would breach PIPEDA.  So now that automatic protection seems to be lost.

(Speaking of PIPEDA, there is this guy who is being tracked by who-knows-who.  The tracking is being done by an American company, so they can’t be forced by Canadian authorities to say who planted the bug.  But the data is being passed by a Canadian company, Kore Wireless.  And, one would think, they are in breach of PIPEDA, since they are passing personal information to a jurisdiction [the United States] which basically has no legal privacy protection at all.)

It doesn’t have to be law enforcement, either.  The Minister would have the right to authorize anyone his (or her) little heart desires to request the information.

Then there is good old Section 14, which allows the government to make ISPs install any kind of surveillance equipment the government wants, impose confidentiality on anything (like telling people they are being surveilled), or impose any other operational requirements they want.

Now, our Minister of Public Safety (doesn’t that name just make you feel all warm and 1984ish?), Vic Toews, has been promoting the heck out of the bill, even though he actually doesn’t know what it says or what’s in it.  He does know that if you oppose C-30 you are on the side of child pornographers.  This has led a large number of Canadians to cry out #DontToewsMeBro and to suggest that it might be best to #TellVicEverything.  Rick Mercer, Canada’s answer to Jon Stewart and famous for his “rants,” has weighed in on the matter.

As far as Toews and friends are concerned, the information that they are after, your IP address and connections, is just like a phone book.  Right.  Well, a few years back Google made their “phone book” available.  Given the huge volume of information, even though it was anonymized, researchers were able to aggregate information and determine locations, names, interests, political views, you name it.  Hey, Google themselves admit that they can tell how you’re feeling.
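A toy sketch of how aggregation defeats anonymization (all names, queries, and directory entries here are invented): even with the user reduced to an opaque ID, the queries themselves can be matched against public records.

```python
# "Anonymized" query log: the user is just an opaque ID.
query_log = [  # (anon_user_id, query)
    ("u1", "pizza near 100 Main St"),
    ("u1", "tax accountant Smallville"),
    ("u1", "jane doe reunion"),
]

# Public directory records: (name, address, town).  Invented data.
directory = [
    ("Jane Doe", "100 Main St", "Smallville"),
    ("John Roe", "9 Elm Ave", "Bigtown"),
]

def best_match(log, records):
    """Score each directory entry by how many queries mention its fields,
    then return the name with the highest score."""
    scores = {}
    for name, address, town in records:
        scores[name] = sum(
            any(field.lower() in query.lower() for field in (name, address, town))
            for _, query in log
        )
    return max(scores, key=scores.get)

print(best_match(query_log, directory))  # prints: Jane Doe
```

Three innocuous queries, one public directory, and the “anonymous” user has a name.  Scale that up to millions of log entries and the phone-book analogy falls apart.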

But, hey, maybe I’m biased.  Ask a lawyer.  Michael Geist knows about these things, and he’s concerned.  (Check out his notes on the new copyright bill, too.)

The thing is, it’s not going to do what the government says it’s going to do.  This will not automatically stop child pornography, or terrorism, or online fraudsters.  Hard-working, diligent law enforcement officers are going to do that.  There are a lot of those diligent law enforcement officers out there, and they are doing a sometimes amazing job.  And I’d like to help.  But providing this sort of unfiltered data dump for them isn’t going to help.  It’s going to hurt.  The really diligent ones are going to be crowded out by lazy yahoos who will want to waltz into ISP offices and demand data.  And then won’t be able to understand it.

How do I know this?  It’s simple.  Anyone who knows about the technology can tell you that this kind of access is 1) an invasion of privacy, and 2) not going to help.  But this government is going after it anyway.  In spite of the fact that the Minister responsible doesn’t know what is in the bill.  (Or so he says.)  Why is that?  Is it because they are wilfully evil?  (Oh, the temptation.)  Well, no.  These situations tend to be governed by Hanlon’s Razor which, somewhat modified, states that you should never attribute to malicious intent that which can be adequately explained by assuming pure, blind, pig-ignorant stupidity.

QED.

REVIEW: “Liars and Outliers: Enabling the Trust that Society Needs to Thrive”, Bruce Schneier

BKLRSOTL.RVW   20120104

“Liars and Outliers: Enabling the Trust that Society Needs to Thrive”,
Bruce Schneier, 2012, 978-1-118-14330-8, U$24.95/C$29.95
%A   Bruce Schneier www.Schneier.com
%C   5353 Dundas Street West, 4th Floor, Etobicoke, ON   M9B 6H8
%D   2012
%G   978-1-118-14330-8 1-118-14330-2
%I   John Wiley & Sons, Inc.
%O   U$24.95/C$29.95 416-236-4433 fax: 416-236-4448 www.wiley.com
%O   http://www.amazon.com/exec/obidos/ASIN/1118143302/robsladesinterne
%O   http://www.amazon.co.uk/exec/obidos/ASIN/1118143302/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/1118143302/robsladesin03-20
%O   Audience n+ Tech 2 Writing 3 (see revfaq.htm for explanation)
%P   365 p.
%T   “Liars and Outliers: Enabling the Trust that Society Needs to
Thrive”

Chapter one is what would ordinarily constitute an introduction or preface to the book.  Schneier states that the book is about trust: the trust that we need to operate as a society.  In these terms, trust is the confidence we can have that other people will reliably behave in certain ways, and not in others.  In any group, there is a desire to have people cooperate and act in the interest of all the members of the group.  In all individuals, there is a possibility that they will defect and act against the interests of the group, either for their own competing interest, or simply in opposition to the group.  (The author notes that defection is not always negative: positive social change is generally driven by defectors.)  Actually, the text may be more about social engineering, because Schneier does a very comprehensive job of exploring how confident we can be about trust, and the ways we can increase (and sometimes inadvertently decrease) that reliability.

Part I explores the background of trust, in both the hard and soft sciences.  Chapter two looks at biology and game theory for the basics.  Chapter three will be familiar to those who have studied sociobiology, or other evolutionary perspectives on behaviour.  A historical view of sociology and scaling makes up chapter four.  Chapter five returns to game theory to examine conflict and societal dilemmas.

Schneier says that part II develops a model of trust.  This may not be evident at a cursory reading: the model consists of moral pressures, reputational pressures, institutional pressures, and security systems, and the author is very careful to explain each part in chapters seven through ten: so careful that it is sometimes hard to follow the structure of the arguments.

Part III applies the model to the real world, examining competing interests, organizations, corporations, and institutions.  The relative utility of the four parts of the model is analyzed in respect to different scales (sizes and complexities) of society.  The author also notes, in a number of places, that distrust, and therefore excessive institutional pressures or security systems, is very expensive for individuals and society as a whole.

Part IV reviews the ways societal pressures fail, with particular emphasis on technology, and information technology.  Schneier discusses situations where carelessly chosen institutional pressures can create the opposite of the effect intended.

The author lists, and proposes, a number of additional models.  There are Ostrom’s rules for managing commons (a model for self-regulating societies), Dunbar’s numbers, and other existing structures.  But Schneier has also created a categorization of reasons for defection, a new set of security control types, a set of principles for designing effective societal pressures, and an array of the relation between these control types and his trust model.  Not all of them are perfect.  His list of control types has gaps and ambiguities (but then, so does the existing military/governmental catalogue).  In his figure of the feedback loops in societal pressures, it is difficult to find a distinction between “side effects” and “unintended consequences.”  However, despite minor problems, all of these paradigms can be useful in reviewing both the human factors in security systems, and in public policy.

Schneier writes as well as he always does, and his research is extensive.  In part one, possibly too extensive.  A great many studies and results are mentioned, but few are examined in any depth.  This does not help the central thrust of the book.  After all, eventually Schneier wants to talk about the technology of trust, what works, and what doesn’t.  In laying the basic foundation, the question of the far historical origin of altruism may be of academic philosophical interest, but that does not necessarily translate into an understanding of current moral mechanisms.  It may be that God intended us to be altruistic, and therefore gave us an ethical code to shape our behaviour.  Or, it may be that random mutation produced entities that acted altruistically and more of them survived than did others, so the population created expectations and laws to encourage that behaviour, and God to explain and enforce it.  But trying to explore which of those (and many other variant) options might be right only muddies the understanding of what options actually help us form a secure society today.

Schneier has, as with “Beyond Fear” (cf. BKBYNDFR.RVW) and “Secrets and Lies” (cf. BKSECLIE.RVW), not only made a useful addition to the security literature, but created something of value to those involved with public policy, and a fascinating philosophical tome for the general public.  Security professionals can use a number of the models to assess controls in security systems, with a view to what will work, what won’t (and what areas are just too expensive to protect).  Public policy will benefit from examination of which formal structures are likely to have a desired effect.  (As I am finishing this review the debate over SOPA and PIPA is going on: measures unlikely to protect intellectual property in any meaningful way, and guaranteed to have enormous adverse effects.)  And Schneier has brought together a wealth of ideas and research in the fields of trust and society, with his usual clarity and readability.

copyright, Robert M. Slade   2011     BKLRSOTL.RVW   20120104

Publish and/or perish

A new study notes that “scholarly” academic journals are forcing the people who want to publish in them (the journals) to add useless citations to the published articles.  OK, this may sound like more academic infighting.  (Q: Why are academic fights so bitter? A: Because the stakes are so small.)  But it actually has some fairly important implications.  These journals are, in many eyes, the elite of the publishing world.  These articles are peer-reviewed, which means they are tested by other experts before they are even published.  Therefore, many assume that if you see it in one of these journals, it’s so.

(The system isn’t perfect.  Ralph Merkle couldn’t get his paper on asymmetric encryption published because a reviewer felt it “wasn’t interesting.”  The greatest advance in crypto in 4,000 years and it wasn’t interesting?)

These are, of course, the same journals that are lobbying to have their monopoly business protected by the “Research Works Act,” among other things.  (The “Research Works Act” is a whole different kettle of anti-[open access|public domain|open source] intellectual property irrationality.)

I was, initially, a bit surprised by the study on forced citations.  After all, these are, supposedly, the guardians of truth.  Yes, OK, that’s naive.  I’ve published in magazines myself.  Not the refereed journals, perhaps: I’m not important enough for that.  But I’ve been asked for articles by many periodicals.  They’ve had all kinds of demands.  The one that I find most consistently annoying is that I provide graphics and images.  I’m a researcher, not a designer: I don’t do graphics.  But, I recall one time that I was asked to do an article on a subject dear to my heart.  Because I felt strongly about it, I put a lot of work into it.  I was even willing to give them some graphics.  And, in the end, they rejected it.

Not enough quotes from vendors.

This is, of course, the same motivation as the forced citations.  In any periodical, you make money by selling advertising.  In trade rags, the ease of selling advertising to vendors is determined by how much space you’ve given them in the supposed editorial content.  In the academic journals, the advertising rates are determined by the number of citations to articles you’ve previously published.  Hence, in both cases, the companies with the advertising budgets get to determine what actually gets published.

(As long as we’re here, I have one more story, somewhat loosely related to publishing, citation, open access, and intellectual property.  On another occasion, I was asked to do a major article cluster on the history of computer viruses.  This topic is very dear to my heart, and I put in lots of time, lots of work, and even lots of graphics.  This group of articles got turned down as well.  The reason given in that case was that they had used a Web-based plagiarism detector on the stuff, and found that it was probably based on materials already on the net.  Well, of course it was.  I wrote most of the stuff on that topic that is already on the Web …)

Corporate social media rules

An item for discussion:

I’ve seen this stuff in some recent reports of lawsuits.  First people started using social media, for social things.  Then corps decided that socmed was a great way to spam people without being accused of spamming.  Then corps suddenly realized, to their horror, that, on socmed, people can talk back.  And maybe alert other people to the fact that you a) don’t fulfill on your promises, b) make lousy products, c) provide lousy service, and d) so on.

Gloria ran into this today and asked me about the legalities of it.  I imagine that it has all the legality of any waiver: you can’t sign away your rights, and a waiver has slightly less value than the paper it’s printed on (or, slightly more, if a fraudster can copy your signature off it  [Sorry, I’m a professional paranoid.  My brain just works that way.]).

Anyway, what she ran into today (a Facebook page that was offering to let you in on a draw if you “liked” them) (don’t worry, we’ve already discussed the security problems of “likes”):

“We’re honoured that you’re a fan of [us], and we look forward to hearing what you have to say. To ensure a positive online experience for the entire community, we may monitor and remove certain postings. “Be kind and have fun” is the short version of our rules. What follows is the longer version of rules for posts, communications and general behaviour on [our] Facebook page:”

[fairly standard “we’re nice people” marketing type bumpf – rms]

“The following should not be posted on [our] Facebook pages:”

Now, some of this is good:
“Unauthorized commercial communications (such as spam)
“Content meant to bully, intimidate or harass any user
“Content that is hateful, threatening, discriminatory, pornographic, or that
contains nudity or graphic or gratuitous violence
“Content that infringes or violates someone else’s rights or otherwise violates the law
“Personal, sensitive or financial information on this page (this includes but is not limited to email addresses, phone numbers, etc.)
“Unlawful or misleading posts”

Some of it is protecting their “brand”:
“Competitor material such as pictures, videos, or site links”

Some has to do with the fact that they are a franchise operation:
“Links to personal [agent] websites, or invitations from [agents] to connect with them privately”

But some of it limits freedom of expression:
“Unconstructive, negative or derogatory comments
“Repeat postings of unconstructive comments/statements”

And, of course, the kicker:
“[We] reserves the right to remove any postings deemed to be inappropriate or in violation of these rules.”

Now, it’s probably the case that they do have the right to manipulate the content on their site/page any way they want to.  But, how far can these “rules” go?

First big break-in of the year

Richard Stiennon writes:

I have only one security related prediction for 2012 and that is that we are in for a year that will make 2011 look tame in terms of major targeted attacks.

He gives the 2011 examples of the break-in to the Sony PlayStation Network and an attack on Stratfor (a defense intelligence organization).  Here’s one from yesterday: a Saudi attacker published the details of credit cards (and other personal information, such as ID numbers and addresses) for hundreds of thousands of Israelis.

Going to be a fun year!

The political risks of a DDoS

In Korea, the ruling party performed a DDoS attack, and as a result the chairman and most of its officials will resign.  Most likely, it will be disbanded completely.

This is probably the most severe result of a cyber attack yet.  Of course, the only reason they know who to blame is that the guy responsible for the attack admitted guilt.  DDoS is all fun and games until the guy you hired to do it spills the beans.