Words to leak by …

The Department of Homeland Security has been forced to release a list of keywords and phrases it uses to monitor social networking sites and online media.  (Like this one?)

This wasn’t “smart.”  Obviously some “pork” barrel project dreamed up by the DHS “authorities” “team” (“Hail” to them!) who are now “sick”ly sorry they looked into “cloud” computing “response.”  They are going to learn more than they ever wanted to know about “exercise” fanatics going through the “drill.”

Hopefully this message won’t “spillover” and “crash” their “collapse”d parsing app, possibly “strain”ing a data “leak.”  You can probably “plot” the failures at the NSA as the terms “flood” in.  They should have asked us for “help,” or at least “aid.”

Excuse me; according to the time on my “watch,” I have to leave off working on this message, “wave” bye-bye, and get some “gas” in the car, and then get a “Subway” for the “nuclear” family’s dinner.  Afterwards, we’re playing “Twister”!

(“Dedicated denial of service”?  Really?)


REVIEW: “Dark Market: CyberThieves, CyberCops, and You”, Misha Glenny

BKDRKMKT.RVW 20120201

“Dark Market: CyberThieves, CyberCops, and You”, Misha Glenny, 2011,
978-0-88784-239-9, C$29.95
%A   Misha Glenny
%C   Suite 801, 110 Spadina Ave, Toronto, ON Canada  M5V 2K4
%D   2011
%G   978-0-88784-239-9 0-88784-239-9
%I   House of Anansi Press Ltd.
%O   C$29.95 416-363-4343 fax 416-363-1017 www.anansi.ca
%O  http://www.amazon.com/exec/obidos/ASIN/0887842399/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/0887842399/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0887842399/robsladesin03-20
%O   Audience n Tech 1 Writing 2 (see revfaq.htm for explanation)
%P   296 p.
%T   “Dark Market: CyberThieves, CyberCops, and You”

There is no particular purpose stated for this book, other than the vague promise of the subtitle that this has something to do with bad guys and good guys in cyberspace.  In the prologue, Glenny admits that his “attempts to assess when an interviewee was lying, embellishing or fantasising and when an interviewee was earnestly telling the truth were only partially successful.”  Bear in mind that all good little blackhats know that, if you really want to get in, the easiest thing to attack is the person.  Social engineering (which is simply a fancy way of saying “lying”) is always the most effective tactic.

It’s hard to have confidence in the author’s assessment of security on the Internet when he knows so little of the technology.  A VPN (Virtual Private Network) is said to be a system whereby a group of computers share a single address.  That’s not a VPN (which is a system of network management, and possibly encryption): it’s a description of NAT (Network Address Translation).  True, a VPN can, and fairly often does, use NAT in its operations, but the carelessness is concerning.
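To make the distinction concrete, here is a minimal sketch (in Python, with invented addresses) of what NAT actually does: many inside machines share one outside address, distinguished by translated ports.  A VPN, by contrast, is about managed (and usually encrypted) tunnels, not address sharing.

```python
# Minimal sketch of NAT-style port mapping (not a VPN): many private
# addresses share one public address, distinguished by translated ports.
# All names and addresses here are invented for illustration.

class NatTable:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.mappings = {}   # (private_ip, private_port) -> public_port

    def outbound(self, private_ip, private_port):
        """Translate an inside (ip, port) pair to the shared public address."""
        key = (private_ip, private_port)
        if key not in self.mappings:
            self.mappings[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.mappings[key])

nat = NatTable("203.0.113.7")
print(nat.outbound("192.168.1.10", 5000))  # ('203.0.113.7', 40000)
print(nat.outbound("192.168.1.11", 5000))  # ('203.0.113.7', 40001)
```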

This may seem to be pedantic, but it leads to other errors.  For example, Glenny asserts that running a VPN is very difficult, but that encryption is easy, since encryption software is available on the Internet.  While it is true that the software is available, that availability is only part of the battle.  As I keep pointing out to my students, for effective protection with encryption you need to agree on what key to use, and doing that negotiation is a non-trivial task.  Yes, there is asymmetric encryption, but that requires a public key infrastructure (PKI) which is an enormously difficult proposition to get right.  Of the two, I’d rather run a VPN any day.
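As a toy illustration of why key agreement is the interesting part, here is a Diffie-Hellman exchange with deliberately tiny, insecure parameters.  Real deployments use large primes and, crucially, authenticated exchanges, which is exactly where the non-trivial (PKI) work comes in.

```python
# Toy Diffie-Hellman key agreement with tiny, insecure parameters,
# purely to show that two parties can negotiate a shared key without
# ever transmitting it.  An unauthenticated exchange like this one is
# trivially attacked by a man in the middle -- hence the PKI problem.

p, g = 23, 5          # public (toy) prime modulus and generator

a_secret = 6          # Alice's private value, never sent
b_secret = 15         # Bob's private value, never sent

A = pow(g, a_secret, p)   # Alice sends A over the open channel
B = pow(g, b_secret, p)   # Bob sends B over the open channel

alice_key = pow(B, a_secret, p)
bob_key   = pow(A, b_secret, p)

assert alice_key == bob_key   # both arrive at the same shared secret
print(alice_key)              # 2
```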

It is, therefore, not particularly surprising that the author finds that the best way to describe the capabilities of one group of carders was to compare them to the fictional “hacking” crew from “The Girl with the Dragon Tattoo.”  The activities in the novel are not impossible, but the ability to perform them on demand is highly
unlikely.

This lack of background colours his ability to ascertain what is possible or not (in the technical areas), and what is likely (out of what he has been told).  Sticking strictly with media reports and indictment documents, Glenny does a good job, and those parts of the book are interesting and enjoyable.  The author does let his taste for mystery get the better of him: even the straight reportage parts of the book are often confusing in terms of who did what, and who actually is what.

Like Dan Verton (cf. BKHCKDRY.RVW) and Suelette Dreyfus (cf. BKNDRGND.RVW) before him, Glenny is trying to give us the “inside story” of the blackhat community.  He should have read Taylor’s “Hackers” (cf. BKHAKERS.RVW) first, to get a better idea of the territory.  He does a somewhat better job than Dreyfus and Verton did, since he is wise enough to seek out law enforcement accounts (possibly after reading Stiennon’s “Surviving Cyberwar,” cf. BKSRCYWR.RVW).

Overall, this work is a fairly reasonable updating of Levy’s “Hackers” (cf. BKHACKRS.RVW) of almost three decades ago.  The rise of the financial motivation and the specialization of modern fraudulent blackhat activity are well presented.  There is something of a holdover in still portraying these crooks as evil genii, but, in the main, it is a decent picture of reality, although it provides nothing new.

copyright, Robert M. Slade   2012    BKDRKMKT.RVW 20120201


C-30

C. S. Lewis wrote some pretty good sci-fi, some excellent kids books (which Disney managed to ruin), and my favourite satire on the commercialization of Christmas.  Most people, though, would know him as a writer on Christianity.  So I wonder if Stephen Harper and Vic Toews have ever read him.  One of the things he wrote was, “It would be better to live under robber barons than under omnipotent moral busybodies.”

Bill C-30 (sometimes known as the Investigating and Preventing Criminal Electronic Communications Act, sometimes known as the Protecting Children from Internet Predators Act, and sometimes just known as “the online spy bill”) is heading for Committee of the Whole.  This means that some aspects of it may change.  But it’ll have to change an awful lot before it becomes even remotely acceptable.

It’s got interesting provisions.  Apparently, as it stands, it doesn’t allow law enforcement to actually demand access to information without a warrant.  But it allows them to request a “voluntary” disclosure of information.  Up until now, law enforcement could request voluntary disclosure, of course.  But then the ISP would refuse pretty much automatically, since to provide that information would breach PIPEDA.  So now that automatic protection seems to be lost.

(Speaking of PIPEDA, there is this guy who is being tracked by who-knows-who.  The tracking is being done by an American company, so they can’t be forced by Canadian authorities to say who planted the bug.  But the data is being passed by a Canadian company, Kore Wireless.  And, one would think, they are in breach of PIPEDA, since they are passing personal information to a jurisdiction [the United States] which basically has no legal privacy protection at all.)

It doesn’t have to be law enforcement, either.  The Minister would have the right to authorize anyone his (or her) little heart desires to request the information.

Then there is good old Section 14, which allows the government to make ISPs install any kind of surveillance equipment the government wants, impose confidentiality on anything (like telling people they are being surveilled), or impose any other operational requirements they want.

Now, our Minister of Public Safety (doesn’t that name just make you feel all warm and 1984ish?), Vic Toews, has been promoting the heck out of the bill, even though he actually doesn’t know what it says or what’s in it.  He does know that if you oppose C-30 you are on the side of child pornographers.  This has led a large number of Canadians to cry out #DontToewsMeBro and to suggest that it might be best to #TellVicEverything.  Rick Mercer, Canada’s answer to Jon Stewart and famous for his “rants,” has weighed in on the matter.

As far as Toews and friends are concerned, the information that they are after, your IP address and connections, are just like a phone book.  Right.  Well, a few years back Google made their “phone book” available.  Given the huge volume of information, even though it was anonymized, researchers were able to aggregate information, and determine locations, names, interests, political views, you name it.  Hey, Google themselves admit that they can tell how you’re feeling.
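The aggregation problem can be sketched in a few lines: neither dataset below (both entirely fabricated) contains a name next to the sensitive field, but joining on shared quasi-identifiers links them anyway.

```python
# Sketch of re-identification by joining on quasi-identifiers.  Both
# datasets are fabricated.  Neither pairs a name with the sensitive
# field, but ZIP code plus birth year links them -- the classic
# aggregation problem with "anonymized" data.

anonymized_queries = [
    {"zip": "90210", "birth_year": 1970, "search": "divorce lawyer"},
    {"zip": "10001", "birth_year": 1985, "search": "knitting patterns"},
]

public_records = [
    {"name": "A. Smith", "zip": "90210", "birth_year": 1970},
    {"name": "B. Jones", "zip": "10001", "birth_year": 1985},
]

def reidentify(queries, records):
    """Link 'anonymous' rows to names via shared quasi-identifiers."""
    linked = []
    for q in queries:
        for r in records:
            if (q["zip"], q["birth_year"]) == (r["zip"], r["birth_year"]):
                linked.append((r["name"], q["search"]))
    return linked

print(reidentify(anonymized_queries, public_records))
```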

But, hey, maybe I’m biased.  Ask a lawyer.  Michael Geist knows about these things, and he’s concerned.  (Check out his notes on the new copyright bill, too.)

The thing is, it’s not going to do what the government says it’s going to do.  This will not automatically stop child pornography, or terrorism, or online fraudsters.  Hard working, diligent law enforcement officers are going to do that.  There are a lot of those diligent law enforcement officers out there, and they are doing a sometimes amazing job.  And I’d like to help.  But providing this sort of unfiltered data dump for them isn’t going to help.  It’s going to hurt.  The really diligent ones are going to be crowded out by lazy yahoos who will want to waltz into ISP offices and demand data.  And then won’t be able to understand it.

How do I know this?  It’s simple.  Anyone who knows about the technology can tell you that this kind of access is 1) an invasion of privacy, and 2) not going to help.  But this government is going after it anyway.  In spite of the fact that the Minister responsible doesn’t know what is in the bill.  (Or so he says.)  Why is that?  Is it because they are wilfully evil?  (Oh, the temptation.)  Well, no.  These situations tend to be governed by Hanlon’s Razor which, somewhat modified, states that you should never attribute to malicious intent that which can be adequately explained by assuming pure, blind, pig-ignorant stupidity.

QED.


REVIEW: “Liars and Outliers: Enabling the Trust that Society Needs to Thrive”, Bruce Schneier

BKLRSOTL.RVW   20120104

“Liars and Outliers: Enabling the Trust that Society Needs to Thrive”,
Bruce Schneier, 2012, 978-1-118-14330-8, U$24.95/C$29.95
%A   Bruce Schneier www.Schneier.com
%C   5353 Dundas Street West, 4th Floor, Etobicoke, ON   M9B 6H8
%D   2012
%G   978-1-118-14330-8 1-118-14330-2
%I   John Wiley & Sons, Inc.
%O   U$24.95/C$29.95 416-236-4433 fax: 416-236-4448 www.wiley.com
%O  http://www.amazon.com/exec/obidos/ASIN/1118143302/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/1118143302/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/1118143302/robsladesin03-20
%O   Audience n+ Tech 2 Writing 3 (see revfaq.htm for explanation)
%P   365 p.
%T   “Liars and Outliers: Enabling the Trust that Society Needs to
Thrive”

Chapter one is what would ordinarily constitute an introduction or preface to the book.  Schneier states that the book is about trust: the trust that we need to operate as a society.  In these terms, trust is the confidence we can have that other people will reliably behave in certain ways, and not in others.  In any group, there is a desire to have people cooperate and act in the interest of all the members of the group.  In all individuals, there is a possibility that they will defect and act against the interests of the group, either for their own competing interest, or simply in opposition to the group.  (The author notes that defection is not always negative: positive social change is generally driven by defectors.)  Actually, the text may be more about social engineering, because Schneier does a very comprehensive job of exploring how confident we can be about trust, and the ways we can increase (and sometimes inadvertently decrease) that reliability.

Part I explores the background of trust, in both the hard and soft sciences.  Chapter two looks at biology and game theory for the basics.  Chapter three will be familiar to those who have studied sociobiology, or other evolutionary perspectives on behaviour.  A historical view of sociology and scaling makes up chapter four.  Chapter five returns to game theory to examine conflict and societal dilemmas.

Schneier says that part II develops a model of trust.  This may not be evident at a cursory reading: the model consists of moral pressures, reputational pressures, institutional pressures, and security systems, and the author is very careful to explain each part in chapters seven through ten: so careful that it is sometimes hard to follow the structure of the arguments.

Part III applies the model to the real world, examining competing interests, organizations, corporations, and institutions.  The relative utility of the four parts of the model is analyzed in respect to different scales (sizes and complexities) of society.  The author also notes, in a number of places, that distrust, and therefore excessive institutional pressures or security systems, is very expensive for individuals and society as a whole.

Part IV reviews the ways societal pressures fail, with particular emphasis on technology, and information technology.  Schneier discusses situations where carelessly chosen institutional pressures can create the opposite of the effect intended.

The author lists, and proposes, a number of additional models.  There are Ostrom’s rules for managing commons (a model for self-regulating societies), Dunbar’s numbers, and other existing structures.  But Schneier has also created a categorization of reasons for defection, a new set of security control types, a set of principles for designing effective societal pressures, and an array of the relation between these control types and his trust model.  Not all of them are perfect.  His list of control types has gaps and ambiguities (but then, so does the existing military/governmental catalogue).  In his figure of the feedback loops in societal pressures, it is difficult to find a distinction between “side effects” and “unintended consequences.”  However, despite minor problems, all of these paradigms can be useful in reviewing both the human factors in security systems, and in public policy.

Schneier writes as well as he always does, and his research is extensive.  In part one, possibly too extensive.  A great many studies and results are mentioned, but few are examined in any depth.  This does not help the central thrust of the book.  After all, eventually Schneier wants to talk about the technology of trust, what works, and what doesn’t.  In laying the basic foundation, the question of the far historical origin of altruism may be of academic philosophical interest, but that does not necessarily translate into an
understanding of current moral mechanisms.  It may be that God intended us to be altruistic, and therefore gave us an ethical code to shape our behaviour.  Or, it may be that random mutation produced entities that acted altruistically and more of them survived than did others, so the population created expectations and laws to encourage that behaviour, and God to explain and enforce it.  But trying to explore which of those (and many other variant) options might be right only muddies the understanding of what options actually help us form a secure society today.

Schneier has, as with “Beyond Fear” (cf. BKBYNDFR.RVW) and “Secrets and Lies” (cf. BKSECLIE.RVW), not only made a useful addition to the security literature, but created something of value to those involved with public policy, and a fascinating philosophical tome for the general public.  Security professionals can use a number of the models to assess controls in security systems, with a view to what will work, what won’t (and what areas are just too expensive to protect).  Public policy will benefit from examination of which formal structures are likely to have a desired effect.  (As I am finishing this review the debate over SOPA and PIPA is going on: measures unlikely to protect intellectual property in any meaningful way, and guaranteed to have enormous adverse effects.)  And Schneier has brought together a wealth of ideas and research in the fields of trust and society, with his usual clarity and readability.

copyright, Robert M. Slade   2011     BKLRSOTL.RVW   20120104


Publish and/or perish

A new study notes that “scholarly” academic journals are forcing the people who want to publish in them (the journals) to add useless citations to the published articles.  OK, this may sound like more academic infighting.  (Q: Why are academic fights so bitter? A: Because the stakes are so small.)  But it actually has some fairly important implications.  These journals are, in many eyes, the elite of the publishing world.  These articles are peer-reviewed, which means they are tested by other experts before they are even published.  Therefore, many assume that if you see it in one of these journals, it’s so.

(The system isn’t perfect.  Ralph Merkle couldn’t get his paper on asymmetric encryption published because a reviewer felt it “wasn’t interesting.”  The greatest advance in crypto in 4,000 years, and it wasn’t interesting?)

These are, of course, the same journals that are lobbying to have their monopoly business protected by the “Research Works Act,” among other things.  (The “Research Works Act” is a whole different kettle of anti-[open access|public domain|open source] intellectual property irrationality.)

I was, initially, a bit surprised by the study on forced citations.  After all, these are, supposedly, the guardians of truth.  Yes, OK, that’s naive.  I’ve published in magazines myself.  Not the refereed journals, perhaps: I’m not important enough for that.  But I’ve been asked for articles by many periodicals.  They’ve had all kinds of demands.  The one that I find most consistently annoying is that I provide graphics and images.  I’m a researcher, not a designer: I don’t do graphics.  But, I recall one time that I was asked to do an article on a subject dear to my heart.  Because I felt strongly about it, I put a lot of work into it.  I was even willing to give them some graphics.  And, in the end, they rejected it.

Not enough quotes from vendors.

This is, of course, the same motivation as the forced citations.  In any periodical, you make money by selling advertising.  In trade rags, the ease of selling advertising to vendors is determined by how much space you’ve given them in the supposed editorial content.  In the academic journals, the advertising rates are determined by the number of citations to articles you’ve previously published.  Hence, in both cases, the companies with the advertising budgets get to determine what actually gets published.

(As long as we’re here, I have one more story, somewhat loosely related to publishing, citation, open access, and intellectual property.  On another occasion, I was asked to do a major article cluster on the history of computer viruses.  This topic is very dear to my heart, and I put in lots of time, lots of work, and even lots of graphics.  This group of articles got turned down as well.  The reason given in that case was that they had used a Web-based plagiarism detector on the stuff, and found that it was probably based on materials already on the net.  Well, of course it was.  I wrote most of the stuff on that topic that is already on the Web …)


Corporate social media rules

An item for discussion:

I’ve seen this stuff in some recent reports of lawsuits.  First people started using social media, for social things.  Then corps decided that socmed was a great way to spam people without being accused of spamming.  Then corps suddenly realized, to their horror, that, on socmed, people can talk back.  And maybe alert other people to the fact that you a) don’t deliver on your promises, b) make lousy products, c) provide lousy service, and d) so on.

Gloria ran into this today and asked me about the legalities of it.  I imagine that it has all the legality of any waiver: you can’t sign away your rights, and a waiver has slightly less value than the paper it’s printed on (or, slightly more, if a fraudster can copy your signature off it  [Sorry, I'm a professional paranoid.  My brain just works that way.]).

Anyway, what she ran into today (a Facebook page that was offering to let you in on a draw if you “liked” them) (don’t worry, we’ve already discussed the security problems of “likes”):

“We’re honoured that you’re a fan of [us], and we look forward to hearing what you have to say. To ensure a positive online experience for the entire community, we may monitor and remove certain postings. “Be kind and have fun” is the short version of our rules. What follows is the longer version of rules for posts, communications and general behaviour on [our] Facebook page:”

[fairly standard "we're nice people" marketing type bumpf - rms]

“The following should not be posted on [our] Facebook pages:”

Now, some of this is good:
“Unauthorized commercial communications (such as spam)
“Content meant to bully, intimidate or harass any user
“Content that is hateful, threatening, discriminatory, pornographic, or that
contains nudity or graphic or gratuitous violence
“Content that infringes or violates someone else’s rights or otherwise violates the law
“Personal, sensitive or financial information on this page (this includes but is not limited to email addresses, phone numbers, etc.)
“Unlawful or misleading posts”

Some of it is protecting their “brand”:
“Competitor material such as pictures, videos, or site links”

Some has to do with the fact that they are a franchise operation:
“Links to personal [agent] websites, or invitations from [agents] to connect with them privately”

But some of it limits freedom of expression:
“Unconstructive, negative or derogatory comments
“Repeat postings of unconstructive comments/statements”

And, of course, the kicker:
“[We] reserves the right to remove any postings deemed to be inappropriate or in violation of these rules.”

Now, it’s probably the case that they do have the right to manipulate the content on their site/page any way they want to.  But, how far can these “rules” go?


First big break-in of the year

Richard Stiennon writes:

I have only one security related prediction for 2012 and that is that we are in for a year that will make 2011 look tame in terms of major targeted attacks.

He gives the 2011 examples of the break-in to the Sony PlayStation Network and an attack on Stratfor (a defense intelligence organization).  Here’s one from yesterday: a Saudi attacker published the details of credit cards (and other personal information, such as ID numbers and addresses) for hundreds of thousands of Israelis.

Going to be a fun year!


The political risks of a DDoS

In Korea, the ruling party performed a DDoS attack, and as a result the chairman and most of its officials will resign.  Most likely, it will be disbanded completely.
This is probably the most severe result of a cyber attack yet.  Of course, the only reason they know who to blame is that the guy responsible for the attack admitted guilt.  DDoS is all fun and games until the guy you hired to do it spills the beans.


Fake Online Reviews

We’ve had means of expressing our opinions on various things for a long time.  Amazon has had reviews of the books pretty much since the beginning.  But how do we know that the reviews are real?  Virus writers took the opportunity presented by Amazon to trash my books when they were published.  (Even though they used different names, it only took a very simple form of forensic linguistics to figure out the identities.)

More recently, review spam has become more important, since many people are relying on the online reviews when buying items or booking services.  A number of “companies” have determined that it is more cost effective to have bots or other entities flood the review systems with fake positive reviews than it is to make quality products or services.  So, some nice people from Cornell University produced and tested some software to determine the fakes.

Note that, from these slides, there is not a lot of detail about exactly how they determine the fakes.  However, there is enough to indicate that sophisticated algorithms are less accurate than some fairly simple metrics.  When I teach about software forensics (aspects of which are similar to forensic linguistics, or stylistic forensics), this seems counterintuitive and surprises a lot of students.  Generally they object that, if you know about the metrics, you should be able to avoid them.  In practice, this doesn’t seem to be the case.  Simple metrics do seem to be very effective in both forensic linguistics, and in software forensics.
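For a feel of just how simple such metrics can be, here is a sketch (with invented sample texts) computing a few classic stylometric measures: average word length, vocabulary richness, and function-word frequency.

```python
# Sketch of the kind of simple metrics that work surprisingly well in
# forensic linguistics.  The sample texts are invented; real analysis
# uses far larger samples and many more features.

def simple_metrics(text):
    words = text.lower().split()
    function_words = {"the", "a", "of", "and", "to", "in"}
    return {
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "vocab_richness": len(set(words)) / len(words),
        "function_word_rate": sum(w in function_words for w in words) / len(words),
    }

sample_a = "the quick brown fox jumps over the lazy dog"
sample_b = "a slow green turtle walks under a sleepy cat"

print(simple_metrics(sample_a))
print(simple_metrics(sample_b))
```

The point the students miss is that an author trying to dodge these measures must consciously control dozens of unconscious habits at once, which turns out to be very hard to do consistently.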


REVIEW: “Inside Cyber Warfare”, Jeffrey Carr

BKCYWRFR.RVW   20101204

“Inside Cyber Warfare”, Jeffrey Carr, 2010, 978-0-596-80215-8,
U$39.99/C$49.99
%A   Jeffrey Carr greylogic.us
%C   103 Morris Street, Suite A, Sebastopol, CA   95472
%D   2010
%G   978-0-596-80215-8 0-596-80215-3
%I   O’Reilly & Associates, Inc.
%O   U$39.99/C$49.99 800-998-9938 fax: 707-829-0104 nuts@ora.com
%O  http://www.amazon.com/exec/obidos/ASIN/0596802153/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/0596802153/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0596802153/robsladesin03-20
%O   Audience n Tech 1 Writing 2 (see revfaq.htm for explanation)
%P   212 p.
%T   “Inside Cyber Warfare: Mapping the Cyber Underworld”

The preface states that this text is an attempt to cover the very broad topic of cyber warfare with enough depth to be interesting without being technically challenging for the reader.

Chapter one provides examples of cyber attacks (mostly DDoS [Distributed Denial of Service]), and speculations about future offensives.  More detailed stories are given in chapter two, although the reason for the title of “Rise of the Non-State Hacker” isn’t really clear.  The legal status of cyber warfare, in chapter three, deals primarily with disagreements about military treaties.  A guest chapter (four) gives a solid argument for the use of “active defence” (striking back at an attacker) in cyber attacks perceived to be acts of war, based on international law in regard to warfare.  The author of the book is the founder of Project Grey Goose, and chapter five talks briefly about some of the events PGG investigated, using them to illustrate aspects of the intelligence component of cyber warfare (and noting some policy weaknesses, such as the difficulties of obtaining the services of US citizens of foreign birth).  The social Web is examined in chapter six, noting relative usage in Russia, China, and the middle east, along with use and misuse by military personnel.  (The Croll social engineering attack, and Russian scripted attack tools, are also detailed.)  Ownership links, and domain registrations, are examined in chapter seven, although in a restricted scope.  Some structures of systems supporting organized crime online are noted in chapter eight.  Chapter nine provides a limited look at the sources of information used to determine who might be behind an attack.  A grab bag of aspects of malware and social networks is compiled to form chapter ten.  Chapter eleven lists position papers on the use of cyber warfare from various military services.  Chapter twelve is another guest article, looking at options for early warning systems to detect a cyber attack.  A host of guest opinions on cyber warfare are presented in chapter thirteen.

Carr is obviously, and probably legitimately, concerned that he not disclose information of a sensitive nature that is detrimental to the operations of the people with whom he works.  (Somewhat ironically, I reviewed this work while the Wikileaks furor over diplomatic cables was being discussed.)  However, he appears to have gone too far.  The result is uninteresting for anyone who has any background in cybercrime or related areas.  Those who have little to no exposure to security discussions on this scale may find it surprising, but professionals will have little to learn, here.

copyright, Robert M. Slade   2010     BKCYWRFR.RVW   20101204


National Strategy for Trusted Identities in Cyberspace

There is no possible way this could potentially go wrong, right?

Doesn’t the phrase “Identity Ecosystem” make you feel all warm and “green”?

It’s a public/private partnership, right?  So there is no possibility of some large corporation taking over the process and imposing *their* management ideas on it?  Like, say, trying to re-introduce the TCPI?

And there couldn’t possibly be any problem that an identity management system is being run out of the US, which has no privacy legislation?

The fact that any PKI has to be complete, and locked down, couldn’t affect the outcome, could it?

There isn’t any possible need for anyone (who wasn’t a vile criminal) to be anonymous, is there?


Examining malware will be illegal in Canada

We’ve got a new law coming up in Canada: C-32, otherwise known as DMCA-lite.

Lemme quote you a section:

29.22 (1) It is not an infringement of copyright for an individual to reproduce a work or other subject-matter or any substantial part of a work or other subject-matter if
[...]
(c) the individual, in order to make the reproduction, did not circumvent, as defined in section 41, a technological protection measure, as defined in that section, or cause one to be circumvented.

Now, of course, if you want to examine a virus, or other malware, you have to make a copy, right?  So, if the virus writer has obfuscated the code, say by doing a little simple encryption, obviously he was trying to use a “technological protection measure, as defined in that section,” right?  So, decrypting the file is illegal.
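The “technological protection measure” in question can be as trivial as the sketch below (the sample bytes are invented): single-byte XOR obfuscation, which the analyst must undo, and thereby copy, in order to examine the sample at all.

```python
# Sketch of the routine act described above: undoing a trivial
# single-byte XOR obfuscation of the kind malware commonly uses.
# XOR is its own inverse, so the same function obfuscates and decodes.
# The "sample" bytes are invented for illustration.

def xor_decode(data: bytes, key: int) -> bytes:
    """Reverse (or apply) single-byte XOR obfuscation."""
    return bytes(b ^ key for b in data)

obfuscated = xor_decode(b"malicious payload", 0x5A)  # writer "protects" it
recovered  = xor_decode(obfuscated, 0x5A)            # analyst decodes a copy

print(recovered)   # b'malicious payload'
```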

Of course, it’s been illegal in the US for some years, now …


Miranda minged?

I came across a very interesting article today.

It relates to the Miranda decision and warning.  Although this is American case law, everybody knows about it, since it is the basis of the warning, on every cop show and movie, that the suspect has “the right to remain silent” etc.

This comes from a decision in 1966 that police must ensure a suspect understands his rights (not to incriminate himself) and waives them only “knowingly and intelligently.”

Now comes a case where a suspect was warned, and was then questioned for nearly three hours, during which time he said almost nothing. A detective then began asking the suspect about his religious beliefs: “Do you pray to God to forgive you for shooting that boy down?”  The suspect said, “Yes,” but refused to make any further confession. The prosecution introduced the statement as evidence, and a jury convicted.

The case was appealed and went to the US Supreme Court.

Four justices held that allowing the statement turns Miranda upside down and that criminal suspects must now unambiguously invoke their right to remain silent—which, counterintuitively, requires them to speak.

However, five justices held that after giving a Miranda warning, police may interrogate a suspect who has neither invoked nor waived his rights.

So, I guess the right not to incriminate, in the US, is now opt-in only.


Privacy via lawsuit (vs security)

Interesting story about collecting data from Facebook.  I wonder if he would have had the same trouble if he had written the utility as a Facebook app, since apps are able to access all data from any user that runs them.  Maybe he could talk to the Farmville people, and collect everything on pretty much every Facebook user.

All kinds of intriguing questions arise:

Has Facebook threatened to sue Google?  If they did, who has the bigger legal budget?

With all the embarrassing leaks, why doesn’t Facebook simply do some decent security, and set proper permissions?  (Oh, sorry.  I guess that’s a pretty stupid question.)

Does the legal concept of “community standards” apply to assumed technical standards such as robots.txt?  If nobody tests it in court, does any large corporation with a large legal budget get to rewrite the RFCs?

If you don’t get noticed, is it OK?  Does this mean that the blackhats, who try hard to stay undetected, are legally OK, and it’s only people who are working for the common good who are in trouble?
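For what it’s worth, robots.txt really is only a convention, which is what makes the “community standards” question interesting: honouring it is a one-liner with Python’s standard library (the URL and crawler name below are illustrative), and ignoring it carries no technical penalty at all.

```python
# Checking a crawl against robots.txt with the standard library.
# Nothing enforces this; it is pure convention.  The rules, URL, and
# crawler name here are invented for illustration.

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("MyCrawler", "http://example.com/public/page"))   # True
print(rp.can_fetch("MyCrawler", "http://example.com/private/data"))  # False
```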


Wikipedia as IP theft enabler?

I am not a huge fan of Wikipedia, in terms of tech accuracy, as I’ve noted before.  For a quick idea of a new term it’s great: beyond that, watch out.

However, Charles Muller has pointed out something I hadn’t considered: that, given its popularity, Wikipedia is a prime vector for losing control of your intellectual property.  As one who has had masses of work ripped off by others, I have to be sympathetic to his argument.  He’s got a couple more good points in this quick little piece.


Security Seal company sued by FTC

Let’s start with the proper disclosure: we provide a Web Site Security Seal service which competes with ControlScan’s.  That said, I’m not about to bash ControlScan, but rather the poor practices of security seal companies that give out seals to whoever pays them, without proper security checks.

Some background: the FTC sued ControlScan for $750,000 for giving out security seals without really checking the security of the web sites.  This lawsuit and its verdict are good news: it means that services that give out seals need to be responsible for their actions; no more “scanless PCI” badges.  If you give out a seal (and I’m looking at all you large domain resellers), it needs to stand for something – when customers see a seal that says “secure site,” they need to know the site is secure.

Before you take out the pitchforks, sure – there is no way to verify with 100% certainty that a web site is “secure.”  But vulnerability scanning is at a stage today where you can run automated scans and make sure the web site is “secure enough” – meaning it does not have any known vulnerabilities, and doesn’t suffer from SQL injection or cross-site scripting.  If there is a zero-day vulnerability in Apache, I doubt it will be used against an e-commerce site – it is more likely to be used against a bank or a government.  The fact is, over 90% of successful attacks use known vulnerabilities that would have been detected by any competent scanner.  If the site is properly scanned and no vulnerabilities are found, this is probably as good as it’s ever going to get; and it is definitely better than the chances of your credit card being stolen at a brick-and-mortar store.
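Much of that automated scanning reduces, in practice, to comparing observed software versions against known advisories, as in this sketch (the product name and the list of vulnerable versions are entirely fabricated):

```python
# Sketch of version-based vulnerability checking: flag an observed
# "product/version" banner if the version appears in an advisory list.
# The daemon name and versions below are fictional.

KNOWN_VULNERABLE = {
    "exampled": ["1.0", "1.1"],   # fictional daemon, fictional advisories
}

def check_banner(banner):
    """Flag a 'product/version' banner with a known vulnerable version."""
    product, _, version = banner.partition("/")
    if version in KNOWN_VULNERABLE.get(product, []):
        return f"VULNERABLE: {product} {version}"
    return f"ok: {product} {version}"

print(check_banner("exampled/1.1"))   # VULNERABLE: exampled 1.1
print(check_banner("exampled/2.0"))   # ok: exampled 2.0
```

Real scanners add fingerprinting, authenticated checks, and active probes, but the "known vulnerabilities" core really is this kind of lookup, which is why it catches the bulk of what attackers actually use.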

What will happen with ControlScan is not really important.  What’s important is that security seal providers will now have to stand behind their claims – the fact that the FTC went after a case like this, which is normally way below their threshold, probably means that someone is applying pressure on them; hopefully that will help clean up the act of some online scanning vendors.

Note: Complaint, Exhibits and final judgment here.
