Modern tech and news

About an hour ago, we started to be very annoyed by a helicopter circling overhead.  It was starting to get dark, and, when I saw it, it didn’t have anything in particular in the way of searchlights on.

So, I got onto Twitter and started looking for reports.  It was just after the peak of rush hour, so I checked http://twitter.com/AM730Traffic  They didn’t have anything showing in our area, so it wasn’t their chopper.

I “follow” a number of news media, some in the local area.  Didn’t take too long before I hit http://twitter.com/ctvbcbreaking/status/32975300048461824  (It must be their helicopter.  They got three usable pictures, and kept the thing up there for over an hour.  I guess it’s a slow news day, locally.)  Since the murder was nearby, we recognized the location.  In fact, from the pattern of identifiable stones, I was able to pinpoint the location as http://is.gd/neJzfP  It’s about a block from our church.  (The youth group is meeting tonight.)  Subsequently, there were other reports from other sources.

(Like http://bit.ly/f3wVVX.  Yeah, you could probably say that this is suspicious.)


Is SetFsb a Trojan?

This was sent to me by a friend who wanted to stay anonymous:

There’s a utility called SetFSB which tweaks the clock speed for overclocking stuff.
It was written in Japan, and has been in use for many years.
Recently it occurred to me that I could speed up my old machine by 25%, so I downloaded it as well.
However, when running, I discovered that upon termination, the .exe creates 2 files:
1 batch file and 1 executable.
The batch file is spawned, and starts a loop trying to delete the original executable, continuing indefinitely until it’s deleted.  After that, it renames the new .exe to the same name as the old one.
Now, isn’t that suspicious?
I tried googling it, and found just 1 reference, in PCTools’ ThreatFire, but the schmucks just got the threat report, couldn’t see the .exe and .bat, and so decided it was a false alarm and whitelisted the utility.
I thought it would be a good idea to contact the author and give him a chance to explain, and this is the message train, which I find very funny:

ME>>>

Dear Mr.

Why after exiting SetFsb, it will create a .bat and new .exe
the .bat will loop to try delete the old .exe, and rename the new .exe to old .exe ?

Thanks!

HIM>>>

Hi,

Yes,

abo

ME>>>

Hello.

Yes… good…

but WHY???
is it a VIRUS?

thanks!

HIM>>> (here comes the good part :) )

I do not have a lot of free time too much.
Why do you think that i support you free of charge?

ME>>>

to make viruses?

HIM>>> (this is the original font color and size he used!!!)

I do not have a lot of free time too much!

ME>>> (trying to hack his japanese moralOS v0.99)

Please, dear Abo,

You must understand. People start to be VERY worried about your software,
because it behave like a virus.
If you will not give a good explanation to WHY it behave like this,
then people will stop using it, and stop trusting you forever.
Then your name will become bad, and you will have a lot of shame.
I only try to help you.

I hope you understand!

HIM>>>

It is unnecessary. Please do not use SetFSB if you are worried.

Personally, I’m not sure who’s weirder: my friend, overclocking his computer in 2011, or the Japanese programmer unwilling to explain whether his downloadable program is a Trojan or not.
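(For the curious: the behaviour described — a helper that loops trying to delete the still-running original, then renames the new binary into place — is a standard self-update trick, and not proof of malice by itself.  Here is a minimal Python re-creation of the pattern, using plain files to stand in for the two executables; the file names are hypothetical:)

```python
import os, tempfile

def replace_in_place(old_path, new_path):
    """Mimic the spawned batch file: keep trying to delete the original
    until the delete succeeds (on Windows it fails with PermissionError
    while the old .exe is still running), then rename the new file."""
    while True:
        try:
            os.remove(old_path)
            break
        except PermissionError:
            continue  # old binary still locked; retry, just like the .bat loop

    os.rename(new_path, old_path)

# Demonstrate with ordinary files standing in for the executables.
workdir = tempfile.mkdtemp()
old = os.path.join(workdir, "setfsb.exe")
new = os.path.join(workdir, "setfsb.exe.new")
with open(old, "w") as f:
    f.write("old build")
with open(new, "w") as f:
    f.write("new build")

replace_in_place(old, new)
with open(old) as f:
    print(f.read())  # prints "new build"
```

A legitimate updater and a Trojan dropper can look identical at this level, which is exactly why the author’s non-answer was so unsatisfying.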


The List of 100 Million Facebook Usernames

By now you’ve probably all heard about the security researcher Ron Bowes, who wrote a script to grab the list of usernames from Facebook’s public directory.  You probably also know that the file containing all these unique usernames is available to download as a torrent.

You may not know, though, that at present, on just one torrent site, 4248 people have already downloaded this list, and a further 8141 are currently downloading it.  That’s a hell of a lot of people interested in complete strangers’ personal information and lives.

Let me just set the record straight here, as there are quite a few rumors on the Internet at the moment: this was NOT a hack, people.  The information is publicly available via Facebook’s directory page.  Some say that the users are to blame for not setting their privacy settings securely; others say that Facebook’s convoluted way of implementing user security settings is too complicated for most common users.  Me, personally, I’m a member of the latter camp: security settings should be easy for users to apply, not difficult.  A simple “Security Yes/No” would be sufficient for most users.

The social engineering possibilities that you could use this list for are just amazing, and you never know when it may come in handy … or is that just me?
Anyway, what’s done is done now.

Oh yeah, I almost forgot: if you want the torrent, well, it can be found right about here, here, or on pretty much any torrent site at the moment.  Please remember, though, if you do download it … please seed.


Differing takes on privacy

UAE says privacy is a security risk.

US says openness is a security risk.


Reflections on Trusting Trust goes hardware

A recent Scientific American article points out that it is getting increasingly difficult to keep our Trusted Computing Base sufficiently small.

For further information on this scenario, see: http://www.imdb.com/title/tt0436339/  [1]

We actually discussed this in the early days of virus research, and sporadically since.  The random aspect is definitely a problem (see Dell’s problems with bad chips, though the stories about malware on the boards are overblown, since the malware was simply stored in unused memory, rather than being in the BIOS or other boot ROM), but a deliberate attack is problematic.  The issue lies with the hundreds of thousands of hobbyists (as well as some of the hackers) who poke and prod at everything.  True, the chance of discovering the attack is random, but so is the chance of keeping the attack undetected.  It isn’t something that an attacker could rely upon.

Yes, these days there are thousands of components, being manufactured by hundreds of vendors.  However, note various factors that need to be considered.

First of all, somebody has to make it.  Most major chips, like CPUs, are a combined effort.  Nobody would be able to design and manufacture a major chip all by themselves.  And, in these days of tight margins and using every available scrap of chip “real estate,” someone would be bound to notice a section of the chip labeled “this space intentionally left blank.”  The more people who are involved, the more likely someone is going to spill the beans, at the very least about an anomaly on the chip, whether or not they knew what it did.  (Once the word is out that there is an anomaly, the lifespan of that secret is probably about three weeks.)

Secondly, there is the issue of the payload.  What can you make it do?  Remember, we are talking components, here.  This means that, in order to make it do anything, you are generally going to have to rely on whatever else is in the device or system in which your chip has been embedded.  You cannot assume that you will have access to communications, memory, disk space, or pretty much anything else, unless you are on the CPU.  Even if you are on the CPU, you are going to be limited.  Do you know what you are?  Are you a computer?  Smartphone?  iPod?  (If the last, you are out of luck, unless you want to try and drive the user slowly insane by refusing to play anything except Barry Manilow.)  If you are a computer, do you know what operating system you are running?  Do you know the format of any disk connected to you?  The more you have to know how to deal with, the more programming has to be built into you, and remember that real estate limitation.  Even if all you are going to do is shut down, you have to have access to communications, and you have to a) be able to see all the traffic, and b) watch all of it without degrading performance while doing so.  (OK, true, it could just be a timer.  That doesn’t allow the attacker a lot of control.)

Next, you have to get people to use your chips.  That means that your chips have to be as cheap as, or cheaper than, the competition.  And remember, you have to use up chip real estate in order to have your payload on the chip.  That means that, for every 1% of chip space you use up for your programming, you lose 1% of manufacturing capacity.  So you have to have deep pockets to fund this.  Your chip also has to be at least as capable as the competition.  It also has to be as reliable as the competition.  You have to test that the payload you’ve put in place does not adversely affect performance, until you tell it to.  And you have to test it in a variety of situations and applications.  All the while making sure nobody finds out your little secret.

Next, you have to trigger your attack.  The trigger can’t be something that could just happen randomly.  And remember, traffic on the Internet, particularly with people streaming videos out there, can be pretty random.  Also remember that there are hundreds of thousands of kids out there with nothing better to do than try to use their computers, smartphones, music players, radio controlled cars, and blenders in exactly the way they aren’t supposed to.  And several thousand who, as soon as something odd happens, start trying to figure out why.

Bad hardware definitely is a threat.  But the largest part of that threat is simply the fact that cheap manufacturers are taking shortcuts and building unreliable components.  If I were an attacker, I would definitely be able to find easier ways to mess up the infrastructure than by trying to create attack chips.

[1] Get it some night when you can borrow it, for free, from your local library DVD collection.  On an evening when you don’t want to think too much.  Or at all.  WARNING: contains jokes that six year olds, and most guys, find funny.


REVIEW: “The Design of Rijndael”, Joan Daemen/Vincent Rijmen

BKDRJNDL.RVW   20091129

“The Design of Rijndael”, Joan Daemen/Vincent Rijmen, 2002, 3-540-42580-2
%A   Joan Daemen
%A   Vincent Rijmen
%C   233 Spring St., New York, NY   10013
%D   2002
%G   3-540-42580-2
%I   Springer-Verlag
%O   212-460-1500 800-777-4643 service-ny@springer-sbm.com
%O  http://www.amazon.com/exec/obidos/ASIN/3540425802/robsladesinterne
%O   http://www.amazon.co.uk/exec/obidos/ASIN/3540425802/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/3540425802/robsladesin03-20
%O   Audience s- Tech 3 Writing 1 (see revfaq.htm for explanation)
%P   238 p.
%T   “The Design of Rijndael: AES – The Advanced Encryption Standard”

This book, written by the authors of the Rijndael encryption algorithm (the engine underlying the Advanced Encryption Standard), explains how Rijndael works, discusses some implementation factors, and presents the approach taken to its design.  Daemen and Rijmen note the linear and differential cryptanalytic attacks to which DES (the Data Encryption Standard) was subject, the design strategy that resulted from their analysis, the possibilities of reduced-round attacks, and the details of related ciphers.

Chapter one is a history of the AES assessment and decision process.  It is interesting to note the requirements specified, particularly the fact that AES was intended to protect “sensitive but unclassified” material.  Background in regard to mathematical and block cipher concepts is given in chapter two.  The specifications of Rijndael sub-functions and rounds are detailed in chapter three.  Chapter four notes implementation considerations in small platforms and dedicated hardware.  The design philosophy underlying the work is outlined in chapter five: much of it concentrates on simplicity and symmetry.
Differential and linear cryptanalysis mounted against DES is examined in chapter six.  Chapter seven reviews the use of correlation matrices in cryptanalysis.  If differences between pairs of plaintext can be calculated as they propagate through the boolean functions used for intermediate and resultant ciphertext, then chapter eight shows how this can be used as the basis of differential cryptanalysis.  Using the concepts from these two chapters, chapter nine examines how the wide trail design diffuses cipher operations and data to prevent strong linear correlations or differential propagation.  There is also formal proof of Rijndael’s resistant construction.  Chapter ten looks at a number of cryptanalytic attacks and problems (including the infamous weak and semi-weak keys of DES) and notes the protections provided in the design of Rijndael.  Cryptographic algorithms that made a contribution to, or are descended from, Rijndael are described in chapter eleven.

This book is intended for serious students of cryptographic algorithm design: it is a highly demanding text, and requires a background in the formal study of number theory and logic.  Given that, it does provide some fascinating examination of both the advanced cryptanalytic attacks, and the design of algorithms to resist them.

copyright Robert M. Slade, 2009    BKDRJNDL.RVW   20091129


Caller-ID spoof and voicemail

It’s easy to spoof caller-ID with some VoIP systems.  There are a few Websites that specifically allow it.  It’s a little harder, but geekier, to spoof or overflow caller-ID with a simple Bell 202 modem: the caller-ID data is transmitted with that tech between the first and second rings of the phone.  (Since most people have caller-ID these days, many telcos don’t play you the first ring.  Since we don’t have caller-ID, we often get accused of answering the phone before it rings.)  (Of course, the rings you hear on the calling side aren’t necessarily the rings heard on the other end, but …)
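For reference, the data burst itself is simple: just ASCII bytes closed with a one-byte checksum, which is why a hobbyist modem can forge it.  A rough Python sketch of building an SDMF (Single Data Message Format) caller-ID frame follows; the field layout is per commonly published descriptions, so treat it as illustrative rather than spec-exact:

```python
def sdmf_frame(mmddhhmm: str, number: str) -> bytes:
    """Build an SDMF caller-ID frame: message type 0x04, a length byte,
    the date/time and calling number as ASCII, then a checksum byte
    chosen so the whole frame sums to zero mod 256."""
    body = (mmddhhmm + number).encode("ascii")
    frame = bytes([0x04, len(body)]) + body
    checksum = (-sum(frame)) & 0xFF  # two's complement of the byte sum
    return frame + bytes([checksum])

frame = sdmf_frame("02110830", "6045551234")
print(sum(frame) % 256)  # 0 -- the receiver's integrity check passes
```

Note that there is nothing cryptographic here: any sender who can put FSK tones on the line between the rings can claim to be any number, which is the whole problem.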

Apparently AT&T allows immediate access to voicemail on the basis of caller-ID.

Apparently, with Android phones, it’s also gotten even easier to spoof caller-ID.


REVIEW: “SSL and TLS: Theory and Practice”, Rolf Oppliger

BKSSLTTP.RVW   20091129

“SSL and TLS: Theory and Practice”, Rolf Oppliger, 2009, 978-1-59693-447-4
%A   Rolf Oppliger rolf.oppliger@esecurity.ch
%C   685 Canton St., Norwood, MA   02062
%D   2009
%G   978-1-59693-447-4 1-59693-447-6
%I   Artech House/Horizon
%O   617-769-9750 800-225-9977 artech@artech-house.com
%O   http://books.esecurity.ch/ssltls.html
%O  http://www.amazon.com/exec/obidos/ASIN/1596934476/robsladesinterne
%O   http://www.amazon.co.uk/exec/obidos/ASIN/1596934476/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/1596934476/robsladesin03-20
%O   Audience i+ Tech 3 Writing 2 (see revfaq.htm for explanation)
%P   257 p.
%T   “SSL and TLS: Theory and Practice”

The preface states that the book is intended to update the existing literature on SSL (Secure Sockets Layer) and TLS (Transport Layer Security), and to provide a design level understanding of the protocols.  (Oppliger does not address issues of implementation or specific products.)  The work assumes a basic understanding of TCP/IP, the Internet standards process, and cryptography, although some fundamental cryptographic principles are given.

Chapter one is a basic introduction to security and some related concepts.  The author uses the definition of security architecture from RFC 2828 to provide a useful starting point and analogy.  The five security services listed in ISO 7498-2 and X.800 (authentication, access control, confidentiality, integrity, and nonrepudiation) are clearly defined, and the resultant specific and pervasive security mechanisms are mentioned.  In chapter two, Oppliger gives a brief overview of a number of cryptologic terms and concepts, but some (such as steganography) may not be relevant to examination of the SSL and TLS protocols.  (There is also a slight conflict: in chapter one, a secure system is defined as one that is proof against a specific and defined threat, whereas, in chapter two, this is seen as conditional security.)  The author’s commentary is, as in all his works, clear and insightful, but the cryptographic theory provided does go well beyond what is required for this topic.

Chapter three, although entitled “Transport Layer Security,” is basically a history of both SSL and TLS.  SSL is examined in terms of the protocols, structures, and messages, in chapter four.  There is also a quick analysis of the structural strength of the specification.
Since TLS is derived from SSL, the material in chapter five concentrates on the differences between SSL 3.0 and TLS 1.0, and then looks at algorithmic options for TLS 1.1 and 1.2.  DTLS (Datagram Transport Layer Security), for UDP (User Datagram Protocol), is described briefly in chapter six, and seems to simply add sequence numbers to UDP, with some additional provision for security cookie exchanges.  Chapter seven notes the use of SSL for VPN (virtual private network) tunneling.  Chapter eight reviews some aspects of public key certificates, but provides little background for full implementation of PKI (Public Key Infrastructure).  As a finishing touch, chapter nine notes the sidejacking attacks and concerns about man-in-the-middle (MITM) attacks (quite germane, at the moment), and argues that we should move from certificate-based PKI to a trust and privilege management infrastructure (PMI).

In relatively few pages, Oppliger has provided background, introduction, and technical details of the SSL and TLS variants you are likely to encounter.  The material is clear, well structured, and easily accessible.  He has definitely enhanced the literature, not only of TLS, but also of security in general.

copyright Robert M. Slade, 2009    BKSSLTTP.RVW   20091129


REVIEW: “Cloud Security and Privacy”, Tim Mather/Subra Kumaraswamy/Shahed Latif

BKCLSEPR.RVW   20091113

“Cloud Security and Privacy”, Tim Mather/Subra Kumaraswamy/Shahed Latif, 2009, 978-0-596-80276-9, U$34.99/C$43.99
%A   Tim Mather
%A   Subra Kumaraswamy
%A   Shahed Latif
%C   103 Morris Street, Suite A, Sebastopol, CA   95472
%D   2009
%G   978-0-596-80276-9 0-596-80276-5
%I   O’Reilly & Associates, Inc.
%O   U$34.99/C$43.99 800-998-9938 707-829-0515 nuts@ora.com
%O  http://www.amazon.com/exec/obidos/ASIN/0596802765/robsladesinterne
%O   http://www.amazon.co.uk/exec/obidos/ASIN/0596802765/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0596802765/robsladesin03-20
%O   Audience i- Tech 1 Writing 1 (see revfaq.htm for explanation)
%P   312 p.
%T   “Cloud Security and Privacy”

The preface tells how the authors met, and that they were interested in writing a book on clouds and security.  It provides no definition of cloud computing.  (It also emphasizes an interest in being “first to market” with a work on this topic.)

Chapter one is supposed to be an introduction.  It is very brief, and, yet again, doesn’t say what a cloud is.  (The authors aren’t very careful about building background information: the acronym SPI is widely used and important to the book, but is used before it is defined.  It stands for SaaS/PaaS/IaaS: software-as-a-service, platform-as-a-service, and infrastructure-as-a-service.  More simply, this refers to applications, management/development utilities, and storage.)  A delineation of cloud computing is finally given in chapter two, stating that it is characterized by multitenancy, scalability, elasticity, pay-as-you-go options, and self-provisioning.  (As these aspects are expanded, it becomes clear that the scalability, elasticity, and self-provisioning characteristics the authors describe are essentially the same thing: the ability of the user or client to manage the increase or decrease in services used.)  The fact that the authors do not define the term “cloud” becomes important as the guide starts to examine security considerations.  Interoperability is listed as a benefit of the cloud, whereas one of the risks is identified as vendor lock-in: these two factors are inherently mutually exclusive.

Chapter three talks about infrastructure security, but the advice seems to reduce to a recommendation to review the security of the individual components, including SaaS, PaaS, and network elements, which seems to ignore the emergent risks arising from any complex environment.  Encryption is said to be only a small part of data security in storage, as addressed in chapter four, but most of the material discusses encryption.  The deliberation on cryptography is superficial: the authors have managed to include the very recent research on homomorphic encryption, and note that the field will advance rapidly, but do not mention that homomorphic encryption is only useful for a very specific subset of data representations.  The identity management problem is outlined in chapter five, and protocols for managing new systems are reviewed, but the issue of integrating these protocols with existing systems is not.  “Security management in the Cloud,” as examined in chapter six, is a melange of general security management and operations management, with responsibility flipping back and forth between the customer and the provider.  Chapter seven provides a very good overview of privacy, but with almost no relation to the cloud as such.  Audit and compliance standards are described in chapter eight: only one is directed at the cloud.  Various cloud service providers (CSPs) are listed in chapter nine.  The terse description of security-as-a-service (confusingly also listed as SaaS), in chapter ten, is almost entirely restricted to spam and Web filtering.  The impact of the use of cloud technology is dealt with in chapter eleven.  It lists the pros and cons, but again, some of the points are presented without noting that they are mutually exclusive.  Chapter twelve finishes off the book with a precis of the foregoing chapters.

The authors do raise a wide variety of the security problems and concerns related to cloud computing.  However, since these are the same issues that need to be examined in any information security scenario, it is hard to say that any cloud-specific topics are addressed.  Stripped of excessive verbiage, the advice seems to reduce to a) know what you want, b) don’t make assumptions about what the provider provides, and c) audit the provider.

copyright Robert M. Slade, 2009    BKCLSEPR.RVW   20091113


National Strategy for Trusted Identities in Cyberspace

There is no possible way this could potentially go wrong, right?

Doesn’t the phrase “Identity Ecosystem” make you feel all warm and “green”?

It’s a public/private partnership, right?  So there is no possibility of some large corporation taking over the process and imposing *their* management ideas on it?  Like, say, trying to re-introduce the TCPI?

And there couldn’t possibly be any problem that an identity management system is being run out of the US, which has no privacy legislation?

The fact that any PKI has to be complete, and locked down, couldn’t affect the outcome, could it?

There isn’t any possible need for anyone (who wasn’t a vile criminal) to be anonymous, is there?


Privacy via lawsuit (vs security)

Interesting story about collecting data from Facebook.  I wonder if he would have had the same trouble if he had written the utility as a Facebook app, since apps are able to access all data from any user that runs them.  Maybe he could talk to the Farmville people, and collect everything on pretty much every Facebook user.

All kinds of intriguing questions arise:

Has Facebook threatened to sue Google?  If they did, who has the bigger legal budget?

With all the embarrassing leaks, why doesn’t Facebook simply do some decent security, and set proper permissions?  (Oh, sorry.  I guess that’s a pretty stupid question.)

Does the legal concept of “community standards” apply to assumed technical standards such as robots.txt?  If nobody tests it in court, does any large corporation with a large legal budget get to rewrite the RFCs?
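(For what it’s worth, honouring robots.txt costs a crawler almost nothing; Python even ships a parser in the standard library.  A quick sketch, with made-up rules:)

```python
import urllib.robotparser

# Feed the parser a made-up robots.txt instead of fetching a real one.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /directory/",
])

# A crawler that bothers to check gets a yes/no answer in one call.
print(rp.can_fetch("AnyBot", "http://example.com/directory/people"))  # False
print(rp.can_fetch("AnyBot", "http://example.com/index.html"))        # True
```

Which rather underlines the point: compliance is a one-line check, so whether robots.txt has any legal weight comes down to convention, not technical difficulty.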

If you don’t get noticed, is it OK?  Does this mean that the blackhats, who try hard to stay undetected, are legally OK, and it’s only people who are working for the common good who are in trouble?


Is it phish, or is it Amex?

I am a bit freaked.

Last month I received an email message from American Express.  I very nearly deleted it unread: it was obviously phish, right?  (I was teaching in Toronto that week, so I had even more reason to turf it unread rather than look at it.)

However, since I do have an Amex card, I decided to at least have a look at it, and possibly try and find some way to send it to them.  So I looked at it.

And promptly freaked out.

The phishers had my card number.  (Or, at least, the last five digits of it.)  They knew the due date of my statement.  They knew the balance amount of my last statement.

(The fact that this was all happening while I was away from home wasn’t making me feel any more comfortable with it …)

So I had a look at the headers.  And couldn’t find a single thing indicating that this wasn’t from American Express.

(I had paid my bill before I left.  Or, at least, I *thought* I had.  So I checked my bank.  Sure enough, that balance had been paid a couple of days before.  However, I guess banks never actually transfer money on the weekend or something …)

A couple of days later I got another message: Amex was telling me that my payment was received.  That’s nice of them.  They were once again sending, in an unencrypted email message, the last five digits of my card number, and the last balance paid on my account.

Well, I figured that it might have been an experiment, and that they’d probably realize the error of their ways, and I didn’t necessarily need to point this out.  Apparently I was wrong on all counts, since I got another reminder message today.

Are these people completely unaware of the existence and risk of phishing?  Are they so totally ignorant of online security that they are training their customers to expect legitimate email from a financial institution, thus increasing the risk of deception and fraud?

Going to their Website, I notice that there is now an “Account Alerts” function.  It may have been there for a while: I don’t know, since I’ve never used it.  Since I’ve never used it, I assume it was populated by default when they created it.  It seems to, by default, send you a payment due notice a week before the deadline, a payment received notice when payment is received, and a notice when you approach your credit limit.  (Fortunately, someone had the good sense not to automatically populate the option that sends you your statement balance every week.)  These options may be useful to some people.  But they should be options: they shouldn’t be sending a bunch of information about everybody’s account, in the clear, by default.

(There are, of course, “Terms and Conditions” applicable to this service, which basically say, as usual, that Amex isn’t responsible for much of anything, have warned you, and that you take all the risks arising from this function.  I find this heavily ironic, since I knew nothing of the service, don’t want it, and got it automatically.  I never even knew the “Terms and Conditions” existed, but in order to turn the service off I’ll have to read them.)

(In trying to send a copy of this to Amex, I note that their Website only lists phone and snailmail as contact options, you aren’t supposed to be able to send them email.)


Robert Who?

As part of some research into the security risks of social networking, I did an ego search on myself.  (Hey, it’s legitimate research, all right?)

On Altavista, the first hit was the Wikipedia page someone created about me.  The second result was http://www.robertslade.com/ which I hadn’t known existed.  As well as correctly listing my published books, this page informed me that I was mentioned on the Wikipedia entry for the RISKS-Forum Digest (which is a definite ego boost).  It also provides a photograph of someone else, as well as two pictures I didn’t take, and three videos I have nothing to do with.  Two different boxes provide links to buy books, some of which are mine, and most of which aren’t.

I expected to find entries that weren’t me: I know there are a lot of Robert Slades on the net.  But it’s a bit weird to find out that there is a domain about me that I didn’t know about.
I also found the church I’m buried in, so currently I’m not feeling too great …


The Achilles heel of the Internet

It won’t surprise you if I say the Achilles heel of the Internet is passwords.  But the problem is not that our passwords are too weak: in fact, the bigger problem is that our passwords are too strong.

Preventing brute force password attacks is a problem we know how to solve.  The problem is that web service providers have bad habits that make our passwords less secure.  Remember the saying, “a chain is only as strong as its weakest link”?  If you strengthen an already strong link in the chain while weakening another, you are not improving security; you are usually decreasing the overall security of the system.  Those “bad habits”, mostly of web services that require a login, are all wrapped in supposed ‘security concerns’: meaning some security consultant fed the CSO a strict compliance document, and by implementing these rigid security methods they are actually making their users less secure.

Here are some examples.

Don’t you remember who I am?
What’s the easiest way to fight phishing?  Have the web site properly identify itself.  When the bank calls, most people don’t ask the person on the other end of the line to prove they are really from the bank (though they really should).  The reason is that you assume that if they knew how to reach you, they are indeed your bank.

So why not do the same for phishing?  Bank of America uses SiteKey, which is a really neat trick.  But you don’t have to go that far: just remember my username and I’ll have more confidence that you are the right web site.  In fact, if I see a login page that does not remember my username, I have to stop and think (since I typically don’t remember all my usernames), and that gives me more time to spot suspicious things about the page.

If you can tell me what my username is, there is a higher chance you are the legitimate site.  But some sites block my browser from remembering my username, on the excuse of increasing security.  Well, they’re not.

Let me manage my passwords

This is where most financial sites really fight me – they work so hard to prevent the browser from remembering my passwords.

Why?  I can see the point when I’m on a public terminal.  But what if I’m using my own laptop?  By letting my browser remember the password I am decreasing the chance of phishing; in fact, if I know for certain a web site will let my browser remember the password (rather than force me to type it in), I select a strong, complicated password, since I don’t have to remember it.  In some cases I even stick with the randomly assigned password; I don’t care, as long as my browser remembers it.
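(Since the browser is doing the remembering, the password might as well be unmemorable.  A quick sketch using Python’s standard `secrets` module:)

```python
import secrets
import string

# 20 random characters drawn from a 94-symbol alphabet is roughly
# 131 bits of entropy -- far beyond any online guessing attack.
alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(20))
print(len(password))  # 20
```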

But some people are stuck on the "security != usability" equation. They are wrong; in many cases usability increases security. This is one of those cases.

Not to mention they will almost always lose the fight. If PayPal won’t let Firefox remember the password, I’ll find ways around it. Or maybe I’ll just write the password on a post-it note and stick it on my monitor. All of those ways are less secure than Firefox’s built-in password manager.

Oh, and forcing me to choose a strong password (‘strong’ being something absurd and twisted that makes no security sense)? Good luck with that. I don’t really mind these silly efforts, since they are so easy to circumvent that they are not even a bother anymore. But just remember that putting security measures in place that will be circumvented by 90% of your users means teaching them not to take your security seriously.

Stop blocking me
Next week I will have my annual conversation with the Lufthansa ‘frequent flyer’ club support people. It’s a conversation I have at least once a year (sometimes more) when my login gets blocked.

Why does my login get blocked? Because I get the password wrong too many times. What’s "too many"? I wish I knew. Since I usually pretty much know what my password is, I get it right within 4-5 tries, so I guess Lufthansa blocks me after 3 or 4. I don’t know for sure, because I also need to guess my username (long story; let’s just say Lufthansa has two sets of usernames and passwords and you need to match them up correctly). So the bottom line is that I get routinely blocked and need to call their office in Germany to release the account.

Why are they blocking me? I’m guessing to prevent brute-force password attacks, and that’s a good thing. But why not release the lock automatically after a day? A week? An hour? Why not authenticate me some other way (e-mail)? I bet I can guess why: because everybody who complains is told that "it’s due to security concerns". Nobody can argue with that, can they? After all, security is the opposite of usability. Our goal as security professionals is to make our services not work, and hence infinitely secure.
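The "block, but release automatically" policy is trivial to implement. A minimal sketch (the attempt limit, cooldown period, and in-memory table are illustrative assumptions, not anything from a real airline system):

```python
import time

MAX_ATTEMPTS = 5          # lock after this many consecutive failures
LOCKOUT_SECONDS = 3600    # ...but release automatically after an hour

# username -> (consecutive failure count, time of last failure)
failures = {}

def login_allowed(username: str, now: float = None) -> bool:
    """Return False only while the account is locked; the lock
    expires on its own, with no call to a support office needed."""
    now = time.time() if now is None else now
    count, last = failures.get(username, (0, 0.0))
    if now - last >= LOCKOUT_SECONDS:
        failures[username] = (0, last)  # cooldown elapsed: reset the counter
        return True
    return count < MAX_ATTEMPTS

def record_failure(username: str, now: float = None) -> None:
    now = time.time() if now is None else now
    count, _ = failures.get(username, (0, 0.0))
    failures[username] = (count + 1, now)
```

This still throttles a brute-force attacker to a handful of guesses per hour, while a legitimate user who fumbles their password is locked out for at most an hour instead of forever.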

So Lufthansa is losing my web site visits, which means less advertising money, and they are making me agitated, which is not the right customer retention policy. Some credit card issuers like to do this a lot, which means I can’t log in to see my credit card balance and watch for any suspicious activity. Now that’s cutting off your nose to spite your face.

Don’t encourage me to give out my password
How many web sites have my real Twitter password? Must be over half a dozen, maybe more. If you are using any Twitter client, you have given them your Twitter username and password. If you are using TwitPic, or any of the hundreds of other web 2.0 services that automatically tweet for you, they have your login credentials. Heck, even Facebook has my Twitter credentials – I bet Facebook could flood Twitter in an instant if they decided to fight dirty.

Twitter wants me to use all these clients because it raises my Twitter activity, and that’s OK. But there are plenty of single-sign-on methods out there that are not too complicated, and they are all more secure than spreading my real username and password all over the place. Even Boxee has my Twitter login, which makes me think: if I were building a web 2.0 service and asked everyone who opens an account to give me their Twitter login details, how many would do that just out of habit?
Giving out my credentials is not necessarily a bad thing. Services like Mint and PageOnce are good because they make it unnecessary for me to log in to all my financial web sites; the less I log in the better: assuming these sites have better security than my own computer, I’d rather have them log in to my financial accounts than do it myself. This leap of faith is not for everyone – some will ask what happens if these startups go out of business. Cybercrime experts like Richard Stiennon will argue that an insider breach in one of those companies could be devastating. And of course Noam will say that until they’ve been scanned by Beyond Security he won’t give them any sensitive information. I agree with them all, and yet I use both Mint.com and PageOnce. So I guess it boils down to a personal judgment call. I personally think there’s value in these types of services.

Stick with passwords

One thing I am almost allergic to is the "next thing to replace passwords". Don’t give me USB tokens or credit-card-sized authentication cards. SMS me if you must, but even that’s marginal. Don’t talk to me about new ideas to revolutionize logins. A non-trivial password along with a mechanism that blocks multiple retries (blocks for a certain period of time, not forever – got that, Lufthansa?) is good enough. It’s not foolproof – a keylogger will defeat all of those methods, but keylogging Trojans are also capable of modifying traffic, so no matter what offline method you use for authentication, the transaction itself will be modified and the account will be compromised. Trojans are a war we have lost – let’s admit that and move on. Any other threat can be stopped by simple and proper login policies that do not include making the user wish he never signed up for your service.
There are other password ideas out there. Bruce Schneier suggests displaying passwords while they are being typed. I think that makes absolutely no sense for 99% of the people out there, but I do agree that we are fighting the wrong wars when it comes to passwords, and fresh thinking about them is a good thing. The current situation is that on one hand we are preventing our users from using passwords properly, and on the other hand we are leaving our services open to attack. That doesn’t help anyone.


Vanishingly small utility …

This system has had some discussion in the forensics world over the past few days.  Here’s an extract from Science Daily:

“Computers have made it virtually impossible to leave the past behind. College Facebook posts or pictures can resurface during a job interview. A lost cell phone can expose personal photos or text messages. A legal investigation can subpoena the entire contents of a home or work computer. The University of Washington has developed a way to make such information expire. After a set time period, electronic communications such as e-mail, Facebook posts and chat messages would automatically self-destruct, becoming irretrievable from all Web sites, inboxes, outboxes, backup sites and home computers. Not even the sender could retrieve them.

“The team of UW computer scientists developed a prototype system called Vanish that can place a time limit on text uploaded to any Web service through a Web browser.

[Perhaps a bit narrower focus than the original promise, but it is a prototype - rms]

“After a set time text written using Vanish will, in essence, self-destruct.  The Vanish prototype washes away data using the natural turnover, called “churn,” on large file-sharing systems known as peer-to-peer networks. For each message that it sends, Vanish creates a secret key, which it never reveals to the user, and then encrypts the message with that key. It then divides the key into dozens of pieces and sprinkles those pieces on random computers that belong to worldwide file-sharing networks. The file-sharing system constantly changes as computers join or leave the network, meaning that over time parts of the key become permanently inaccessible. Once enough key parts are lost, the original message can no longer be deciphered.”
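The churn-based key loss described in the quote can be illustrated with a much simpler all-or-nothing XOR split. (Vanish itself uses threshold secret sharing, where only a fraction of the shares is needed to recover the key; the sketch below is my own simplified illustration, not Vanish’s actual code.)

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list:
    """Split a key into n XOR shares; ALL n are needed to recover it.
    Losing even one share (e.g. to peer-to-peer churn) destroys the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = reduce(xor_bytes, shares, key)  # key XOR all random shares
    return shares + [last]

def recover_key(shares: list) -> bytes:
    """XOR every share together; the random shares cancel out."""
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(16)
shares = split_key(key, 10)   # sprinkle these across the network
assert recover_key(shares) == key
# Lose even one share to churn and the key is (with overwhelming
# probability) unrecoverable:
assert recover_key(shares[1:]) != key
```

Vanish’s real threshold scheme is more forgiving – only, say, 7 of 10 shares are needed – which is why the message survives for a while before enough shares churn away.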

However, given the promise to clean up social networking sites, an immediate problem occurred to me as I started to read the paper.  And, lo and behold, the authors admit it:

“We therefore focus our threat model and subsequent analyses on attackers who wish to compromise data privacy. Two key properties of our threat model are:
1. Trusted data owners. Users with legitimate access to the same VDOs trust each other.
2. Retroactive attacks on privacy. Attackers do not know which VDOs they wish to access until after the VDOs expire.
The former aspect of the threat model is straightforward, and in fact is a shared assumption with traditional encryption schemes: it would be impossible for our system to protect against a user who chooses to leak or permanently preserve the cleartext contents of a VDO-encapsulated file through out-of-band means. For example, if Ann sends Carla a VDO-encapsulated email, Ann must trust Carla not to print and store a hard-copy of the email in cleartext.”

So, this system works perfectly.  If you only communicate with people you trust (both in terms of intent and competence), who only use the system properly, and who never use any of the information in any program that is not part of the system, it’s completely secure.

How often have we heard that said?

The default-to-privacy aspect is interesting, and so is the automatic transparency for the user, but this simply moves the problem one step back, as it were.  In terms of utility to social networking, the social networks would have to be completely rewritten to adhere to the system, and even then it would be pretty much impossible to ensure that nobody had the ability to scrape data and keep or publish it elsewhere.

(Plus, the data is still there, and so is Moore’s Law …)


Elance user information compromised

God bless the law that forces companies to disclose when they are hacked and customer information is compromised. Not only do we get a chance to protect ourselves, but it also reminds us that this apparently happens more often than we would think.

This time it’s elance.com:

Dear (my account name),
We recently learned that certain Elance user information was accessed without authorization, including potentially yours. The data accessed was contact information — specifically name, email address, telephone number, city location and Elance login information (passwords were protected with encryption). This incident did NOT involve any credit card, bank account, social security or tax ID numbers.
We have remedied the cause of the breach and are working with appropriate authorities. We have also implemented additional security measures and have strengthened password requirements to protect all of our users.
We sincerely regret any inconvenience or disruption this may cause.
If you have any unanswered questions and for ongoing information about this matter, please visit this page in our Trust & Safety center: http://www.elance.com/p/trust/account_security.html
For information on re-setting your password, visit: http://help.elance.com/forums/30969/entries/47262
Thank you for your understanding,
Michael Culver
Vice President
Elance

What I would like to see is what "additional security measures" they are really taking. Also (and I’ll admit I have a one-track mind), did they do a proper security scan to ensure the servers don’t have any holes? What were the results?
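The notice says the passwords were "protected with encryption", which tells us nothing about how. The baseline one would hope for is not reversible encryption at all, but a salted, deliberately slow password hash. A minimal sketch using only Python’s standard library (the iteration count and function names are my own illustrative choices):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    """Store a random per-user salt plus a slow PBKDF2 digest --
    never the password itself, and never a decryptable copy of it."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 100_000) -> bool:
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    # Constant-time comparison to avoid leaking how many bytes matched
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

With per-user salts and a high iteration count, a leaked password table cannot be attacked with precomputed tables, and each individual guess is expensive, which is exactly the property "encryption" alone does not guarantee.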
