CyberSec Tips: Follow the rules – and advice

A recent story (actually based on one from several years ago) has pointed out that, for years, the launch codes for nuclear missiles were all set to 00000000.  (Not quite true: a safety lock was set that way.)

Besides the thrill value of the headline, there is an important point buried in the story.  Security policies, rules, and procedures are usually developed for a reason.  In this case, given the importance of nuclear weapons, there is a very real risk from a disgruntled insider, or even simple error.  The safety lock was added to the system in order to reduce that risk.  And immediately circumvented by people who didn’t think it necessary.

I used to get asked, a lot, for help with malware infestations, by friends and family.  I don’t get asked much anymore.  I’ve given them simple advice on how to reduce the risk.  Some have taken that advice, and don’t get hit.  A large number of others don’t ask because they know I will ask whether they’ve followed the advice, and they haven’t.

Security rules are usually developed for a reason, after a fair amount of thought.  This means you don’t have to know about security, you just have to follow the rules.  You may not know the reason, but the rules are actually there to keep you safe.  It’s a good idea to follow them.


(There is a second point to make here, addressed not to the general public but to the professional security crowd.  Put the thought in when you make the rules.  Don’t make stupid rules just for the sake of rules.  That encourages people to break the stupid rules.  And the necessity of breaking the stupid rules encourages people to break all the rules …)


Western society is WEIRD [1]

(We have the OT indicator to say that something is off topic.  This isn’t, because ethics and sociology are part of our profession, but it is a fairly narrow area of interest for most.  We don’t have a subject-line indicator for that  :-) )

This article, and the associated paper, are extremely interesting in many respects.  The challenge to whole fields of social factors (which are vital to proper management of security) has to be addressed.  We are undoubtedly designing systems based on a fundamentally flawed understanding of the one constant factor in our systems: people.

(I suppose that, as long as the only people we interact with are WEIRD [1] westerners, we are OK.  Maybe this is why we are flipping out at the thought of China?)

(I was particularly interested in the effects of culture on actual physical perception, which we have been taught is hard wired.)

[1] – WEIRD, in the context of the paper, stands for Western, Educated, Industrialized, Rich, and Democratic societies


Using Skype Manager? no? Expect incoming fraud

I have been using Skype ever since it came out, so I know my stuff.

I know how to write strong passwords, how to use smart security questions and – most importantly – how to avoid phishing attempts on my Skype account.

But all that didn’t help me avoid a Skype mishap (or, more bluntly, as a friend said – a Skype f*ckup).

It all started Saturday, late at night (about 2am GMT), when I started receiving emails in Mandarin from Skype.  My immediate thought was fraud, a phishing attempt, so I ignored them.  But then I noticed I had also received emails from PayPal with charges from Skype for $100, $200, $300, and I was worried: had my account been hacked?

I immediately went to PayPal and disconnected my authorization to Skype, called in Transaction Dispute on PayPal and then went on to look at my Skype account.

I looked into the recent logons to my account – nothing.

I looked for changes of email, or of password – nothing.

I couldn’t figure out how the thing got to where it was, and then I noticed: I had become a Skype Manager.  Wow, I was promoted, and I didn’t even send in my CV.

Yeah, joke aside, Skype Manager is a service Skype offers to businesses to allow one person to buy Skype Credit and other people to use that credit to make calls. A great idea, but the execution is poor.

The service appears to have been launched in 2012, and a few weeks after that, fraud started popping up. The how is very simple, and so stupid that it’s shameful for Skype not to have fixed it since it was first reported (as far as I could find) on the 21st of Jan 2012 on the Skype forum.

Apparently, having this very common combination of:
1) Auto-charge via PayPal
2) Never used Skype Manager
3) Never set up a work email for Skype

Makes it possible for someone to:
1) Set you up as a Skype Manager
2) Set up a new work email on some obscure service (mailinator was used in my case), and have all Skype confirmation emails sent there

Yes, they don’t need to know anything BESIDES the Skype name of your account – which is easy to get using Skype Search.

Once you have become a Skype Manager, “you” can add users to the group you are managing.  They don’t need to log on: all they need to do is use the (email) link sent to the newly assigned work email.  Yes, it doesn’t confirm the password – smart, huh?

The users added to your Skype Manager can now take the credit (it’s not money, just call credit) and call anywhere they want.
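To make the logic error concrete, here is a hypothetical sketch of the flow described above. None of these names are Skype’s real API; the point is only to show the bug: redirecting confirmation email, and granting manager rights, without ever checking the account password.

```python
# Hypothetical sketch -- not Skype's real API. Illustrates the flaw:
# changing where confirmation email goes without a password check.

class Account:
    def __init__(self, skype_name, password):
        self.skype_name = skype_name
        self.password = password
        self.work_email = None      # never set on most personal accounts
        self.manager = None

def flawed_assign_manager(account, attacker_id, throwaway_email):
    # The bug: if no work email was ever configured, knowing the Skype
    # name alone is enough -- confirmations now go to an inbox the
    # attacker controls.
    if account.work_email is None:
        account.work_email = throwaway_email
        account.manager = attacker_id
        return True
    return False

def proper_assign_manager(account, attacker_id, throwaway_email, password_attempt):
    # What should happen: re-authenticate before redirecting
    # confirmation email or granting manager rights.
    if password_attempt != account.password:
        return False
    account.work_email = throwaway_email
    account.manager = attacker_id
    return True
```

With the flawed version, any fresh personal account that matches the three conditions above falls over; with the proper version, the attacker is stopped at the password check.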

Why this bug/feature has not been fixed/addressed since the first time it was made public on the Skype forum (it was probably exploited before then) is anyone’s guess.  Talking to the fraud department of Skype, the representative mainly stated that I should:
1) Change my Skype password – which would have helped nothing in this case
2) Make sure I authorize Skype only on trustworthy devices

The bottom line, Skype users, make sure:
1) You have configured your Skype Manager if you are using the auto-charge feature.  (I have disabled my auto-charge and PayPal authorization since then, and don’t plan on enabling them anytime soon, or ever.)
2) You have configured your Skype work email.  Yes, if it’s unset, anyone can change it, without needing to know your current password.  Is this company a PCI-authorized company? :D

If you have more insight on the matter, let me know.

- Noam


REVIEW: “Liars and Outliers: Enabling the Trust that Society Needs to Thrive”, Bruce Schneier

BKLRSOTL.RVW   20120104

“Liars and Outliers: Enabling the Trust that Society Needs to Thrive”,
Bruce Schneier, 2012, 978-1-118-14330-8, U$24.95/C$29.95
%A   Bruce Schneier www.Schneier.com
%C   5353 Dundas Street West, 4th Floor, Etobicoke, ON   M9B 6H8
%D   2012
%G   978-1-118-14330-8 1-118-14330-2
%I   John Wiley & Sons, Inc.
%O   U$24.95/C$29.95 416-236-4433 fax: 416-236-4448 www.wiley.com
%O   http://www.amazon.com/exec/obidos/ASIN/1118143302/robsladesinterne
%O   http://www.amazon.co.uk/exec/obidos/ASIN/1118143302/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/1118143302/robsladesin03-20
%O   Audience n+ Tech 2 Writing 3 (see revfaq.htm for explanation)
%P   365 p.
%T   “Liars and Outliers: Enabling the Trust that Society Needs to
Thrive”

Chapter one is what would ordinarily constitute an introduction or preface to the book.  Schneier states that the book is about trust: the trust that we need to operate as a society.  In these terms, trust is the confidence we can have that other people will reliably behave in certain ways, and not in others.  In any group, there is a desire to have people cooperate and act in the interest of all the members of the group.  In all individuals, there is a possibility that they will defect and act against the interests of the group, either for their own competing interest, or simply in opposition to the group.  (The author notes that defection is not always negative: positive social change is generally driven by defectors.)  Actually, the text may be more about social engineering, because Schneier does a very comprehensive job of exploring how confident we can be about trust, and the ways we can increase (and sometimes inadvertently decrease) that reliability.

Part I explores the background of trust, in both the hard and soft sciences.  Chapter two looks at biology and game theory for the basics.  Chapter three will be familiar to those who have studied sociobiology, or other evolutionary perspectives on behaviour.  A historical view of sociology and scaling makes up chapter four.  Chapter five returns to game theory to examine conflict and societal dilemmas.

Schneier says that part II develops a model of trust.  This may not be evident at a cursory reading: the model consists of moral pressures, reputational pressures, institutional pressures, and security systems, and the author is very careful to explain each part in chapters seven through ten: so careful that it is sometimes hard to follow the structure of the arguments.

Part III applies the model to the real world, examining competing interests, organizations, corporations, and institutions.  The relative utility of the four parts of the model is analyzed in respect to different scales (sizes and complexities) of society.  The author also notes, in a number of places, that distrust, and therefore excessive institutional pressures or security systems, is very expensive for individuals and society as a whole.

Part IV reviews the ways societal pressures fail, with particular emphasis on technology, and information technology.  Schneier discusses situations where carelessly chosen institutional pressures can create the opposite of the effect intended.

The author lists, and proposes, a number of additional models.  There are Ostrom’s rules for managing commons (a model for self-regulating societies), Dunbar’s numbers, and other existing structures.  But Schneier has also created a categorization of reasons for defection, a new set of security control types, a set of principles for designing effective societal pressures, and an array of the relation between these control types and his trust model.  Not all of them are perfect.  His list of control types has gaps and ambiguities (but then, so does the existing military/governmental catalogue).  In his figure of the feedback loops in societal pressures, it is difficult to find a distinction between “side effects” and “unintended consequences.”  However, despite minor problems, all of these paradigms can be useful in reviewing both the human factors in security systems, and in public policy.

Schneier writes as well as he always does, and his research is extensive.  In part one, possibly too extensive.  A great many studies and results are mentioned, but few are examined in any depth.  This does not help the central thrust of the book.  After all, eventually Schneier wants to talk about the technology of trust, what works, and what doesn’t.  In laying the basic foundation, the question of the far historical origin of altruism may be of academic philosophical interest, but that does not necessarily translate into an understanding of current moral mechanisms.  It may be that God intended us to be altruistic, and therefore gave us an ethical code to shape our behaviour.  Or, it may be that random mutation produced entities that acted altruistically and more of them survived than did others, so the population created expectations and laws to encourage that behaviour, and God to explain and enforce it.  But trying to explore which of those (and many other variant) options might be right only muddies the understanding of what options actually help us form a secure society today.

Schneier has, as with “Beyond Fear” (cf. BKBYNDFR.RVW) and “Secrets and Lies” (cf. BKSECLIE.RVW), not only made a useful addition to the security literature, but created something of value to those involved with public policy, and a fascinating philosophical tome for the general public.  Security professionals can use a number of the models to assess controls in security systems, with a view to what will work, what won’t (and what areas are just too expensive to protect).  Public policy will benefit from examination of which formal structures are likely to have a desired effect.  (As I am finishing this review the debate over SOPA and PIPA is going on: measures unlikely to protect intellectual property in any meaningful way, and guaranteed to have enormous adverse effects.)  And Schneier has brought together a wealth of ideas and research in the fields of trust and society, with his usual clarity and readability.

copyright, Robert M. Slade   2011     BKLRSOTL.RVW   20120104


Verizon data breach report

Interesting report by Verizon. Highlights:

  • External attacks are up 22% and are now responsible for 92% of losses.
  • Insider attacks are down 31%. (Finally implementing internal security measures and not just focusing on the perimeter?)
  • Victims were not ‘chosen’ because they were large, important or had financial data. They were simply the easiest targets.
  • 92% of loss resulted from simple, known vulnerabilities.

The conclusions sound a lot like the Gartner report:

“Every year that we study threat actions leading to data breaches, the story is the same; most victims aren’t overpowered by unknowable and unstoppable attacks. For the most part, we know them well enough and we also know how to stop them.”

And here’s the same thing in different wording:

“The latest round of evidence leads us to the same conclusion as before: your security woes are not caused by the lack of something new. They almost surely have more to do with not using, under using, or misusing something old.”

And of course, I like this one because it highlights Automated Vulnerability Assessment:

“SQL injection attacks, cross-site scripting, authentication bypass, and exploitation of session variables contributed to nearly half of breaches attributed to hacking or network intrusion. It is no secret that attackers are moving up the stack and targeting the application layer. Why don’t our defenses follow suit? As with everything else, put out the fires first: even lightweight web application scanning and testing would have found many of the problems that led to major breaches in the past year.”

Basically, your organization already has the security solution that it needs; you’re just not using it.


If you don’t want people to know, then shut up.

The CIA is complaining that news media and other entities are giving away information about its agents and operations.

Trouble is, the information being analysed has been provided by the CIA.

If the CIA is being too eager to promote themselves, or careless in censoring the material they do provide, is that the fault of the media?

In doing the CISSP seminars, I use lots of security war stories.  Some of them are from my own work.  Some of them I’ve collected from the attendees over the years.  It’s not hard to use the story to make a point, but leave absolutely no clues as to the company involved, let alone individuals.


Gartner on Vulnerability Assessment

For years, Gartner has been recommending VA/VM as the effective way to prevent successful attacks; they’ve just been a bit too low-key about it, in my opinion. Of course, as a VA vendor I’m not even going to pretend to be objective here, but I’ve always wondered whether the fact that most leading vendors are relatively small made Gartner pay less attention to the field.

Whatever the reason, Gartner just came out with Strategies for Dealing with the Increase in Advanced Targeted Threats.
Here are some nice quotes; I especially liked the one about 0-days. I’m in complete agreement with all of them:

Quotes from this article (emphasis mine):

Enterprises need to focus on reducing vulnerabilities

“There are existing security technologies that can greatly reduce vulnerability to targeted attacks.”

“… the real issue [is] focusing on the vulnerabilities that the attackers are exploiting.”

“The reality is that the most important issues are the vulnerabilities and the techniques used to exploit them, not the country that appears to be the source of the attack.”

“Own the vulnerability; don’t blame the threat: there are no unstoppable forces in cyber attacks.” (This one should be printed on T-shirts.)

“If IT leaders close the vulnerability, then they stop the curious teenager, the experimental hacker, the cybercriminal and the information warrior.”

“Many attacks that include zero-day exploits often use well-known vulnerabilities as part of the overall attacks.”


A recent flight …

Security wanted to open up my suitcase and look at the bag of chargers, USB sticks, etc, and was concerned about the laser pointers.  He decided they were pens, and I didn’t disabuse him of the notion.  Why disturb the tranquility of his ignorance?


HDCP Master Key Leaked

High-bandwidth Digital Content Protection (HDCP) is a form of copyright protection developed by Intel. It is designed to prevent the copying of digital audio and video as it travels across media interfaces such as HDMI, DisplayPort or Unified Display Interface (UDI).

The system is meant to stop HDCP-encrypted content from being played on devices that do not support HDCP or which have been modified to copy HDCP content. Before sending data, a transmitting device checks that the receiver is authorized to receive it. If so, the transmitter encrypts the data to prevent eavesdropping as it flows to the receiver.

Manufacturers who want to make a device that supports HDCP must obtain a license from Intel subsidiary Digital Content Protection, pay an annual fee, and submit to various conditions.

On 14th September 2010 the HDCP master key was somehow leaked, and published online in various places. At present it is unknown how the master key was obtained, or whether Intel is investigating how this happened.

The leaked master key is used to create all the lower level keys that are stored within devices, so you can see what a nightmare this must be for Intel.

Intel has threatened to sue, under intellectual property laws, anyone who makes use of this key. However, it will now only be a matter of time before we start seeing black-market devices appearing.

If anyone’s at all interested though, you can find the key here.


Reflections on Trusting Trust goes hardware

A recent Scientific American article does point out that it is getting increasingly difficult to keep our Trusted Computing Base sufficiently small.

For further information on this scenario, see: http://www.imdb.com/title/tt0436339/  [1]

We actually discussed this in the early days of virus research, and sporadically since.  The random aspect (see Dell’s problems with bad chips; the stories about malware on the boards were overblown, since the malware was simply stored in unused memory, rather than being in the BIOS or other boot ROM) is definitely a problem, but a deliberate attack is problematic.  The issue lies with the hundreds of thousands of hobbyists (as well as some of the hackers) who poke and prod at everything.  True, the chance of discovering the attack is random, but so is the chance of keeping the attack undetected.  It isn’t something that an attacker could rely upon.

Yes, these days there are thousands of components, being manufactured by hundreds of vendors.  However, note various factors that need to be considered.

First of all, somebody has to make it.  Most major chips, like CPUs, are a combined effort.  Nobody would be able to make and manufacture a major chip all by themselves.  And, in these days of tight margins and using every available scrap of chip “real estate,” someone would be bound to notice a section of the chip labeled “this space intentionally left blank.”  The more people who are involved, the more likely someone is going to spill the beans, at the very least about an anomaly on the chip, whether or not they knew what it did.  (Once the word is out that there is an anomaly, the lifespan of that secret is probably about three weeks.)

Secondly, there is the issue of the payload.  What can you make it do?  Remember, we are talking components, here.  This means that, in order to make it do anything, you are generally going to have to rely on whatever else is in the device or system in which your chip has been embedded.  You cannot assume that you will have access to communications, memory, disk space, or pretty much anything else, unless you are on the CPU.  Even if you are on the CPU, you are going to be limited.  Do you know what you are?  Are you a computer? Smartphone?  iPod?  (If the last, you are out of luck, unless you want to try and drive the user slowly insane by refusing to play anything except Barry Manilow.)  If you are a computer, do you know what operating system you are running?  Do you know the format of any disk connected to you?  The more you have to know how to deal with, the more programming has to be built into you, and remember that real estate limitation.  Even if all you are going to do is shut down, you have to have access to communications, and you have to a) be able to see all the traffic, and b) actually watch all the traffic, without degrading performance while doing so.  (OK, true, it could just be a timer.  That doesn’t allow the attacker a lot of control.)

Next, you have to get people to use your chips.  That means that your chips have to be as cheap as, or cheaper than, the competition.  And remember, you have to use up chip real estate in order to have your payload on the chip.  That means that, for every 1% of chip space you use up for your programming, you lose 1% of manufacturing capacity.  So you have to have deep pockets to fund this.  Your chip also has to be at least as capable as the competition.  It also has to be as reliable as the competition.  You have to test that the payload you’ve put in place does not adversely affect performance, until you tell it to.  And you have to test it in a variety of situations and applications.  All the while making sure nobody finds out your little secret.

Next, you have to trigger your attack.  The trigger can’t be something that could just happen randomly.  And remember, traffic on the Internet, particularly with people streaming videos out there, can be pretty random.  Also remember that there are hundreds of thousands of kids out there with nothing better to do than try to use their computers, smartphones, music players, radio controlled cars, and blenders in exactly the way they aren’t supposed to.  And several thousand who, as soon as something odd happens, start trying to figure out why.

Bad hardware definitely is a threat.  But the largest part of that threat is simply the fact that cheap manufacturers are taking shortcuts and building unreliable components.  If I was an attacker, I would definitely be able to find easier ways to mess up the infrastructure than by trying to create attack chips.

[1] Get it some night when you can borrow it, for free, from your local library DVD collection.  On an evening when you don’t want to think too much.  Or at all.  WARNING: contains jokes that six year olds, and most guys, find funny.


Forensics & The Fabled Chain Of Custody

I’m not very big into forensics any more, but occasionally I’ll get asked to take on a case or two, and whenever I do, the one thing that people always seem to get wrong is the chain of custody.

Now for those of you who have no idea what I’m talking about here, here is the blurb from Wikipedia on Chain Of Custody.

“Chain of custody (CoC) refers to the chronological documentation or paper trail, showing the seizure, custody, control, transfer, analysis, and disposition of evidence, physical or electronic. Because evidence can be used in court to convict persons of crimes, it must be handled in a scrupulously careful manner to avoid later allegations of tampering or misconduct which can compromise the case of the prosecution toward acquittal or to overturning a guilty verdict upon appeal. The idea behind recording the chain of custody is to establish that the alleged evidence is in fact related to the alleged crime, rather than having, for example, been planted fraudulently to make someone appear guilty.”

I have seen so many cases through the years where a single staff member has just gone and asked a user to please shut down their PC, then taken it away from them, jumped in a cab, and, as it was late, taken the PC home with them for the night. Then the next morning, they’ll walk into my office and ask me to do forensics on the host, as the user in question has been doing x, y and z wrong on company property and they want to fire and prosecute them. It’s very hard to explain to senior management that while I can do the forensics, and I’m sure that I’ll find something, they first need to prove to me that they didn’t put it there to frame the person. This usually results in the same old conversation, which goes along these lines.

Manager: “Of course I didn’t put it there! I’m a senior manager, why would I do that, what do I stand to gain?”

Me: “Well, it could be that you just don’t like this person, or on a personal level, they’ve done something to upset you”

Manager: “Well, I’m telling you that I didn’t put anything on his PC, and I’m a senior manager! So get started with the forensics asap, and let me know!”

Me: “You seem very defensive, it sounds like you may be hiding something?”

Manager: “I am not hiding anything, I just want you to prove that he was doing something wrong so that I can fire him and then get legal to prosecute!”

Me: “Okay, I’ll do what I’ve been asked. Just remember, though: I’m an IT Security guy, and you sound guilty to me, even though you may not be. Imagine what a lawyer would do with you? We have forensics procedures for bringing in users’ PCs that are visible to the entire company; next time, can you please take the time to read them?”

The senior manager then usually storms out of the office.

Following proper forensics procedure is of the utmost importance: if you do need to lay charges, you need to be able to prove that you did everything by the book. If you don’t have detailed procedures for your in-house forensics, maybe now is the time to start thinking about writing some…
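As a starting point for such procedures, here is a minimal sketch of what a chain-of-custody record could capture: hash the evidence image at seizure, then log every hand-off with who, when, and why, re-verifying the hash each time so tampering between transfers is detectable. The field names are illustrative, not any legal or forensic standard.

```python
import hashlib
from datetime import datetime, timezone

def evidence_hash(data: bytes) -> str:
    """SHA-256 fingerprint of the evidence image."""
    return hashlib.sha256(data).hexdigest()

class CustodyLog:
    def __init__(self, case_id: str, data: bytes):
        self.case_id = case_id
        self.original_hash = evidence_hash(data)   # hash taken at seizure
        self.entries = []

    def record_transfer(self, from_person, to_person, reason, data: bytes):
        # Log the hand-off and re-verify the evidence against the
        # hash taken at seizure.
        current = evidence_hash(data)
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "from": from_person,
            "to": to_person,
            "reason": reason,
            "hash": current,
            "intact": current == self.original_hash,
        }
        self.entries.append(entry)
        return entry
```

A log like this does not, by itself, stop a manager from taking a PC home overnight, but it does make every gap in custody visible to whoever reviews the case.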


National Strategy for Trusted Identities in Cyberspace

There is no possible way this could potentially go wrong, right?

Doesn’t the phrase “Identity Ecosystem” make you feel all warm and “green”?

It’s a public/private partnership, right?  So there is no possibility of some large corporation taking over the process and imposing *their* management ideas on it?  Like, say, trying to re-introduce the TCPI?

And there couldn’t possibly be any problem that an identity management system is being run out of the US, which has no privacy legislation?

The fact that any PKI has to be complete, and locked down, couldn’t affect the outcome, could it?

There isn’t any possible need for anyone (who wasn’t a vile criminal) to be anonymous, is there?


KHOBE: Say hello to my little friend(*)

Guess what? Your personal firewall/IDS/anti-virus/(insert next month’s buzzword here) isn’t going to save you from an attacker successfully executing code remotely on your machine:
http://www.zdnet.com/blog/hardware/update-new-attack-bypasses-every-windows-security-product/8268

So no, it’s not the doomsday weapon, but it’s definitely worthy of the Scarface quote in the title.
This isn’t surprising: researchers find ways to bypass security defenses almost as soon as those defenses are implemented (remember the non-executable stack?). Eliminating vulnerabilities in the first place is the way to go, guys, not trying to block attacks while hoping your ‘shields’ hold up.

(*) If you’re reading this out loud, you need to do so in a thick Cuban accent


Printers, the forgotten threat.

It seems that in this day and age, people have finally grasped the concepts of why it’s a good idea to patch systems regularly, run an anti-virus application, and have funky network appliances like firewalls and Intrusion Detection Systems. Which is a really great move in the right direction.

One thing that I will never understand, though, is that people will spend a fortune on new security tools and appliances, and then forget the basics.

Please, people, remember to lock down the items on your network that may seem insignificant to you; nine times out of ten, they are a foothold for a hacker. A prime example of this is printers. In the past I have managed to obtain really sensitive information from printers attached to networks in their default state, and also to waste valuable time and company resources.

Here are a few of the things that I’ve done on various assignments over the years with printers:

- Modify the default web console pages, and load them up with browser exploits

- Find valuable documents saved as files on the printers

- Use the printers as zombie hosts for nmap zombie network scans

- Tie up the printer for a day or so printing out the contents of my hard drive

- Waste paper and ink from doing the above

- Leave obscene messages on the console display

- Shut down the printer and fake the logon page to accomplish all of the above
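To give a feel for how trivial some of this is: many networked printers accept raw print jobs on TCP port 9100, and HP-style printers interpret PJL commands in those jobs, including the real `RDYMSG` command, which rewrites the text on the front-panel display. A hedged sketch (the host address is hypothetical; only the payload builder is exercised here):

```python
import socket

UEL = b"\x1b%-12345X"   # PJL Universal Exit Language marker

def pjl_display_payload(message: str) -> bytes:
    """Build a raw PJL job that sets the printer's ready message."""
    return (UEL + b"@PJL\r\n"
            + b'@PJL RDYMSG DISPLAY = "' + message.encode("ascii") + b'"\r\n'
            + UEL)

def set_printer_display(host: str, message: str, port: int = 9100) -> None:
    # Send the job to the printer's raw-print port; no authentication
    # is required on a default-configured device.
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(pjl_display_payload(message))

# set_printer_display("192.0.2.23", "INSERT COIN")   # hypothetical printer
```

If a dozen bytes of unauthenticated PJL can rewrite the console, it should be no surprise that the rest of the list above is equally easy on an unlocked device.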

Here’s a pretty useful link for all those with HP printers on their estate as well.

So, going forward, please remember that if it’s attached to your network, it needs to be secured. Most printers these days come with security configuration options, but they have to be enabled, so take the extra five minutes to make the world a better place.


LinkedIn as a recruitment resource

I’m working on an article about the risks in social networking right now, and I’ve come across yet another blog posting about how to use LinkedIn (and Facebook, and Twitter, etc.) to look for job candidates.

I’ve never quite been able to figure out the attraction of using LinkedIn as a source of employment candidates.  The one thing you know about active socnet users is that they are active socnet users.  If you are at all concerned about your employees wasting time at work, you know right off the top that this is a person who will do that.

Of course, if your company is trying to “get into” the socnet world, you might think this is a good thing.  But it’s quite a leap of faith to think they would do it for you, rather than themselves.

(For us in infosec, there would be the added concern that this person is either telling way too much about themselves, or “tailoring” the facts.  So you have either a failure of confidentiality, or a failure of integrity.)


The Achilles heel of the Internet

It won’t surprise you if I say the Achilles heel of the Internet is passwords. But the problem is not that our passwords are too weak: in fact, the bigger problem is that our passwords are too strong.

Preventing brute force password attacks is a problem we know how to solve. The problem is that web service providers have bad habits that make our passwords less secure. Remember the saying “a chain is only as strong as its weakest link”? If you strengthen an already strong link in the chain while weakening another, you are not improving security; you are usually decreasing the overall security of the system. Those “bad habits”, mostly of web services that require a login, are all wrapped in supposed ‘security concerns’: some security consultant fed the CSO a strict compliance document, and by implementing these rigid security methods they are actually making their users less secure.

Here are some examples.

Don’t you remember who I am?
What’s the easiest way to fight phishing? Have the web site properly identify itself. When the bank calls, most people don’t ask the person on the other end of the line to prove they are really from the bank (though they really should). The reason is that you assume that if they knew how to reach you, they are indeed your bank.

So why not do the same for phishing? Bank of America uses SiteKey, which is a really neat trick. But you don’t have to go that far: just remember my username and I’ll have more confidence that you are the right web site. In fact, if I see a login page that does not remember my username I’ll have to stop and think (since I typically don’t remember all my usernames), and that gives me more time to spot suspicious things about the page.

If you can tell me what my username is, the chances are higher that you are the legitimate site. But some sites block my browser from remembering my username in the name of increased security. Well, they’re not increasing it.

Let me manage my passwords

This is where most financial sites really fight me: they work so hard to prevent the browser from remembering my password.

Why? I can see the point when I’m on a public terminal. But what if I’m using my own laptop? By letting my browser remember the password I am decreasing the chance of phishing, and in fact, if I know for certain a web site will let my browser remember the password (rather than force me to type it in), I select a strong, complicated password, since I don’t have to remember it. In some cases I even stick with the randomly assigned password; I don’t care, as long as my browser remembers it.

But some people are stuck on the “security != usability” equation. They are wrong; in many cases usability increases security. This is one of those cases.

Not to mention they will almost always lose the fight. If PayPal won’t let Firefox remember the password, I’ll find ways around it. Or maybe I’ll just write the password on a post-it note and stick it on my monitor. All of those ways are less secure than Firefox’s built-in password manager.

Oh, and forcing me to choose a strong password (‘strong’ being something absurd and twisted that makes no security sense)? Good luck with that. I don’t really mind these silly efforts, because they are so easy to circumvent that they are not even a bother anymore. But remember: putting security measures in place that will be circumvented by 90% of your users means teaching them not to take your security seriously.

Stop blocking me
Next week I will have my annual conversation with the Lufthansa ‘frequent flyer’ club support people. It’s a conversation I have at least once a year (sometimes more often), whenever my login gets blocked.

Why does my login get blocked? Because I get the password wrong too many times. What’s “too many”? I wish I knew. Since I pretty much know what my password is, I get it right within 4-5 tries, so I guess Lufthansa blocks me after 3 or 4. I don’t know for sure, because I also need to guess my username (long story; let’s just say Lufthansa has two sets of usernames and passwords and you need to match them up correctly). So the bottom line is that I get routinely blocked and need to call their office in Germany to release the account.

Why are they blocking me? To prevent brute-force password attacks, I’m guessing, and that’s a good thing. But why not release the block automatically after a day? A week? An hour? Why not authenticate me some other way (e-mail)? I bet I can guess why: because everybody who complains is told that it’s “due to security concerns”. Nobody can argue with that, can they? After all, security is the opposite of usability. Our goal as security professionals is to make our services not work, and hence infinitely secure.
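The auto-release idea is almost trivial to build. Here is a minimal sketch of a lockout counter that expires on its own; the threshold of 5 attempts and the 15-minute window are my own arbitrary assumptions, not anything Lufthansa (or anyone else) actually uses:

```python
import time

LOCKOUT_THRESHOLD = 5      # failed attempts before a temporary block (assumed value)
LOCKOUT_SECONDS = 15 * 60  # auto-release after 15 minutes, not forever (assumed value)

failed = {}  # username -> (failure_count, time_of_last_failure)

def is_locked(user, now=None):
    """Locked only while the threshold is exceeded AND the window hasn't expired."""
    now = time.time() if now is None else now
    count, last = failed.get(user, (0, 0.0))
    if count < LOCKOUT_THRESHOLD:
        return False
    if now - last >= LOCKOUT_SECONDS:
        failed.pop(user, None)  # the block expires on its own; no phone call needed
        return False
    return True

def record_attempt(user, success, now=None):
    """Count failures; a successful login clears the counter."""
    now = time.time() if now is None else now
    if success:
        failed.pop(user, None)
    else:
        count, _ = failed.get(user, (0, 0.0))
        failed[user] = (count + 1, now)
```

A brute-force attacker is still throttled to a handful of guesses per window, but a legitimate user who fumbles the password just waits a few minutes instead of calling an office in Germany.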

So Lufthansa is losing my web site visits, which means less advertising money, and it is making me agitated, which is not the right customer retention policy. Some credit card issuers like to do this a lot, which means I can’t log in to see my credit card balance and watch for suspicious activity. Now that’s cutting off your nose to spite your face.

Don’t encourage me to give out my password
How many web sites have my real Twitter password? Must be over half a dozen, maybe more. If you are using any Twitter client, you have given it your Twitter username and password. If you are using TwitPic, or any of the hundreds of other web 2.0 services that tweet for you automatically, they have your login credentials. Heck, even Facebook has my Twitter credentials; I bet Facebook could flood Twitter in an instant if it decided to fight dirty.

Twitter wants me to use all these clients because it raises my Twitter activity, and that’s OK. But there are plenty of single-sign-on methods out there that are not too complicated, and all of them are more secure than spreading my real username and password all over the place. Even Boxee has my Twitter login, which makes me think: if I were building a web 2.0 service and asked everyone who opens an account to give me their Twitter login details, how many would do so just out of habit?
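The delegation schemes I have in mind (OAuth is the best-known example) all share one idea: the service hands the third-party client a random, scoped, revocable token instead of your real password. A minimal sketch of that idea, with hypothetical names and scopes of my own invention, not any real API:

```python
import secrets

# Hypothetical token store: token -> (account, set of allowed actions)
tokens = {}

def issue_token(account, scopes):
    """The real service issues a random token limited to specific actions,
    so the client never sees the account password."""
    token = secrets.token_hex(16)
    tokens[token] = (account, set(scopes))
    return token

def allowed(token, action):
    """A third-party client presents the token; it can do what it was
    granted (e.g. post updates) but nothing else."""
    entry = tokens.get(token)
    return entry is not None and action in entry[1]

def revoke(token):
    """If one client is compromised, you revoke one token; the password
    and every other client are unaffected."""
    tokens.pop(token, None)
```

That last property is the whole point: with shared passwords, one leaky web 2.0 startup means changing your password everywhere.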

Giving out my credentials is not necessarily a bad thing. Services like Mint and PageOnce are good because they make it unnecessary for me to log in to all my financial web sites; the less I log in, the better. Assuming these sites have better security than my own computer, I’d rather have them log in to my financial accounts than do it myself. This leap of faith is not for everyone: some will ask what happens if these startups go out of business. Cybercrime experts like Richard Stiennon will argue that an insider breach at one of those companies could be devastating. And of course Noam will say that until they’ve been scanned by Beyond Security he won’t give them any sensitive information. I agree with them all, and yet I use both Mint.com and PageOnce. So I guess it boils down to a personal judgment call. I personally think there’s value in these types of services.

Stick with passwords

One thing I am almost allergic to is “the next thing to replace passwords”. Don’t give me USB tokens or credit-card sized authentication cards. SMS me if you must, but even that’s marginal. Don’t talk to me about new ideas to revolutionize logins. A non-trivial password, along with a mechanism that blocks multiple retries (blocks for a certain period of time, not forever – got that, Lufthansa?), is good enough. It’s not foolproof: a keylogger will defeat all of those methods, but keylogging Trojans are also capable of modifying traffic, so no matter what off-line method you use for authentication, the transaction itself will be modified and the account will be compromised. Trojans are a war we have lost; let’s admit that and move on. Any other threat can be stopped by simple, sensible login policies that do not make users wish they had never signed up for your service.

There are other password ideas out there. Bruce Schneier suggests displaying passwords while they are typed. I think that makes absolutely no sense for 99% of the people out there, but I do agree that we are fighting the wrong wars when it comes to passwords, and fresh thinking about them is a good thing. The current situation is that on one hand we prevent our users from using passwords properly, and on the other hand we leave our services open to attack. That doesn’t help anyone.
