Easy login to a Korean Point-of-Sale device

Some things are cross-cultural, it seems. Especially when it comes to trivial security mishaps.
So I’m at a PoS terminal in a large department store in Seoul, and while I’m waiting for the register to ring up my order, I look at the touchscreen where I will be asked for my signature in a moment. I notice a little icon that looks like ‘settings’. How can I not click on it?

Initial PoS screen
Oh, it needs a password. Must be this PCI compliance thing everybody is raving about. And no, wiseass, 1-2-3-4-5 doesn’t work.

Asking for password

…But 1-2-3-4 does.

Password

Yup. Unlocked.
Now I need to polish up my Korean to figure out what to do next. Suggestions?

Menu Screen

Sorry for the full disclosure, guys. And that includes all of you who now need to change your luggage combination.


Get trained for emergencies

I’ve mentioned this before.

We seem to have had a number of disasters this year: earthquakes, a tsunami, a few hurricanes (one currently sweeping Japan, and another building right now off the east coast of the US), wildfires, you name it.  In the US, this is National Preparedness Month.

So this is a good time to get trained.  It gets you CPEs, usually for free.

And, in a disaster, it makes you part of the solution, not part of the problem.


The “Immutable Laws” revisited

Once upon a time, somebody at Microsoft wrote an article on the “10 Immutable Laws of Security.”  (I can’t recall how long ago: it’s now listed as “Archived content.”  And I like the disclaimer that “No warranty is made as to technical accuracy.”)  Now these “laws” are all true, and they are helpful reminders.  But I’m not sure they deserve the iconic status they have achieved.

In terms of significance to security, you have to remember that security depends on the situation.  As it is frequently put, one (security) size does not fit all.  Therefore, these laws (which lean heavily towards malware) may not be the most important for all users (or companies).

In terms of coverage, there is little or nothing about management, risk management, classification, continuity, secure development, architecture, telecom and networking, personnel, incidents, or a whole host of other topics.

As a quick recap, the laws are:

Law #1: If a bad guy can persuade you to run his program on your computer, it’s not your computer anymore

(Avoid malware.)

Law #2: If a bad guy can alter the operating system on your computer, it’s not your computer anymore

(Avoid malware, same as #1.)

Law #3: If a bad guy has unrestricted physical access to your computer, it’s not your computer anymore

(Quite true, and often ignored.  As I tell my students, I don’t care what technical protections you put on your systems, if I have physical access, I’ve got you.)

Law #4: If you allow a bad guy to upload programs to your website, it’s not your website any more

(Sort of a mix of access control and avoiding malware, same as #1.)

Law #5: Weak passwords trump strong security

(You’d think this relates to access control, like #4, but the more important point is that you need to view security holistically.  Security is like a bridge, not a road.  A road halfway is still partly useful.  A bridge half-built is a joke.  In security, any shortcoming can void the whole system.)

Law #6: A computer is only as secure as the administrator is trustworthy

(OK, there’s a little bit about people.  But it’s not just administrators.  Security is a people problem: never forget that.)

Law #7: Encrypted data is only as secure as the decryption key

(This is known as “Kerckhoffs’ Law.”  It’s been known for about 130 years.  More significantly, it is a special case of the fact that security-by-obscurity [SBO] does not work.  There’s a quick sketch of how this plays out with weak keys after the list.)

Law #8: An out of date virus scanner is only marginally better than no virus scanner at all

(I’m not sure that I’d even go along with “marginally.”  As a malware expert, I frequently run without a virus scanner: a lot of scanners [including MSE] impede my work.  But, if I were worried, I’d never rely on an out-of-date scanner, or one that I considered questionable in terms of accuracy [and there are lots of those around].)

Law #9: Absolute anonymity isn’t practical, in real life or on the Web

(True.  But risk management is a little more complex than that.)

Law #10: Technology is not a panacea

(Or, as (ISC)2 says, security transcends technology.  And, as #5 implies, management is the basic foundation of security, not any specific technology.)


A recent flight …

The security screener wanted to open up my suitcase and look at the bag of chargers, USB sticks, etc., and was concerned about the laser pointers.  He decided they were pens, and I didn’t disabuse him of the notion.  Why disturb the tranquility of his ignorance?


Calm acceptance vs self-help

As an emergency services volunteer, I’ve been looking for stories about how the Japanese have been handling displacements, evacuations, and those left homeless following the quake and tsunami.  Oddly, despite having all kinds of video and pictures coming from various areas of Japan, these stories seem to be missing (possibly pushed out of the news-stream by boats running over cars, and a steaming reactor).

Yesterday I started to see a few, some noting that the Japanese culture of calm acceptance was contributing to orderly lines and a lack of panic.  (And then saw some reports that a lack of action by the government was starting to wear on the calm acceptance.  Six days after the quake, food and water aren’t getting through to areas which are only as far apart as Ottawa is from Toronto, or Boston from Baltimore.)

So I was intrigued to find, this morning, this report of someone running counter to his own culture.

(And, once again, I’ll take the opportunity to promote the idea that all security professionals should consider getting training as emergency services volunteers.  You’ll know what to do in or for an emergency, you’ll be a help instead of a drain, and, in the meantime, you can probably apply it to BCP, and get CPE credits for your training.)


Great new security tech, or fraud?

While at CanSecWest, I was noting a news story about how somebody had, yet again, defrauded the US government and military by selling them a terribly sophisticated computer algorithm that promised to find secret information about enemies and/or terrorists, but actually didn’t work.  I suspect that this will be a complex case, since the vendor will undoubtedly claim that his work is so sophisticated and complicated that it does work, it’s just that the users didn’t understand it.

In view of this, I found it really interesting to note a very similar case, just a few days later.  Computerized Voice Stress Analyzers (CVSAs) have been promoted and sold for at least 25 years now.  This despite the fact that, four years ago, the U.S. Department of Justice did a study and concluded that “VSA programs show poor validity – neither program efficiently determined who was being deceptive about recent drug use. The programs were not able to detect deception at a rate any better than chance … The data also suggest poor reliability for both VSA products when we compared expert and novice interpretations of the output.”

In a sense the CVSA case is much worse: since it is a private company selling to private companies, there is nobody to point out that these people are a) wasting money, and b) making poor hiring decisions based on what is essentially a coin flip.


Bring on the cyberwar

There is something special about Berlin. Just a feeling that can’t be fully explained, and that the cold and snowy weather only enhances. But I also can’t help thinking about the Len Deighton cold-war-espionage books, Checkpoint Charlie, East and West clashing in this city that was like the explosive tip of a gunpowder barrel.

When I was growing up, Sting sang “I hope the Russians love their children too,” and what he meant was love them enough not to annihilate the entire planet. War was serious, and war between world powers was scary. Remember War Games? You’d think people would be afraid of Kevin Mitnick’s hacking skills, but what they were more afraid of was him starting World War III and potentially wiping out hundreds of millions of people.

So I must admit I’m slightly amused by the threats of ‘cyberwar’. Let’s assume for a minute that John Lennon was wrong and there will never be ‘peace on earth’. Let’s assume that, whether it’s because of testosterone, ego, or some other reason taught in Psychology 101, nations will continue to fight each other. If that’s the case, what better way to do it than on the Internet? Have them hack each other ad nauseam; bring down computers or networks, plant Trojan horses, and steal sensitive data. Assuming the current superpowers are China and the US, isn’t cyberwar the perfect way to vent mutual aggression without human casualties?

Of course, there’s a worst-case scenario where this stops being funny: if cyberwar can be used to shut down critical infrastructure, people will get killed. But that doesn’t seem to be the direction this “war” is going. Nations fighting on the Internet? I say bring it on.

On a related note, check out Richard Stiennon’s new book about cyberwar. And if you are in DC, go hear him speak on Thursday about Google Aurora, Stuxnet, and the WikiLeaks DoS attacks. Really fascinating stuff.


Close the Washington Monument

Bruce Schneier suggests closing the Washington Monument:

An empty Washington Monument would serve as a constant reminder to those on Capitol Hill that they are afraid of the terrorists and what they could do. They’re afraid that by speaking honestly about the impossibility of attaining absolute security or the inevitability of terrorism — or that some American ideals are worth maintaining even in the face of adversity — they will be branded as “soft on terror.”

Damn right.


Social Engineering and Body Language

Social engineering is defined by Wikipedia as “the act of manipulating people into performing actions or divulging confidential information, rather than by breaking in or using technical cracking techniques; essentially a fancier, more technical way of lying. While similar to a confidence trick or simple fraud, the term typically applies to trickery or deception for the purpose of information gathering, fraud, or computer system access; in most cases the attacker never comes face-to-face with the victim.”

Over the years I’ve done my fair share of social engineering, and the one thing I have always found to come in handy is being able to read people’s body language. Noticing when someone is pacifying themselves as you ask certain questions, and knowing what to home in on, has helped me countless times in the past. It’s the little things: for example, how extremely nervous people get when, even as they’re blatantly telling you that company policy won’t let them give you access, you say something like “Well, I’m not too sure, Mr Jones, that your manager would be too happy about me not being able to gain access to this room, as he’s paying me to have a look around in your data hall.”

I would encourage anyone who performs penetration testing that includes social engineering exercises to really take the time to read up on body language and how you can make it work for you. It will improve your social engineering skills, and that in turn will help you to help your clients.

There are countless books on this topic that you can get from most decent bookstores to help you along your way, and the good news is that some of these are really not expensive at all.

Another thing that you may want to look into is reading micro-expressions, although I would recommend that you start by learning basic body language first, and then progress on to micro-expressions.


Reflections on Trusting Trust goes hardware

A recent Scientific American article does point out that it is getting increasingly difficult to keep our Trusted Computing Base sufficiently small.

For further information on this scenario, see: http://www.imdb.com/title/tt0436339/  [1]

We actually discussed this in the early days of virus research, and sporadically since.  The random aspect (see the Dell problems with bad chips; the stories about malware on the boards were overblown, since the malware was simply stored in unused memory, rather than being in the BIOS or other boot ROM) is definitely a problem, but a deliberate attack is problematic for the attacker.  The issue lies with the hundreds of thousands of hobbyists (as well as some of the hackers) who poke and prod at everything.  True, the chance of discovering the attack is random, but so is the chance of keeping the attack undetected.  It isn’t something that an attacker could rely upon.

Yes, these days there are thousands of components, being manufactured by hundreds of vendors.  However, note various factors that need to be considered.

First of all, somebody has to make it.  Most major chips, like CPUs, are a combined effort.  Nobody would be able to design and manufacture a major chip all by themselves.  And, in these days of tight margins and using every available scrap of chip “real estate,” someone would be bound to notice a section of the chip labeled “this space intentionally left blank.”  The more people who are involved, the more likely someone is going to spill the beans, at the very least about an anomaly on the chip, whether or not they knew what it did.  (Once the word is out that there is an anomaly, the lifespan of that secret is probably about three weeks.)

Secondly, there is the issue of the payload.  What can you make it do?  Remember, we are talking about components here.  This means that, in order to make it do anything, you are generally going to have to rely on whatever else is in the device or system in which your chip has been embedded.  You cannot assume that you will have access to communications, memory, disk space, or pretty much anything else, unless you are on the CPU.  Even if you are on the CPU, you are going to be limited.  Do you know what you are?  Are you a computer?  A smartphone?  An iPod?  (If the last, you are out of luck, unless you want to try to drive the user slowly insane by refusing to play anything except Barry Manilow.)  If you are a computer, do you know what operating system you are running?  Do you know the format of any disk connected to you?  The more you have to know how to deal with, the more programming has to be built into you, and remember that real-estate limitation.  Even if all you are going to do is shut things down, you have to have access to communications, and you have to a) be able to watch all the traffic, and b) do so without degrading performance.  (OK, true, it could just be a timer.  That doesn’t allow the attacker a lot of control.)

Next, you have to get people to use your chips.  That means that your chips have to be as cheap as, or cheaper than, the competition.  And remember, you have to use up chip real estate in order to have your payload on the chip.  That means that, for every 1% of chip space you use up for your programming, you lose 1% of manufacturing capacity.  So you have to have deep pockets to fund this.  Your chip also has to be at least as capable as the competition.  It also has to be as reliable as the competition.  You have to test that the payload you’ve put in place does not adversely affect performance, until you tell it to.  And you have to test it in a variety of situations and applications.  All the while making sure nobody finds out your little secret.

Next, you have to trigger your attack.  The trigger can’t be something that could just happen randomly.  And remember, traffic on the Internet, particularly with people streaming videos out there, can be pretty random.  Also remember that there are hundreds of thousands of kids out there with nothing better to do than try to use their computers, smartphones, music players, radio controlled cars, and blenders in exactly the way they aren’t supposed to.  And several thousand who, as soon as something odd happens, start trying to figure out why.

Bad hardware definitely is a threat.  But the largest part of that threat is simply the fact that cheap manufacturers are taking shortcuts and building unreliable components.  If I were an attacker, I would definitely be able to find easier ways to mess up the infrastructure than by trying to create attack chips.

[1] Get it some night when you can borrow it, for free, from your local library DVD collection.  On an evening when you don’t want to think too much.  Or at all.  WARNING: contains jokes that six year olds, and most guys, find funny.


Sound good?

By the way, in non-Sonne-erous G8/20 news, our government(s) have spent a billion dollars on security for a couple of days of meetings.  Even given the degraded value of the American billion, that’s a lot of money.

Part of it was used to buy sound cannons.  (The police don’t like you saying that: they prefer the term “long range sonic control devices.”)  These sound cannons generate noise at 130 decibels, which the civil liberties folks are concerned will damage human hearing.

That’s the same level of noise a vuvuzela makes.

So, look, why didn’t we save the billion dollars, go down to Canadian Tire, and, for a hundred bucks (possibly in Canadian Tire money), equip the entire riot squad with vuvuzelas?


And the winners of the oldest incident contest are…

Open Security Foundation’s DataLossDB has announced the winners of the oldest incident contest.

One of the oldest documented incidents is the TRW incident from 1984, when a database containing the credit histories of 90 million American citizens was breached.
Link here.

Update: The winner is an incident from August 1953, when SSNs were lost.


The oldest vulnerability is known – let’s find the oldest data loss incident

The oldest documented vulnerability in the computer security world is a password file disclosure vulnerability from 1965, found by Mr. Ryan Russell.

Open Security Foundation – the organization behind OSVDB and DataLossDB – has launched a competition to find the oldest documented data loss incident.

The last day to make a submission is next Friday – 15th May.
The link is easy to remember – datalossdb.org/oldest_incidents_contest.


Give me your fingerprints, I’ll sell you a mobile phone

There will be a new national register of mobile phone users in Mexico.

Under a new law published on Monday and due to be in force in April, mobile phone companies will have a year to build up a database of their clients, complete with fingerprints. The idea would be to match calls and messages to the phones’ owners.

(underlining added)

Mexico has a very strong culture of using prepaid phones.


Snow and security

I live in Vancouver.  Despite the fact that this is in Canada, we do not live in igloos, nor do we have to get around by dogsled.  Most of the time.  At the moment, we are having an unusual spell of snowy weather.  The snow is here at all, for one thing.  It’s been here for more than two weeks, for another.  It’s also much deeper than usual: more than 30 cm (a foot, for the Americans) lies on level areas in many places, and the piles where the snow has been shovelled are getting pretty high.

That’s not unusual in many places, but in Vancouver it is practically unheard of.

The weather in Vancouver is very similar to the weather in Seattle, so Seattle is snowed in, too.  And I was discussing this with a much younger friend in that area.  I was complaining that nobody around here was shovelling their sidewalks.  He was complaining that people in his area were.

Those of you who live in the deep snow areas will probably not understand his complaint.  You see, in this region, when we do get snow, the temperatures tend to hover around the freezing point.  So, some days the snow will start to melt.  And at nights, or on other days, it freezes again.  So if you don’t shovel the sidewalk properly, you create a bit of a skating rink.

The key is to shovel properly.  There are a few factors involved in this, but the primary one is to shovel right to the edge of the sidewalk.  If you can see even one blade of grass at the edge, then, when the snow starts to melt, the meltwater goes into the ground.  Leave even a centimetre of snow on the edge of the walk, and the meltwater runs all over the sidewalk, and, when it freezes, you’ve got the slickest, most treacherous footing imaginable.

Which brings me to security.  For a number of years, many of us in the field have faced the extreme frustration of preparing security architectures, designs, and plans to fit the particular business and environment in which we find ourselves.  Finely tuned, appropriate to the assets and risks involved, and complete.  Only to have some bean-counter come along and say that this is great, but a bit too expensive: couldn’t we get half the security for half the cost?

The answer, as we know, is no.  Security is not something you buy by the kilogram.  Security is not like a blanket, where the more you have, the warmer you are: it’s like a roof or tent, where you’ve either got one up or not.  Security is not like a road, where, no matter how long it is, it is of some use: it’s like a bridge, where, if it’s even a little bit too short it is no use at all.

So, here’s another illustration for you.  Security is like clearing the snow in Vancouver.  Do it right, out to the very edge, and you’re golden.  Do it quick and dirty and cheap, with one shovel width down the middle, and you’re creating a problem for yourself.  And for others.


Disasters cost money?

A BBC story notes that a German re-insurance concern has raised the issue of increasing natural disasters, and a possible tie to climate change/global warming.

Now that the money/finance people are getting scared, will we finally do something about business continuity and disaster planning?

(Likely answer: nah.)
