Posts by David Harley

David Harley has worked in security since 1986, notably as security analyst for a major medical research charity, then as manager of the NHS Threat Assessment Centre. Since 2006 he has worked as an independent consultant. He also holds the position of Senior Research Fellow at ESET. His books include Viruses Revealed and the AVIEN Anti-Malware Defense Guide for the Enterprise. He is a frequent speaker at major security conferences, and a prolific writer of blogs and other articles. If he had any free time, he would probably spend most of it playing the guitar.

Hiding in Plain Sight

“Charity, dear Miss Prism, charity! None of us are perfect. I myself am peculiarly susceptible to draughts.” (Dr. Chasuble, in The Importance of Being Earnest)

Not long ago, I was – inevitably – asked a number of questions about NSA and Prism, one of which was “Can you protect yourself against it somehow?”

To which I responded: “I suspect that effective self-concealment from SIGINT functionality like ECHELON is probably not only out of reach of the average person, but might also actually attract more active investigation.”

And it seems I wasn’t far wrong. Subsequent revelations indicate that – as Lisa Vaas of Sophos (among many others) observed – using Tor and other means to hide your location piques the NSA’s interest in you. That works because people who hide their location will be assumed to be non-Americans, and those of us outside the US are considered fair game even if we’re communicating with Americans. Still, there’s a sufficiency of loopholes that makes USians talking to USians almost equally justifiable as targets.

In particular, it turns out that “all communications that are enciphered or reasonably believed to contain secret meaning” are also fair game, even if they’re known to be domestic. But the grounds for hanging onto harvested information apparently include communications containing “significant foreign intelligence information”, “evidence of a crime”, “technical data base information” (such as encrypted communications), or “information pertaining to a threat of serious harm to life or property”. You might wonder how many electronic communications aren’t encrypted these days at some stage during their transmission… But I suppose it doesn’t really matter whether the NSA is exceeding its brief by paying too much attention to too many all-American transactions, since apparently the UK’s GCHQ is tapping every fibre-optic cable it can lay hands on and sharing its data with our Transatlantic cousins.

It might seem strange that the security community isn’t getting more worked up about all this, but that’s probably because none of us really believe that government and law enforcement agencies worldwide aren’t carrying out information gathering and analysis to the fullest extent that their resources permit. The problem with establishing a balance between the right to privacy of the individual and the right to security of the majority is not really about the gathering of information. Not that there’s much likelihood of the forty-niners (I’m thinking Gold Rush, not football) of the world’s intelligence agencies giving up panning the gravel beds of the world’s data streams.

What really matters is (a) what they do with the nuggets and (b) what they do with the stuff that isn’t nuggets. It would be nice to think that where legislation limiting the State’s right to surveillance fails because of the sheer volume of data, legislation limiting the use that can be made of information gathered collaterally would at least partly compensate. However, it’s none too clear that this is the case right now in the Five Eyes community, far less among states with less of a tradition of observing democratic and libertarian principles. In the meantime, if you’re at all concerned about the privacy of your data, you might want to consider John Leyden’s suggestion of a combination of carrier pigeon and one-time pad, bearing in mind that if an out-of-band communication does come to the attention of the authorities, it’s likely to attract attention rather than deflect it. Which is where I came in.

“The good ended happily, and the bad unhappily. That is what fiction means.” (Miss Prism, in The Importance of Being Earnest)

The death of AV. Yet again.

And in other news, Gunter Ollman joins in the debate as to whether Imperva’s quasi-testing is worth citing (just about) and, with more enthusiasm, whether AV is worth paying for or even still breathing. If you’ve come across Ollman’s writings on the topic before, it won’t surprise you that his answer is no. If you haven’t, he’s thoughtfully included several links to other articles where he’s given us the benefit of his opinions.

If it’s free, never ever bothers me with popups, and I never need to know it’s there, then it’s not worth the effort uninstalling it and I guess it can stay…

Ollman notes:

In particular there was great annoyance that a security vendor (representing an alternative technology) used VirusTotal coverage as their basis for whether or not new malware could be detected – claiming that initial detection was only 5%.

However, he doesn’t trouble himself to explain why the anti-malware industry (and VirusTotal itself) is so annoyed, or to comment on Imperva’s squirming following those criticisms. Nor does he risk exposing any methodology of his own to similar criticism when he claims that:

desktop antivirus detection typically hovers at 1-2% … For newly minted malware that is designed to target corporate victims, the rate is pretty much 0% and can remain that way for hundreds of days after the malware has been released in to the wild.

Apparently he knows this from his own experience, so there’s no need to justify the percentages. And by way of distraction from this sleight of hand, he introduces ‘a hunchbacked Igor’ whom he visualizes ‘bolting on an iron plate for reinforcement to the Frankenstein corpse of each antivirus product as he tries to keep it alive for just a little bit longer…’ Amusing enough, I suppose, at any rate if you don’t know how hard those non-stereotypes in real anti-malware labs work at generating proactive detections for malware we haven’t yet seen, and at building multi-layered protection. But this is about cheap laughs at the expense of an entire industry sector that Ollman regards as reaping profits that should be going to IOActive. Consider this little exchange on Twitter.

@virusbtn
Imperva’s research on desktop anti-virus has stirred a fierce debate. @gollmann: bit.ly/XE76eS @dharleyatESET: bit.ly/13e1TJW

@gollmann
@virusbtn @dharleyatESET I don’t know about “fierce”. It’s like prodding roadkill with a stick.

What are we, 12 years old? Fortunately, other tweeters seem to be seeing through this juvenilia.

@jarnomn
@gollmann @virusbtn @dharleyatESET Again just methaphors and no data. This conversation is like trainwreck in slow motion :)

The comments to the blog are also notable for taking a more balanced view: Jarno succinctly points to VirusTotal’s own view on whether its service is a realistic guide to detection performance, Kurt Wismer puts his finger unerringly on the likely bias of Ollman’s nebulous methodology, and Jay suggests that Ollman lives in a slightly different (ideal) world (though he puts it a little more politely than that). But no doubt the usual crop of AV haters, Microsoft haters, Mac and Linux advocates, scammers, spammers and downright barmpots will turn up sooner or later.

There is, in fact, a rational debate to be held on whether AV – certainly raw AV with no multi-layering bells and whistles – should be on the point of extinction. The rate of detection for specialized, targeted malware like Stuxnet is indeed very low, with all-too-well-known instances of low-distribution but high-profile malware lying around undetected for years. (It helps if such malware is aimed at parts of the world where most commercial AV cannot legally reach.) And Gunter Ollman is quite capable of contributing a great deal of expertise and experience to it. But right now, it seems to me that he and Imperva’s Tal Be’ery are, for all their glee at the presumed death of anti-virus, a pair of petulantly twittering budgies trying to pass themselves off as vultures.

David Harley
AVIEN/Small Blue-Green World/Mac Virus/Anti-Malware Testing
ESET Senior Research Fellow

Anti-Virus, now with added Michelangelo

Apparently it’s all our fault. Again. Not only is anti-virus useless, but we’re responsible for the evolution and dramatically increased volume of malware. According to something I read today, “If it wasn’t for the security industry the malware that was written back in the 90’s might still be working today.”

I guess that’s not as dumb as it sounds: we have forced the malware industry to evolve (and vice versa). But you could just as easily say:

“The medical profession is responsible for the evolution and propagation of disease. If it wasn’t for the pharmaceutical industry illnesses that killed people X years ago might still be killing people today.”

And to an extent, it would be true. Some conditions have all but disappeared, at any rate in regions where advanced medical technology is commonplace, but other harder-to-treat conditions have appeared, or at least have achieved recognition.

I can think of plenty of reasons for being less than enthusiastic about the static-signature/malcode-blacklisting approach to malware deterrence, though I get tired of pointing out that commercial AV has moved a long way on from that in the last couple of decades. Even so, if pharmaceutical companies had to generate vaccines at the rate that AV labs have to generate detections (even highly generic detections) we’d all have arms like pincushions.

However, there are clear differences between ‘people’ healthcare and PC therapeutics. Most of us can’t trust ourselves as computer users (or the companies that sell and maintain operating systems and applications) to maintain a sufficiently hygienic environment to eliminate the need to ‘vaccinate’. It’s not that we’re all equally vulnerable to every one of the tens or hundreds of thousands of malicious samples that are seen by AV labs every day. Rather, it’s the fact that a tailored assessment of which malware is a likely problem for each individual system, regardless of provenance, region, and the age of the malware, is just too difficult. It’s kind of like living at the North Pole and taking prophylactic measures in case of Dengue fever, trypanosomiasis and malaria.

Fortunately, new or variant diseases tend not to proliferate at the same rate that malware variants do, and vaccines are not the only way of improving health. In fact, lots of conditions are mitigated by better hygiene, a better standard of living, health-conscious lifestyles and all sorts of more-or-less generic factors. There’s probably a moral there: commonsense computing practices and vitamin supplements – I mean, patches and updates – do reduce exposure to malicious code. It’s worth remembering, though, that even if AV had never caught on, evolving OS and application technologies would probably have reduced our susceptibility to antique boot sector viruses, macro viruses, and DOS .EXE infectors. Is it really likely that they wouldn’t have been replaced by a whole load of alternative malicious technologies?

David Harley CITP FBCS CISSP
ESET Senior Research Fellow

Passwording: checklists versus heuristics

The trouble with lists of ‘Top Umpteen’ most-used passwords like Mark Burnett’s is that they don’t really teach the everyday user anything. (Yes, I’m another of those sad people like Rob Slade who believe that education and reducing security unawareness are actually worth doing.)

Since I’ve quoted Burnett’s top 500 and one or two other sources from time to time in blogs here and there, I’ve noticed that those articles tend to pick up a fair amount of media attention, and after the Yahoo! debacle I noticed several journalists producing lists of their own. But they’re missing the point, at least in part.

Not using (say) the top 25 over-used passwords will reduce the risk for accounts that are administered with a ‘three strikes and you’re blocked’ approach to password guessing, but where authentication is less strict, 25 may not be enough. Heck, 10,000 may not be enough. At any rate, if an end user is expected to check that they aren’t using a common password, 10,000 is a pretty big checklist, and it still doesn’t provide real protection against a determined dictionary attack. It’s the difference between static signature detection and heuristics: it might be useful to know that ‘password’ is a particularly bad choice because everyone uses it, but which of these approaches is more helpful? (There’s a rough code sketch of the second after the lists.)

(1)
Don’t use ‘a’
Don’t use ‘aa’
Don’t use ‘aaa’
…
Don’t use ‘aaaaaaaaaaaaaaaaaaaaaaa’
Don’t use ‘b’
Don’t use ‘bb’
…

(2) Don’t use any password consisting of a single character repeated N times
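
Purely by way of illustration, here’s a rough sketch in Python of the difference: a checklist lookup for approach (1) against a handful of generic rules for approach (2). The function names, the three-entry blacklist and the particular rules are all my own invention for the sake of the example (a real password checker would need a much longer list and far more rules), so treat it as a doodle rather than a recommendation.

    # Sketch only: comparing a blacklist lookup (approach 1) with generic
    # heuristic rules (approach 2). Names, entries and thresholds are illustrative.
    import re

    COMMON_PASSWORDS = {"password", "123456", "qwerty"}   # stand-in for a 'Top 500' list

    def fails_checklist(candidate):
        """Approach (1): reject only passwords that appear on a fixed list."""
        return candidate.lower() in COMMON_PASSWORDS

    def fails_heuristics(candidate):
        """Approach (2): reject whole classes of weak password, however many
        concrete examples each class contains."""
        if len(candidate) < 8:
            return True                                    # too short
        if len(set(candidate)) == 1:
            return True                                    # one character repeated N times
        if candidate.isdigit() or candidate.isalpha():
            return True                                    # only digits, or only letters
        if re.fullmatch(r"(..?)\1+", candidate):
            return True                                    # a one- or two-character pattern repeated
        return False

    if __name__ == "__main__":
        for pwd in ["password", "aaaaaaaaaaaaaaaaaaaaaaa", "a1a1a1a1", "pelican stapler 42 tango"]:
            print(pwd, "checklist:", fails_checklist(pwd), "heuristics:", fails_heuristics(pwd))

The point isn’t these particular rules, of course; it’s that one generic rule covers an open-ended family of bad passwords that no checklist, however long, can enumerate.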

See A Torrent of Abuse for a flippant attempt at approach (2) implemented through parody.
But then, any password is only as good as the service to which it gives access: the strongest password in the world won’t help if the provider is incapable of providing competent security (see Lessons in website security anti-patterns by Tesco). And I have some sympathy with the view that if you can find a decent password manager, it saves you a lot of thinking and reduces the temptation to re-use passwords and risk a cascade of breaches when one of your providers slips up.

David Harley