Flame on!

I have been reading about the new Flame (aka Flamer, aka sKyWIper) “supervirus.”

[AAaaaarrrrrrggggghhhh!!!!!!!!  Sorry.  I will try and keep the screaming, in my "outside voice," to a minimum.]

From the Telegraph:

This “virus” [1] is “20 times more powerful” than any other!  [Why?  Because it has 20 times more code?  Because it is running on 20 times more computers?  (It isn't.  If you aren't a sysadmin in the Middle East you basically don't have to worry.)  Because the computers it is running on are 20 times more powerful?  This claim is pointless and ridiculous.]

[I had it right the first time.  The file that is being examined is 20 megabytes.  Sorry, I'm from the old days.  Anybody who needs 20 megs to build a piece of malware isn't a genius.  Tight code is *much* more impressive.  This is just sloppy.]

It “could only have been created by a state.”  [What have you got against those of us who live in provinces?]

“Flame can gather data files, remotely change settings on computers, turn on computer microphones to record conversations, take screen shots and copy instant messaging chats.”  [So?  We had RATs that could do that at least a decade ago.]

“… a Russian security firm that specialises in targeting malicious computer code … made the 20 megabyte virus available to other researchers yesterday claiming it did not fully understand its scope and said its code was 100 times the size of the most malicious software.”  [I rather doubt they made the claim that they didn't understand it.  It would take time to plow through 20 megs of code, so it makes sense to send it around the AV community.  But I still say these "size of code" and "most malicious" statements are useless, to say the least.]

It was “released five years ago and had infected machines in Iran, Israel, Sudan, Syria, Lebanon, Saudi Arabia and Egypt.”  [Five years?  Good grief!  This thing is a pretty wimpy virus!  (Or self-limiting in some way.)  Even in the days of BSIs and sneakernet you could spread something around the world in half a year at most.]

“If Flame went on undiscovered for five years, the only logical conclusion is that there are other operations ongoing that we don’t know about.”  [Yeah.  Like "not reproducing."]

“The file, which infects Microsoft Windows computers, has five encryption algorithms,”  [Gosh!  The best we could do before was a couple of dozen!]  “exotic data storage formats”  [Like "not plain text."]  “and the ability to steal documents, spy on computer users and more.”  [Yawn.]

“Components enable those behind it, who use a network of rapidly-shifting “command and control” servers to direct the virus …”  [Gee!  You mean like a botnet or something?]

 

Sorry.  Yes, I do know that this is supposed to be (and probably is) state-sponsored, and purposefully written to attack specific targets and evade detection.  I get it.  It will be (marginally) interesting to see what they pull out of the code over the next few years.  It’s even kind of impressive that someone built a RAT that went undetected for that long, even though it was specifically built to hide and move slowly.

But all this “supervirus” nonsense is giving me pains.

 

[1] First off, everybody is calling it a “virus.”  But many reports say they don’t know how it got where it was found.  Duh!  If it’s a virus, that’s kind of the first issue, isn’t it?


Ad-Aware

I’ve used Ad-Aware in the past, and had it installed on my machine.  Today it popped up and told me it was out of date.  So, at their suggestion, I updated to the free version, which is now, apparently, called Ad-Aware Free Antivirus+.  It provides for real-time scanning, Web browsing protection, download protection, email protection, and other functions.  Including “superfast” antivirus scanning.  I installed it.

And almost immediately removed it from the machine.

First off, my machine bogged down to an unusable state.  The keyboard and mouse froze frequently, and many programs (including Ad-Aware) were unresponsive for much of the time.  Web browsing became ludicrous.

There are some settings in the application.  For my purposes (as a malware researcher) they were inadequate.  There is an “ignore” list, but I was completely unable to get the program to “ignore” my malware zoo, even after repeated efforts.  (The interface for that function is also bizarrely complex.)  Granted, I’m not a typical user.  But the other options would be of little use to anyone: for the most part they are of the “on or off” level, and provide almost no granularity.  That makes them simple to use, but useless.

I’ve never used Ad-Aware much, but it’s disappointing to see yet another relatively decent tool “improved” into non-utility.


REVIEW: “Dark Market: CyberThieves, CyberCops, and You”, Misha Glenny

BKDRKMKT.RVW 20120201

“Dark Market: CyberThieves, CyberCops, and You”, Misha Glenny, 2011,
978-0-88784-239-9, C$29.95
%A   Misha Glenny
%C   Suite 801, 110 Spadina Ave, Toronto, ON Canada  M5V 2K4
%D   2011
%G   978-0-88784-239-9 0-88784-239-9
%I   House of Anansi Press Ltd.
%O   C$29.95 416-363-4343 fax 416-363-1017 www.anansi.ca
%O  http://www.amazon.com/exec/obidos/ASIN/0887842399/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/0887842399/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0887842399/robsladesin03-20
%O   Audience n Tech 1 Writing 2 (see revfaq.htm for explanation)
%P   296 p.
%T   “Dark Market: CyberThieves, CyberCops, and You”

There is no particular purpose stated for this book, other than the vague promise of the subtitle that this has something to do with bad guys and good guys in cyberspace.  In the prologue, Glenny admits that his “attempts to assess when an interviewee was lying, embellishing or fantasising and when an interviewee was earnestly telling the truth were only partially successful.”  Bear in mind that all good little blackhats know that, if you really want to get in, the easiest thing to attack is the person.  Social engineering (which is simply a fancy way of saying “lying”) is always the most effective tactic.

It’s hard to have confidence in the author’s assessment of security on the Internet when he knows so little of the technology.  A VPN (Virtual Private Network) is said to be a system whereby a group of computers share a single address.  That’s not a VPN (which is a system of network management, and possibly encryption): it’s a description of NAT (Network Address Translation).  True, a VPN can, and fairly often does, use NAT in its operations, but the carelessness is concerning.

This may seem to be pedantic, but it leads to other errors.  For example, Glenny asserts that running a VPN is very difficult, but that encryption is easy, since encryption software is available on the Internet.  While it is true that the software is available, that availability is only part of the battle.  As I keep pointing out to my students, for effective protection with encryption you need to agree on what key to use, and doing that negotiation is a non-trivial task.  Yes, there is asymmetric encryption, but that requires a public key infrastructure (PKI) which is an enormously difficult proposition to get right.  Of the two, I’d rather run a VPN any day.
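To make the point concrete, here is a minimal sketch (Python, using the third-party “cryptography” package; it is purely an illustration of my argument, not anything from the book).  The encryption call itself is one line; the hard part is everything the code does not show, namely getting the key to the other party without an eavesdropper getting it too.

    # Minimal sketch: symmetric encryption is the easy part.  Nothing here
    # solves the real problem, which is transporting "key" securely.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()               # both ends need this same key
    token = Fernet(key).encrypt(b"meet at the usual place")

    # ... somehow get both "token" AND "key" to the recipient ...
    print(Fernet(key).decrypt(token))         # b'meet at the usual place'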

It is, therefore, not particularly surprising that the author finds that the best way to describe the capabilities of one group of carders was to compare them to the fictional “hacking” crew from “The Girl with the Dragon Tattoo.”  The activities in the novel are not impossible, but the ability to perform them on demand is highly unlikely.

This lack of background colours his ability to ascertain what is possible or not (in the technical areas), and what is likely (out of what he has been told).  Sticking strictly with media reports and indictment documents, Glenny does a good job, and those parts of the book are interesting and enjoyable.  The author does let his taste for mystery get the better of him: even the straight reportage parts of the book are often confusing in terms of who did what, and who actually is what.

Like Dan Verton (cf BKHCKDRY.RVW) and Suelette Dreyfus (cf. BKNDRGND.RVW) before him, Glenny is trying to give us the “inside story” of the blackhat community.  He should have read Taylor’s “Hackers” (cf BKHAKERS.RVW) first, to get a better idea of the territory.  He does a somewhat better job than Dreyfus and Verton did, since he is wise enough to seek out law enforcement accounts (possibly after reading Stiennon’s “Surviving Cyberwar,” cf. BKSRCYWR.RVW).

Overall, this work is a fairly reasonable updating of Levy’s “Hackers” (cf. BKHACKRS.RVW) of almost three decades ago.  The rise of the financial motivation and the specialization of modern fraudulent blackhat activity are well presented.  There is something of a holdover in still portraying these crooks as evil genii, but, in the main, it is a decent picture of reality, although it provides nothing new.

copyright, Robert M. Slade   2012    BKDRKMKT.RVW 20120201


LTE Cloud Security

LTE.  Even the name is complex: Long-Term Evolution of Evolved Universal Terrestrial Radio Access Network

All LTE phones (UE, User Equipment) are running servers.  Multiple servers.  (And almost all are unsecured at the moment.)

Because of the proliferation of protocols (GSM, GPRS, CDMA, additional 3 and 4G, and now LTE), the overall complexity of the mobile/cell cloud is growing.

LTE itself is fairly complex.  The Protocol Reference Model contains at least the GERAN User Plane, UTRAN User Plane, and E-UTRAN User Plane (all with multiple components) as well as the control plane.  A simplified model of a connection request involves at least nine messages among six entities, with two more sitting on the sides.  The transport layer, SCTP, has a four-way handshake, rather than TCP’s three-way.  (Hence the need for all those servers.)  Basically, though, LTE is IP, plus a fairly complex set of additional protocols, as opposed to the old PSTN.  The old public telephone network was a walled garden which few understood.  Just about all the active blackhats today understand IP, and it’s open.  It’s protected by Diameter, but even the Diameter implementation has loopholes.  It has a tunnelling protocol, GTP (GPRS Tunnelling Protocol), but, like very many tunnelling protocols, GTP does not provide confidentiality or integrity protection.
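To make that last point concrete, here is a rough sketch (plain Python, written for this note rather than taken from any LTE stack; the field layout is my reading of the GTPv1-U spec) of the mandatory eight-byte GTP-U header.  Note what is not in it: no authentication tag, no integrity check, no encryption of the payload.  Any protection has to come from elsewhere, such as IPsec on the backhaul.

    # Rough sketch of a GTPv1-U "G-PDU" header (layout assumed from 3GPP TS
    # 29.281; treat the details as illustrative).  There is no field for a MAC
    # or any other integrity/confidentiality protection.
    import struct

    def gtpu_header(teid: int, payload: bytes) -> bytes:
        flags = 0x30      # version 1, protocol type GTP, no optional fields
        msg_type = 0xFF   # G-PDU: carries a user data packet
        return struct.pack("!BBHI", flags, msg_type, len(payload), teid)

    payload = b"user IP packet goes here"
    packet = gtpu_header(teid=0x1234, payload=payload) + payload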

Everybody wants the extra speed, functions, interconnection abilities, and apps.  But all that functionality means a much larger attack surface.  The total infrastructure involved in LTE is more complex.  Maybe nobody can know it all.  But they can know enough to start messing with it.  From simple DoS to DDoS, false billing, disclosure of data, malware, botnets of the UEs, spam, SMS trojans, even running down batteries: you name it.

As with VoIP before it, we are rolling our known data vulnerabilities, and known voice/telco/PBX vulnerabilities, into one big insecurity.


Michelangelo date

OK, having now had this conversation twice, I’ve gone back to the true source of all wisdom on all things viral, “Viruses Revealed.”  I got it off my shelf, of course, but some helpful vxer (who probably thought he was going to harm our sales) posted it on the net, and saved David and me the bother.  (Remember, this guy is a vxer, so that page may not be entirely safe.)

Michelangelo is covered between pages 357 and 361, which is slightly over halfway through the book.  However, since I guess he’s missed out the index and stuff, it turns out to be at about the 3/4 mark on the page he’s created.

Anyway, Michelangelo checks the date via Interrupt 1Ah.  Many people did not understand the difference between the MS-DOS clock and the system clock read by Interrupt 1Ah.  The MS-DOS DATE command did not always alter the system clock.  Network-connected machines often have “time server” functions so that the date is reset to conform to the network.  The year 1992 was a leap year, and many clocks did not deal with it properly.  Thus, for many computers, 6th March came on Thursday, not Friday.
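A quick way to see the effect (a present-day Python sketch, obviously not anything Michelangelo itself did): a system clock that never inserted 29th February 1992 was running a day ahead of the calendar, so it read “March 6th,” the trigger date, on the real-world Thursday, March 5th.

    # Illustrative only: a clock that skipped the 1992 leap day runs one day
    # ahead, so it reports the March 6 trigger date on the real March 5.
    from datetime import date, timedelta

    real_today = date(1992, 3, 5)                      # Thursday, March 5
    clock_missing_leap_day = real_today + timedelta(days=1)
    print(real_today.strftime("%A, %B %d"))            # Thursday, March 05
    print(clock_missing_leap_day.strftime("%B %d"))    # March 06 -- trigger date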


Michelangelo

Graham Cluley, of Sophos and Naked Security, posted some reminiscences of the Michelangelo virus.  It brought back some memories and he’s told the story well.

I hate to argue with Graham, but, first off, I have to note that the twentieth anniversary of Michelangelo is not tomorrow (March 6, 2012), but today, March 5.  That’s because 1992 was, as this year is, a leap year.  Yes, Michelangelo was timed to go off on March 6th every year, but, due to a shortcut in the code (and bugs in normal computer software), it neglected to factor in leap years.  Therefore, in 1992 many copies went off a day early, on March 5th.

March 5th, 1992, was a rather busy day for me.  I was attending a seminar, but kept getting called out to answer media enquiries.

And then there was the fact that, after all that work and information submitted to the media in advance, and creating copies of Michelangelo on a 3 1/2″ disk (it would normally only infect 5 1/4″s) so I could test it on a safe machine (and then having to recreate the disk when I accidentally triggered the virus), it wasn’t me who got my picture in the paper.  No, it was my baby brother, who a) didn’t believe in the virus, but b) finally, at literally the eleventh hour (11 pm on March 4th) decided to scan his own computer (with a scanner I had given to him), and, when he found he was infected, raised the alarm with his church, and scanned their computers as well.  (Must have been pretty close to midnight, and zero hour, by that time.)  That’s a nice human interest story so he got his picture in the paper.  (Not that I’m bitter, mind you.)

I don’t quite agree with Graham as to the infection rates.  I do know that, since this was the first time we (as the nascent antivirus community) managed to get the attention of the media in advance, there were a great many significant infections that were cleaned off in time, before the trigger date.  I recall notices of thousands of machines cleaned off in various institutions.  But, in a sense, we were victims of our own success.  Having got the word out in advance, by the trigger date most of the infections had been cleaned up.  So, yes, the media saw it as hype on our part.  And then there was the fact that a lot of people had no idea when they got hit.  I was told, by several people, “no, we didn’t get Michelangelo.  But, you know, it’s strange: our computer had a disk failure on that date …”  That was how Michelangelo appeared, when it triggered.

I note that one of the comments wished that we could find out who created the virus.  There is strong evidence that it was created in Taiwan.  And, in response to a posting that I did at the time, I received a message from someone, from Taiwan, who complained that it shouldn’t be called “Michelangelo,” since the real name was “Stoned 3.”  I’ve always felt that only the person who wrote that variant would have been that upset about the naming …


The “Man in the Browser” attack

Gizmodo reports:

New “Man in the Browser” Attack Bypasses Banks’ Two-Factor Authentication Systems

Except there is nothing new about this attack. OWASP documented it in 2007 and it was widely known that malware writers used it to bypass 2-factor authentication.

More from Gizmodo:

Since this attack has shown that the two-factor system is no longer a viable defense, the banking industry may have to adopt more advanced fraud-detection methods

Given that this has been going on for more than 5 years, it’s obvious that banks already have adopted more advanced fraud detection methods.

So why are they forcing you to carry around tokens and one-time passwords that make it awkward and uncomfortable to use your own money as you wish?

Because, with only a few exceptions, banks’ security guys are not interested in making your life comfortable.  The more you suffer, the more you think they are secure.

Maybe it’s time to ask for accountability? Which of their so-called security features is really for security, and which is for CYA or ‘make-the-regulator-happy’?


The malware problem looks better after the first cup of coffee

Since most of my income comes from a company on the West Coast, I’m used to people assuming that I should be working according to their time zone (PST) rather than my own (GMT). But apparently we’re all wrong.
According to Trustwave’s Global Security Report:

“The number of executables and viruses sent in the early morning hours increased, eventually hitting a maximum between 8 a.m. and 9 a.m. Eastern Standard Time before tapering off throughout the rest of the day. The spike is likely an attempt to catch people as they check emails at the beginning of the day.”

Did I miss something? Has everyone but me moved to the East Coast? I’m not even sure it matters when you receive a malicious executable, unless you don’t get around to opening it until after your security software has been updated to detect it. However, the report also tells us that:

“The time from compromise to detection in most environments is about six months…”

So if evading AV software is really the point, this seems to suggest that all those people who’ve moved to the East Coast are coping even less effectively with their email than I am.

Hold on, though. Maybe this tells us something about the blackhat’s time zone, rather than the victim’s? The report doesn’t seem to tell us anything about the geographical origin of the emails that Trustwave has tracked, but it does tell us that, apart from the 32.5% of attacks in general that are of unknown origin, the largest percentage (29.6%) come from the Russian Federation. Russia actually covers no less than nine time zones (until a couple of years ago, it was eleven), but perhaps we can assume for the sake of argument that a high percentage of those attackers are in time zones between CET and Moscow Standard (now UTC+4), which applies to most of European Russia. (That assumption allows us to include Romania and the Ukraine.) Perhaps, after a hard morning administering botnets, Eastern European gangsters are best able to find time to fire off a few malicious emails between the afternoon samovar break and early evening cocktails. Convinced? No, me neither.
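For what it’s worth, the conversion itself is easy to check (a throwaway Python sketch using the historical zone data, nothing from the report): 8 a.m. Eastern on a pre-DST day in March 2012 was 5 p.m. in Moscow, which was then on UTC+4, so the early-evening-cocktails theory at least gets the arithmetic right.

    # Illustrative conversion only (Python 3.9+, zoneinfo): 8 a.m. EST on
    # 5 March 2012, expressed in Moscow time, which was UTC+4 at that point.
    from datetime import datetime
    from zoneinfo import ZoneInfo

    est = datetime(2012, 3, 5, 8, 0, tzinfo=ZoneInfo("America/New_York"))
    print(est.astimezone(ZoneInfo("Europe/Moscow")))
    # 2012-03-05 17:00:00+04:00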

Actually, there are some interesting statistics in the report. If they’re reliable, some assumptions that we make about geographical distribution, for example, might bear re-examination. But I’d really have to suggest that journalists in search of something new to say about malware examine some of the report’s interpretations with a little more salt and scepticism. I suppose I should be grateful that no-one has noticed yet that according to the report, twice as many attacks originate in the Netherlands as do in China. Just think of the sub-editorial puns that could inspire…

David Harley CITP FBCS CISSP
Small Blue-Green World/AVIEN
ESET Senior Research Fellow


“Zero Day”, Mark Russinovich

BKZERDAY.RVW   20111109

“Zero Day”, Mark Russinovich, 2011, 978-0-312-61246-7, U$24.99/C$28.99
%A Mark Russinovich www.zerodaythebook.com markrussinovich@hotmail.com
%C   175 Fifth Ave., New York, NY   10010
%D   2011
%G   978-0-312-61246-7 0-312-61246-X
%I   St. Martin’s Press/Thomas Dunne Books
%O   U$24.99/C$28.99 212-674-5151 fax 800-288-2131
%O   josephrinaldi@stmartins.com christopherahearn@stmartins.com
%O  http://www.amazon.com/exec/obidos/ASIN/031261246X/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/031261246X/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/031261246X/robsladesin03-20
http://www.amazon.com/gp/mpd/permalink/m3CQBX46DOK0AK/ref=ent_fb_link
%O   Audience n Tech 1 Writing 1 (see revfaq.htm for explanation)
%P   328 p.
%T   “Zero Day”

Mark Russinovich has definitely made his name, in technical terms, with Winternals and Sysinternals.  There is no question that he knows the insides of computers.

What is less certain is whether he knows how to write about it within the strictures of a work of fiction.  The descriptions of digital forensics and computer operation in this work are just as confusing, to the technically knowledgeable, as those we regularly deride from technopeasant authors.  “[T]he first thing Jeff noted was that he couldn’t detect any data on the hard disk.”  (Emphasis in the book.)  Jeff then goes on to find some, and notes that there are “bits and pieces of the original operating system.”  Now there is a considerable difference between not finding any data, and having a damaged filesystem, and Russinovich knows this perfectly well.  Our man Jeff is a digital forensics hacker of the first water, and wouldn’t give a fig if he couldn’t see “the standard C: drive icon.”

Generally, you would think that the reason a technically competent person would write a novel about cyberwar would be in order to inject a little reality into things.  Well, reality seems to be in short supply in this book.

First of all, this is the classic geek daydream of being the ultimate ‘leet hacker in the world.  The Lone Hacker.  Hiyo SysInfo, away!  He has all the tools, and all the smarts, about all aspects of technology.  Sorry, just not possible any more.  This lone hacker image is unrealistic, and the more so because it is not necessary.  There are established groups in the malware community (among others), and these would be working together on a problem of this magnitude.  (Interestingly, these are generally informal groups, not the government/industry structures which the book both derides and relies upon.)

Next, all the female geeks (and there are a lot) are “hot.”  ‘Nuff said.

The “big, bad, new” virus is another staple of the fictional realms which does not exist in reality.  Viruses can be built to reproduce rapidly.  In that case, they get noticed quickly.  Or, they may be created to spread slowly and carefully, in which case they can take a while to be detected, but they also take a long time to get into place.

Anti-malware companies don’t necessarily rely on honeypots (which are usually there to collect information on actual intruders), but they do have bait machines that sit and wait to be infected (by worms) or emulate the activity of users who are willing to click on any link or open any file (for viruses).  Malware can be designed to fail to operate (or even delete itself) under certain conditions, and those conditions could include certain indications of a test environment.  However, the ability to actively avoid machines that might be collecting malware samples would be akin to a form of digital mental telepathy.

Rootkits, as described in the novel, are no different than the stealth technology that viruses have been using for decades.  There are always ways of detecting stealth, and rootkits, and, generally speaking, as soon as you suspect that one might be in operation you start to have ideas about how to find it.

A backup is a copy of data.  When it is restored, it is copied back onto the computer, but there is no need for the backup copy to be destroyed by that process.  Therefore, if a system-restored-from-backup crashes, nothing is lost but time.  You still have the backup, and can try again (this time with more care).  In fact, the first time you have any indication that the system might be corrupted enough to crash, you would probably try to recover the files with an alternate operating system.  (But, yes, I can see how that might not occur to someone who works for Microsoft.)  After all, the most important thing you’ve got on your system is the data, and the data can usually be read on any system, and with a wide variety of programs.  (Data files from a SQL Server database could be retrieved not only with other SQL programs, but with pretty much any relational database.)

Some aspects are realistic.  The precautions taken in communications, with throwaway email addresses and out-of-band messaging, are the type that would be used in those situations.  There is a lot of real technology described in the book.  (Although I was slightly bemused by the preference for CDs for data and file storage: that seems a bit quaint now that everyone is using USB drives.)  The need, in this type of work, for a level of focus that precludes all other distractions, and the boredom of trying step after step and possibility after possibility are real.  The neglect of security and the attendant false confidence that one is immune to attack are all too real.  But in a number of the technical areas the descriptions are careless enough to be completely misleading to those not intimately familiar with the technology and the information security field.  Which is just as bad as not knowing what you are talking about in the first place.

Other forms of technology should have had a little research.  Yes, flying an airliner across an ocean is boring.  That’s why the software designers behind the interface on said airliners have the computer keep asking the pilots to check things: keeps the pilots from zoning out.  I don’t know how quickly you can “reboot” the full control system in an airplane, but the last one I was on that did it took about fifteen minutes to even get the lights back on.  I doubt that would be fast enough to do (twice) in order to pull a plane out of a dive.  And if you are in a high-G curve to try and keep the plane out of the water, a sudden cessation of G-forces would mean that a) the plane had stalled (again) (very unlikely), or b) the wings had come off.  Neither of which would be a good thing.  (And, yes, the Spanair computer that was tracking technical problems at the time was infected with a virus, but, no, that had nothing to do with the crash.)

Russinovich’s writing is much the same as that of many mid-level thriller writers.  His plotting is OK, although the attempt to heighten tension, towards the end, by having “one darn thing after another” happen is a style that is overused, and isn’t very compelling in this instance.  On the down side, his characters are all pretty much the same, and through much of the book the narrative flow is extremely disjointed.

Overall, this is a reasonable, though unexceptional, thriller.  He was fortunate in being able to get Bill Gates and Howard Schmidt to write blurbs for it, but that still doesn’t make it any more realistic than the mass of cyberthrillers now coming on the market.

copyright, Robert M. Slade   2011     BKZERDAY.RVW   20111109


PC Support Sites: Scams and Credibility

Just as 419-ers seem to have been permanently renamed in some quarters as “the Lads from Lagos”, I wonder if we should refer to those irritating individuals who persist in ringing us to offer us help (for a not particularly small fee) with non-existent malware as the “Krooks from Kolkata” (or more recently, the Ne’erdowells from New Delhi). It would be a pity to slur an entire nation with the misdeeds of a few individuals, but the network of such scammers does seem to be expanding across the Indian subcontinent.

Be that as it may, I’ve recently been doing a little work (in association with Martijn Grooten of Virus Bulletin) on some of the ways that PC support sites that may be associated with cold-call scams are bolstering their own credibility by questionable means. Of course, legitimate businesses are also fond of Facebook likes, testimonials and so on, but we’ve found that some of these sites are not playing altogether nicely.

I’ve posted a fairly lengthy joint blog on the topic here: Facebook Likes and cold-call scams

David Harley CITP FBCS CISSP
ESET Senior Research Fellow


History of crimeware?

C’mon, Infoworld, give us a break.

“There are few viable options to combat crimeware’s success in undermining today’s technologies.”

How about “don’t do dangerous stuff”?

“Crimeware: Foundation of today’s telescreens”

I’m sorry, what has “1984” to do with the use of malware by criminal elements?

“Advancement #1: Form-grabbing for PCs running IE/Windows
Form grabbing, as its name implies, is the crimeware technique for capturing web form data within browsers.”

Can you say “login trojan”?  I knew you could.  They existed even before PCs did.

“Advancement #2: Anti-detection (also termed stealth)”

Oh, no!  Stealth!  Run!  We’re all gonna die!

Possibly the first piece of malware to use some form of stealth technology to hide itself from detection was a virus.  Perhaps you might have heard of it.  It was called BRAIN, and was written in 1986.

“Advancement #5: Source code availability/release
The source codes for Zeus and SpyEye, among the most sophisticated crimeware, were publicly released in 2010 and 2011, respectively.”

And the source code for Concept, which was, at the time, the most sophisticated macro virus (since it was the only macro virus), was released in 1995, respectively.  But wait!  The source code for the CHRISTMA exec was released in 1988!  Now how terrified are you!

“Crimeware in 2010 deployed the capability to disable anti-malware products”

And malware in 1991 deployed the capability to disable CPAV and MSAV.  With only fourteen bytes of code.  As a matter of fact, that fourteen-byte string came to be used as an antivirus signature for a while, since so many viruses included it.

“Advancement #7: Mobile device support (also termed man-in-the-mobile)”

We’ve got “man in the middle” and “meet in the middle.”  Nobody is using “man in the mobile” except you.

“Advancement #8: Anti-removal (also termed persistence)
As security solutions struggle to detect and remove crimeware from compromised PCs, malware authors are updating their code to permit it to re-emerge on PCs even after its supposed removal.”

I’ve got four words for you: “Robin Hood” and “Friar Tuck.”

The author “has served with the National Security Agency, the North Atlantic Treaty Organization, the U.S. Air Force, and two Federal think tanks.”

With friends like this, who needs enemies?


Nightmare on Malware Street

The Scientific American, no less, has published an article on malware.  Not that they don’t have every right to; it’s just that the article is short on fact or help, and long on rather wild conjecture.

The author does have some points to make, even if he makes them very, very badly.

We, both as security professionals and as a society, don’t take malware seriously enough.  The security literature on the subject is appalling.  It is hard to find books on malware, even harder to find good ones, and well nigh impossible to find decent information in general security books.  The problem has been steadily growing since it was a vague academic topic, and has been ignored for so long that, now that it is a real problem, even most security experts have only a tenuous grasp of it.

Almost all reports do sound like paranoid thrillers.  Promoting the idea of shadowy genius figures in dark corners manipulating us at will engenders a kind of overall depression: we can’t possibly fight it, so we might as well not even try.  This attitude is further exacerbated by the dearth of information: we can’t even know what’s going on, so how can we even try to fight it?

It is getting more and more difficult to find malware, mostly because we are constantly creating new places for it to hide.  In the name of “user friendliness,” we are building ever more complex systems, with ever more crevices for the pumas to hide in.

Yes, then he goes off into wild speculation and gets all “Reflections on Trusting Trust” on us.  Which kind of loses the valid points.


The “Immutable Laws” revisited

Once upon a time, somebody at Microsoft wrote an article on the “10 Immutable Laws of Security.”  (I can’t recall how long ago: it’s now listed as “Archived content.”  And I like the disclaimer that “No warranty is made as to technical accuracy.”)  Now these “laws” are all true, and they are helpful reminders.  But I’m not sure they deserve the iconic status they have achieved.

In terms of significance to security, you have to remember that security depends on situation.  As it is frequently put, one (security) size does not fit all.  Therefore, these laws (which lean heavily towards malware) may not be the most important for all users (or companies).

In terms of coverage, there is little or nothing about management, risk management, classification, continuity, secure development, architecture, telecom and networking, personnel, incidents, or a whole host of other topics.

As a quick recap, the laws are:

Law #1: If a bad guy can persuade you to run his program on your computer, it’s not your computer anymore

(Avoid malware.)

Law #2: If a bad guy can alter the operating system on your computer, it’s not your computer anymore

(Avoid malware, same as #1.)

Law #3: If a bad guy has unrestricted physical access to your computer, it’s not your computer anymore

(Quite true, and often ignored.  As I tell my students, I don’t care what technical protections you put on your systems, if I have physical access, I’ve got you.)

Law #4: If you allow a bad guy to upload programs to your website, it’s not your website any more

(Sort of a mix of access control and avoiding malware, same as #1.)

Law #5: Weak passwords trump strong security

(You’d think this relates to access control, like #4, but the more important point is that you need to view security holistically.  Security is like a bridge, not a road.  A road built halfway is still partly useful.  A bridge built halfway is a joke.  In security, any shortcoming can void the whole system.)

Law #6: A computer is only as secure as the administrator is trustworthy

(OK, there’s a little bit about people.  But it’s not just administrators.  Security is a people problem: never forget that.)

Law #7: Encrypted data is only as secure as the decryption key

(This is known as “Kerckhoffs’ Law.”  It’s been known for 130 years.  More significantly, it is a special case of the fact that security-by-obscurity [SBO] does not work.)

Law #8: An out of date virus scanner is only marginally better than no virus scanner at all

(I’m not sure that I’d even go along with “marginally.”  As a malware expert, I frequently run without a virus scanner: a lot of scanners [including MSE] impede my work.  But, if I were worried, I’d never rely on an out-of-date scanner, or one that I considered questionable in terms of accuracy [and there are lots of those around].)

Law #9: Absolute anonymity isn’t practical, in real life or on the Web

(True.  But risk management is a little more complex than that.)

Law #10: Technology is not a panacea

(Or, as (ISC)2 says, security transcends technology.  And, as #5 implies, management is the basic foundation of security, not any specific technology.)


Conflicting AVs

Well-behaved antivirus programs can safely work together in peace and harmony.

Unfortunately, relatively few AVs are well behaved.

On my new desktop, I’ve got Avast (came with the machine, has a free version, and is a pretty good product) and MSE (it’s free, and it’s pretty safe for most users, although, as a professional, some parts of it irk me).  I’ve set both to ignore the virus zoo, although they aren’t too good at taking that restriction to heart.

MSE quarantined a few samples before I got things tuned.  Of course, it doesn’t have any function to get stuff out of “quarantine.”  (As I say, as a professional this is irksome, but, considering the average user, I’d say this is a darn good thing.)

Today Avast gave me a warning of some dangerous files.  They were the ones MSE quarantined.

(In case anyone is interested, the quarantine seems to be in \ProgramData\Microsoft\Microsoft Antimalware\LocalCopy.)


Commoditizing Pay-Per-Install

We all know, I guess, about the professionalization of Internet crime and the diversification of the underground economy, but measuring it isn’t so easy.

ESET’s Aleksandr Matrosov and Eugene Rodionov have alluded to it in several papers and presentations with particular reference to TDSS, and we consolidated some of that material into an article (actually the first of a series of three articles on TDSS) that talks about the Dogma Millions and GangstaBucks affiliate models used in that context.

However, a paper on Measuring Pay-per-Install: The Commoditization of Malware Distribution, by Juan Caballero, Chris Grier, Christian Kreibich, and Vern Paxson, is based on a measurement study implemented by infiltrating four PPI service providers: LoaderAdv (of which GangstaBucks is one of the brands), GoldInstall, Virut, and Zlob. The authors assert that twelve of the top 20 malware families tracked by FireEye between April and June 2010 were using PPI services to buy infections.

Lots of other interesting data there, too. Hat tip to Aleks for bringing it to my attention.

David Harley CITP FBCS CISSP
ESET Senior Research Fellow


Dumb computer virus story recidivus

A few days ago, I noted a very silly news story about someone getting hit with a computer virus. Well, maybe the administrators don’t know all that much about malware, and maybe a smaller local paper reporter didn’t know all that much about it, either.

But now the story has been taken up by a company that makes security software. A “Microsoft Gold Certified Partner,” according to their Website. A company that makes antivirus software. And their story is just as silly, or even worse.

They say the local admin “stated that, the virus is classified as harmful and they are being quite alert.” I suppose that is all well and good, but then they immediately say that, “[a]ccording to him, the anti-virus firms were not able to recognize it …” So, AV firms don’t know what it is, but it is classified as harmful? Oh, but not to worry, “the good part is that it doesn’t seem to do extensive harm.” So, it’s harmful, but it’s not harmful. Well, of course it’s not harmful. It only “collects information and details, such as bank accounts and passwords …” No possible problem there. (Oh, and, even though nobody knows what it is, it’s Qakbot.)

Right, then. Would you be willing to buy AV software from a firm that can make this kind of mistake in a simple news story?
