Early in this decade, well before I became assimilated by the anti-malware industry, I sat in my office in Birmingham (the one in the UK) and argued vehemently with another independent researcher (now deceased, sadly, but I won’t name him anyway).
He’d had an idea about the Code Red worm problem, which was then very high on the public radar: why not use the same infection mechanism to send out a worm looking for machines that hadn’t been patched against the IIS vulnerability Code Red exploited, and force vulnerable machines to patch?
As I remember, I argued that:
It would alienate him from other members of the research community: many of us have signed Codes of Conduct that would expressly forbid that approach
It would make assumptions about the target machines and their owners that he wasn’t entitled to make
It would involve unauthorized access and modification to other systems, which is specifically addressed in criminal legislation in many, many countries. (Including the United Kingdom, where we both lived, but I’ll come back to that.) So actively illegal (in some places) as well as ethically flaky.
It would add legitimacy to those malware authors who add minimal disinfection of other malware to their creations, probably in the forlorn hope of persuading a jury that their intentions were good, if they ever find themselves in the dock
If you make a coding error in a non-replicative utility that causes damage to a system, there’s usually some means of fixing it, and at worst the damage is localised. If you make a coding error in a utility that self-replicates, then a lot of people are going to have to live with it, and you won’t be able to do much about it. Unless, that is, you want to get into a cycle of: send out worm, send out second worm to fix bugs in first worm, send out third worm to fix bugs in second worm, send out… well, you get the idea. Too many potential bugs travelling by worm.
Well, he seemed convinced by my arguments: though “good” worms that took the same approach were discussed elsewhere and some examples of such code eventually made it into the outside world in some form, I have no reason to suppose that he had any connection with any of them.
Fast forward to 2009. The BBC’s Click program, to be screened on March 14th, “managed to acquire its own low-value botnet…after visiting chatrooms on the internet.” In order to demonstrate its own clevern… – sorry, in order to demonstrate “botnets’ collective power when in the hands of criminals” it set up “its” botnet to send pseudo-spam messages to a couple of email accounts they’d set up specifically for this purpose. Then the presenters used it to carry out a DDoS (Distributed Denial of Service) attack on a server belonging to a security company, with that company’s permission.
Then Click changed the Windows desktop wallpaper on the infected machines to let their owners or users know that their machines had been part of a botnet and advise them on steps to take to secure their machines, and “destroyed its botnet”. (I presume that means they removed or somehow deactivated the bot/agent malware on each infected machine.)
So what does this have to do with my deceased friend? Primarily, the Computer Misuse Act. As Graham Cluley has argued at some length and very convincingly on his blog today, the BBC’s actions may have put it at risk of contravening the UK’s primary legal defense against direct attacks on computer systems. The BBC tell us that they didn’t break the law because they had no criminal intent.
As Ken Bechtel once remarked, AV researchers would make poor lawyers because they’re incapable of passing the bar. Well, I’m not in a bar at the moment, but I’m not a lawyer either, so don’t take this as being in the least authoritative. But I have to wonder whether Click passed this in front of the Beeb’s legal department before they undertook this exercise.
As I understand it, criminal intent has been defined in English law as “the decision to bring about a prohibited consequence”. The 1990 Act defines the computer misuse offences as:
1. Unauthorised access to computer material.
2. Unauthorised access with intent to commit or facilitate commission of further offences.
3. Unauthorised modification of computer material.
The Act also defines an individual’s guilt according to whether he uses a computer to “secure access” to a program or data held in any computer, whether he’s authorised to secure that access, and whether he knows that his access is unauthorised. I don’t think there’s any doubt that the BBC were not “authorised” by their owners to access or modify programs or data on these machines.
In some jurisdictions, there’s a potential defence where no measures were taken to protect the victim’s machine, but an amendment to introduce that possibility into the 1990 Act was rejected.
Criminal liability is, apparently, normally measured according to whether (a) a criminal act was committed, and (b) the person who committed the act intended to commit a criminal act. So intent (mens rea, often freely translated as “guilty mind”) is important. But in this case, I suspect that if the incident went to court, the question might not be “did the defendant intend to break the law?” in the general sense of becoming a “real” botherder, but in the sense of committing an offence (actus reus, a criminal act) under the provisions of specific legislation. However benevolent its intentions, did the BBC know it was in breach of the Computer Misuse Act? Did they actually buy a botnet? (If so, they might want to bear in mind the case of virus author Christopher Pile, one of the few people actually convicted under the CMA, who was convicted of knowingly inciting others to cause unauthorised modification, as well as doing so himself.)
As far as I can tell from the BBC’s article, the program presenters were perfectly aware that they had no authorisation to access any of those 22,000 machines. As far as I can tell from the wording of the Act (but remember that I have no legal training whatsoever!), it doesn’t take into account the fact that it might be broken for benevolent purposes: either your access is authorised, or it isn’t.
On the plus side, little or no “real” harm was done. The BBC sent itself multiple email messages to two accounts specifically created to receive them. Perhaps Prevx’s reputation has suffered slightly from the revelation that the server against which they allowed the BBC to launch a DDoS attack became inaccessible so quickly: according to Click, it took just 60 machines to bring it to its knees. But perhaps it was configured to collapse easily, for a more effective demonstration.
The unprotected machines were presumably (at least temporarily) relieved of the malware which gave the BBC access in the first place, and hopefully some of their owners learned something from the experience. (I have to wonder whether and how the BBC were actually able to check that their action didn’t have any ill effects on all 22,000 of those systems…)
I don’t know if the BBC or the Click presenters are guilty of anything in legal terms: I do think they’ve failed to think things through properly…
David Harley BA CISSP FBCS CITP
Small Blue-Green World
Director of Malware Intelligence, ESET