data loss redux: thinking organically

Originally posted to Black Cats and Smoke and Mirrors

A little while ago I wrote about DLP, or Data Loss Prevention, and how the term is something of a red herring because, in reality, everything we do is about preventing data loss; ergo, the concept can’t be neatly productized. I still feel that way.

However, a few days after I posted it, I was contacted by a fellow named Pablo Osinaga, who has co-founded a startup called Kormox. He wanted me to see his company’s DLP solution, profiled by SC Magazine.

After reading SC’s blurb on the subject, I was quite intrigued, and arranged a web/phone meeting with Mr. Osinaga. For a little over an hour, we discussed Kormox and the concept of DLP.

As I said, DLP is a very difficult concept to productize. Everyone needs to prevent the loss or leakage of data, but everyone — every enterprise, every business, every organization, even every person — has different data and different types of data that they need to protect. Some organizations are concerned with mobile data; some are concerned with file shares; some are concerned with PII; and so on. No one vendor — no one product — has a fully comprehensive DLP solution because what DLP means is so dependent on each organization’s mission and needs, which not only differs among organizations but can be subject to change within an organization over time.

One of the first things that Mr. Osinaga mentioned, in presenting his company’s solution, was that enterprises have become more organic and less structured. I could not agree more. I have worked for many different security solutions vendors, and I hear over and over about the “special snowflake syndrome”, how every organization thinks they are “different” in some way, but they are really all the same. The trend, with every security vendor I’ve worked with, is to pigeonhole potential and existing customers, to basically tell them that they can’t have what they say they want, to fit them to the solution that the vendor has, in their infinite wisdom, envisioned and created. Yet as time goes on, and as Mr. Osinaga noted, enterprise structure is becoming more fluid, less definable, and less able to be pigeonholed.

Kormox’s solution starts with data classification. It’s so simple, and so logical. Of course you have to classify your data. But it’s not enough to say “I have to protect medical records” or “I have to protect credit card numbers”. In the DLP-productization game, vendors talk about what kind of data you want to protect, and then they talk about how they’re going to protect it, but they don’t really cover the territory of what, exactly, your data means to the people who are using it. That’s your problem.

And that’s how Kormox differentiates itself from the crowd: data classification is a major step, and it involves finding out not only what the data is (as opposed to merely what kind of data), but the flow of the data: where it is, who is using it, how they use it, where it’s going, where it’s been, and so on. All this is part of the classification, and it brings DLP back to the true “asset management” model of Information Security, where the asset is the data itself, not the (often fungible) hardware on which it rests.
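To make that concrete, here is a minimal sketch (in Python, and entirely my own illustration; the field names and structure are hypothetical, not Kormox’s actual data model) of what flow-aware classification might record for a single asset:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One classified data asset, plus the flow metadata described above."""
    name: str                                       # what the data actually is
    kind: str                                       # what kind of data (PII, PHI, cardholder...)
    locations: list = field(default_factory=list)   # where it rests
    consumers: list = field(default_factory=list)   # who uses it, and how
    flows: list = field(default_factory=list)       # where it's been / where it's going

# A hypothetical example record:
billing = DataAsset(
    name="outpatient billing records",
    kind="PHI",
    locations=["db01:/billing", r"\\fileshare\exports"],
    consumers=[("billing staff", "read/write"), ("auditors", "read-only")],
    flows=["db01 -> reporting server -> clearinghouse (nightly batch)"],
)
print(billing.kind, len(billing.consumers))
```

The point is not the code but the schema: a classification that stops at `kind` misses everything below it.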

After the data has been classified, the product allows the asset owners to implement controls in a similarly organic fashion. In essence, it takes the organization from the situation of “I know I need to protect our data” to “I know where and what all our data is, how it’s used, and what controls are on it” — something that no other DLP solution does.

I’m not laboring under an illusion that this product is perfect; no product could be. But I do think that Kormox is going in a necessary direction with their concept of data flow as a part of classification. At the moment it’s a bit clunky looking, but from what I saw in our meeting, it is definitely worth a look.

I’d like to note that I am in no way compensated for writing about Kormox; I’m writing about it because Mr. Osinaga contacted me as a result of my last DLP article, and so I thought it was only fair to talk about what I found out in our meeting.

Share hacked… via blind sql injection

More information here.
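For readers unfamiliar with the term: in a blind SQL injection the attacker never sees query output directly; the application only reveals, through some visible difference in its response, whether an injected true/false condition held, so data is extracted one yes/no question at a time. A toy sketch in Python (the target URL, parameter, and schema are all invented for illustration):

```python
import urllib.parse

def probe_url(condition):
    # In a real attack the tool would fetch this URL twice (condition true
    # vs. false) and compare the responses; here we only build the request.
    payload = f"1' AND ({condition}) -- "
    return "https://example.test/item?id=" + urllib.parse.quote(payload)

# Recovering a secret one character at a time, by comparisons on each char:
print(probe_url("SUBSTR((SELECT password FROM users WHERE name='admin'),1,1) > 'm'"))
```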


Dumb computer virus story recidivus

A few days ago, I noted a very silly news story about someone getting hit with a computer virus. Well, maybe the administrators don’t know all that much about malware, and maybe the reporter at a smaller local paper didn’t know all that much about it, either.

But now the story has been taken up by a company that makes security software. A “Microsoft Gold Certified Partner,” according to their Website. A company that makes antivirus software. And their story is just as silly, or even worse.

They say the local admin “stated that, the virus is classified as harmful and they are being quite alert.” I suppose that is all well and good, but then they immediately say that, “[a]ccording to him, the anti-virus firms were not able to recognize it …” So, AV firms don’t know what it is, but it is classified as harmful? Oh, but not to worry, “the good part is that it doesn’t seem to do extensive harm.” So, it’s harmful, but it’s not harmful. Well, of course it’s not harmful. It only “collects information and details, such as bank accounts and passwords …” No possible problem there. (Oh, and, even though nobody knows what it is, it’s Qakbot.)

Right, then. Would you be willing to buy AV software from a firm that can make these kinds of mistakes in a simple news story?


APT! Kill it! Kill it!! Kill it!!!

Argh! Another dozen APT stories in the last couple of days! Will no one rid me of this meddlesome buzzword?

(No, I don’t expect an answer to that question. Yes, I know it’s a media meme. I just wish security professionals, who should know better, would stop using it.)

Quick tip: in order to identify useless stories that use the term, check to see if the author, at the beginning, clearly defines what an APT is. Those that do not are garbage. (That would be all of them.) Is it advanced? No, APTs use malware we already know about: viruses, trojans, remote access trojans (RATs), keyloggers, that sort of thing. APTs use social engineering (aka “lying”) in order to get users to install malware. (That’s hardly new or advanced.) Is it persistent? Well, in many cases that’s true: a lot of these attacks go on over time, but that’s not particularly new: even Cliff Stoll’s “wily hacker” kept it up for years. (Don’t know who Cliff Stoll is? Kids these days. Go away and do some actual research and learn about the field before you start trying to tell me that APT is an actual thing.) Is it a threat? Yes, but so are a lot of things.

The latest article I’ve seen, this morning, says that an “APT occurrence is a low-frequency high-impact incident.” Oh, good. An APT is a Black Swan. As Lady St. Hillier would say, “Good. Very specific.”


Codegate 2011

Korean is a tricky language. It is probably the easiest language on the planet to read and write in, especially for geeks.

It takes literally hours to learn: if you have any background in breaking codes as a hobby, you will be able to learn to read and write Korean fully within the day. Now you can read signs, read most of the newspaper, and decipher the airplane safety card on Korean Airlines.
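That claim sounds like hyperbole, but Hangul really is algorithmic: every precomposed syllable in Unicode is built by the formula 0xAC00 + (initial × 588) + (medial × 28) + final, indexing three small jamo tables. A Python sketch that decodes any syllable back into its letters (romanized per the standard Revised Romanization tables):

```python
# The three jamo tables, in Unicode order, as Revised-Romanization strings.
INITIALS = ["g","kk","n","d","tt","r","m","b","pp","s","ss","","j","jj","ch","k","t","p","h"]
MEDIALS  = ["a","ae","ya","yae","eo","e","yeo","ye","o","wa","wae","oe","yo","u","wo","we","wi","yu","eu","ui","i"]
FINALS   = ["","g","kk","gs","n","nj","nh","d","l","lg","lm","lb","ls","lt","lp","lh","m","b","bs","s","ss","ng","j","ch","k","t","p","h"]

def decompose(syllable):
    """Split one precomposed Hangul syllable into (initial, medial, final)."""
    code = ord(syllable) - 0xAC00
    assert 0 <= code < 11172, "not a precomposed Hangul syllable"
    return (INITIALS[code // 588], MEDIALS[(code % 588) // 28], FINALS[code % 28])

print(decompose("한"))  # -> ('h', 'a', 'n'): 한 is "han", as in 한글 (hangeul)
```

Reading, in other words, is pure table lookup. Which is exactly why it is the *only* easy part.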

But reading is not understanding, and this is where the trap springs. While its writing system is possibly the easiest of any language, the vocabulary and grammar are among the hardest in existence. Forget hash functions: Korean sentences with identical meaning can look totally different just because you’re speaking to your father instead of your son. Ask a few native Koreans how to say “the apple is red”; I have three different answers so far, with no resemblance whatsoever to one another. The real code here is the semantics. It’s like going from a simple XOR cipher to a book cipher. What a clever trick.

But by the time I hit the brick wall of honorifics, Subject-Object-Verb order, and impossible pronunciation, I was already in too deep to stop. Plus, I never let security by obscurity stop me. Though in this case, I have to mention, they’ve perfected their obscurity to impressive levels.

So I was very excited when I was asked to speak at Codegate 2011 in Seoul. It looks like a really fun conference. If you are in Seoul or the area, I recommend it.
I will be speaking on April 5th, and don’t expect too much: the Korean part of my lecture won’t go beyond Annyeong haseyo and je ireum eun Abiram imnida. And even that will be with pronunciation so bad they might have to subtitle that part.

In any case, if you are in the conference, come say hello and test my Korean. Just don’t be offended if I get my honorifics completely wrong.

Update: The correct date is April 5th and not as I originally wrote.



Oh, look!  I got blog post 1500!

(Sorry.  It’s the only time I’ve gotten a good, round number when posting anything  :-)


The decline of credit cards

At the BC ISMS User Group meeting last week we were concentrating on the relationship between the ISO 27000 family of standards, and the PCI-DSS (Payment Card Industry Data Security Standards, usually just known as PCI).  PCI-DSS is of growing concern for pretty much anyone who does online retail commerce (and, come to that, anyone who does any kind of commerce that involves any use of a credit card).

It kind of crystallized some ideas that I’ve been mulling over recently.

Over the past year or so, I’ve been examining some situations for small charitable organizations, as well as some small businesses.  Many would like to sell subscriptions, raffle tickets, accept donations, or sell small, specialty items over the net.  However, I’ve had to consistently advise them that they do not want to get involved with PCI: it’s way too much work for a small company.  At the same time, most small Web hosting providers don’t want to get involved in that, either.

The unintended consequence of PCI is that small entities simply cannot afford to be involved with credit cards anymore.  (It’s kind of too bad that, a decade ago, MasterCard and Visa got within about a month of releasing SET [Secure Electronic Transactions] and then quit.  It probably would have been perfect for this situation.)

Somewhat ironically, PCI means a big boost in business for PayPal.  It’s fairly easy to get a PayPal account, and then PayPal can accept credit cards (and handle the PCI compliance), and then the small retailer can get paid through a PayPal account.  So far PayPal has not created anything like PCI for its users (which is, again, rather ironic given the much wilder environment in which it operates, and the enormous effort phishing spammers make in trying to access PayPal accounts.)  (The PayPal Website is long on assurances in terms of how PayPal secures information, and very short on details.)

This is not to say that credit cards are dead.  After all, most PayPal purchases will actually be made with credit cards: it’s just that PayPal will handle the actual credit card transaction.  Even radical new technologies for mobile payments tend to be nothing more than credit card chips embedded in something else.

These musings, though, did give a bit more urgency to an article on F-commerce: the fact that a lot of commercial and retail activity is starting to happen on Facebook.  Online retail transactions aren’t new.  They aren’t even new in terms of social networks or a type of currency created within an online system.  Online game systems have been dealing with the issue for some time, and blackhats have been stealing such credits and even using them to launder money for a number of years now.  However, the sheer size of Facebook (third largest “national population” in the world), and the fact that that entire population is (by selection) quite affluent means that the new Facebook credit currency may very quickly balloon to an enormous size in relation to other currencies.  (We will leave aside, for the moment, the fact that I personally consider Facebook to be tremendously divisive to the Internet as a whole.  And that Facebook does not have the best record in terms of security and privacy.)  Creation of wealth, ex nihilo, on a very, very large scale.  What are the implications of that?


Dumb computer virus story

I really don’t know who is more ignorant here: the city authorities “protecting” the computers, or the journalist writing up the story.

If you know anything about the technology, this is howlingly funny (or, it would be, if it weren’t so sadly representative …)

“Officials at Nanaimo city hall are desperately working to find out how a virus attacked their computer system Wednesday afternoon.”


“Per Kristensen, director of information and technology, said he was shocked by how quickly the virus infected the system.

“The first time anyone anywhere in the world noticed this new virus was on [March 15] and then it hit us on the 16th,” he said Thursday.”

(How many new viruses are “created” every day, these days?)

“People can be assured that all their information is secure. Protection of their personal information is a priority. The city’s system won’t be turned on until we are confident we have this solved,” he said.

(Ummm, how are you going to clean up the computers if they are turned off?)

“Kristensen said the virus is so new, it has no signature that security devices can recognize.”

(Let me guess: a certain antivirus in a yellow box couldn’t recognize it, so you figure that nobody can, right?)

“We’ve got multiple levels of protection and firewalls, but nothing recognizes this.”

(Yeah, firewalls do a GREAT job against viruses …)

“We may have to shut down throughout the weekend and we won’t put the system back up until we know we have this under control. And right now, we don’t know how long that will be.”

(Based on this, I’m not holding my breath …)


data loss prevention: a red herring

Originally posted to Black Cats and Smoke and Mirrors.

A few years ago, the acronym DLP, which stands for Data Loss (or Leakage) Prevention, hit the security market. Every enterprise was crazy for it, every vendor touted it, and everybody had a different idea of what, exactly, it was.

Half a decade has passed and we still don’t know. The problem is that DLP is a misleading term, because preventing data loss is the key reason for information security in the first place. If you think about it, every component of your enterprise’s security solution, from policy to compliance reports, is in place to prevent your data from being lost or leaked.

There is no panacea for the problem of potential data loss, no matter what your vendor of choice might tell you. The smartest vendors don’t even try to claim such a thing. Because nobody can agree on what, exactly, DLP is, nobody has a complete solution. However, the industry in general does agree on a few key concepts:

- A product that can recognize credit card / SSN / other identifying data both at rest and in motion and (better) control the transmission of such data is a necessary part of your security solution, if you deal with such data

- A product that can tag certain types of files and control the transmission (in whole or in part, encrypted or not) of those files is key

- A product that can recognize certain types of removable storage device being attached and/or written to and IMMEDIATELY control this activity is important

- If your business employs “mobile warriors” and you do not implement some sort of whole disk and file encryption, your data is at risk

- If your employees use mobile phones for business purposes then you should have some control over what type of data they can access on those devices
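To give a feel for the first bullet above: the data-at-rest half of that detection is, at bottom, pattern matching plus a checksum to weed out false positives. A minimal sketch in Python (the regex and length limits are my own illustrative choices, not any vendor’s actual engine):

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces/dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_ok(digits: str) -> bool:
    """Validate a digit string with the Luhn checksum used by payment cards."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    """Return substrings that look like card numbers AND pass the Luhn check."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(m.group())
    return hits

# The second number is 16 digits but fails Luhn, so it is (correctly) ignored:
print(find_card_numbers("order 4111-1111-1111-1111 ref 1234567890123456"))
# -> ['4111-1111-1111-1111']
```

Real products layer context, proximity keywords, and in-motion inspection on top of this, but the core idea is no more exotic than the above.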

These are just a few of the concepts behind DLP, and those concepts keep changing as new risks are discovered. Adding to the complexity is the fact that some issues are going to apply to some enterprises, whereas other issues will be unimportant. For instance, the DoD never, ever transmits SSNs in the clear. However, many private sector businesses transmit SSNs in the clear as a matter of course (although they shouldn’t). Ergo, when the DoD talks about data leakage, they are most often concerned with SSNs and other types of personally identifying information (referred to as PII), but protecting credit card information is not so much a concern. The private sector, on the other hand, is much more occupied with protecting credit card information but not so much (say) SSNs, driver’s license numbers, and other types of PII. So the part of the DLP solution that identifies certain types of data at rest and in motion needs to be flexible and customizable to be useful for the environment it’s being used in.

Whole disk and file encryption is probably the easiest piece of the DLP pie to choose and to implement. In fact, you can get your whole disk encryption from one source and your file encryption from another, and as long as they don’t fight with each other you’re fine (as long as you remember that nothing is 100% foolproof, that is). But after that, it gets more complex, and vendors only make it worse when they try to convince you that their solution does everything you need for DLP. Well, no, it doesn’t.

A smart executive will realize that DLP is not a single concept, and certainly not a single product; rather, it’s a method. The first thing to do is to revisit your security policy. If you do not have a section detailing the specific types of data that you need to protect from loss/leakage and some (probably non-vendor-specific) methods for doing so, then it is time for a rewrite. [Note: you should be revisiting and perhaps editing your security policy on at least a quarterly basis anyway.] Sit down with your fellow executives and brainstorm your data pitfalls, and then do the courtship dance with vendors who claim to have solutions to these pitfalls. Again, do not fall into the trap of the One True Solution. It doesn’t exist.

As you work on your DLP method, you will see that many of your current solutions and/or their vendors already work towards securing your data; of course they do, because that, as I said, is the entire point of infosec. For instance, your vulnerability scanner already scans for removable storage devices (both currently inserted and having been inserted at any time). That’s great, but it’s asynchronous. Does the vendor have a real-time solution (agent or sniffer based) that does the same thing? You already have auditing in place to determine if a file’s been touched. How about if it’s been excerpted and transmitted without being edited? And so on. If your current vendors have add-ons that can fit your newly perceived needs, then that can perhaps save you money and implementation time.

One big problem with potential data leakage is that many businesses, to save money, don’t issue their employees mobile phones but rather reimburse employees who use their existing phones for business purposes. However, in many cases, “business purposes” doesn’t mean just calls; an employee with a smartphone is probably also downloading and responding to email, and possibly also VPN’ing into the network and accessing corporate resources. If you’re not virus scanning and otherwise protecting that phone against theft and other compromise, then all the time and expense that you’ve gone to in implementing disk and file encryption on the employee’s laptop is pretty much useless.

All of this is a lot to think about. The good news, especially if you are a smaller business, is that you don’t have to think about it and implement it all at once. This is why you should always be spiralling back to your security policy in order to revisit your business’s current needs. Each time, you can tighten up your data security a little more.


RSA APT thoughts

By now people are starting to hear that RSA has been hit with an attack.  Reports are vague at best, and we have very little idea how this may affect RSA customers and security in general.  But I’d like to opine about a few points.

First, we, in the profession of information security, are still not taking malware seriously enough.  Oh, sure, most people are running antivirus software.  But we don’t really study and understand the topic.  Malware gets extremely short shrift in any general security textbook.  Sometimes it isn’t mentioned at all.  Sometimes the descriptions are still based on those long-ago days when boot-sector infectors ruled the earth.  (Interesting to see that they are coming back again, in the form of Autorun and Autoplay, but that’s simply another aspect of Slade’s Law of Computer History.)  Malware has gradually grown from an almost academic issue to a pervasive presence in the computing environment.  It’s the boiling frog situation: the rise in threat has been gradual enough that we haven’t noticed it.

Second, we aren’t taking security awareness seriously enough.  These types of attacks rely primarily on social engineering and malware.  Security awareness works marvelously well as a protection against both.  RSA is a security corporation: they’ve got all kinds of smart people who know about security.  But they’ve also got lots of admin and marketing people who haven’t been given basic training in the security front lines.  For a number of years I have been promoting the idea that corporations should be providing security awareness training.  Not just to their employees, but to the general public.  For free.  I propose that this is not just a gesture of goodwill or advertising for the companies, but that it actually helps to improve their overall security.  In the modern computing (and interconnected communications) environment, making sure somebody else knows more about security means that there is less chance that you are going to be hit.

(Third, I really hate that “APT” term.  “Advanced Persistent Threat” is pretty meaningless, and actually hides what is going on.  Yes, I know that it is embarrassing to have to admit that you have been tricked by social engineering [which is, itself, only a fancy word for "lying"] and tricked badly enough that somebody actually got you to run a virus or trojan on yourself.  It’s so last millennium.  But it’s the truth, and dressing it up in a stylish new term doesn’t make it any less so.)


REVIEW: “Making, Breaking Codes: An Introduction to Cryptology”, Paul Garrett

BKMABRCO.RVW   20101128

“Making, Breaking Codes: An Introduction to Cryptology”, Paul Garrett, 2001, 978-0-13-030369-1
%A   Paul Garrett
%C   One Lake St., Upper Saddle River, NJ   07458
%D   2001
%G   978-0-13-030369-1 0-13-030369-0
%I   Prentice Hall
%O   800-576-3800 416-293-3621 +1-201-236-7139 fax: +1-201-236-7131
%O   Audience a- Tech 2 Writing 1 (see revfaq.htm for explanation)
%P   523 p.
%T   “Making, Breaking Codes: An Introduction to Cryptology”

The preface states that this book is intended to address modern ideas in cryptology, with an emphasis on the mathematics involved, particularly number theory.  It is seen as a text for a two term course, possibly in cryptology, or possibly in number theory itself.  There is a brief introduction, listing terms related to cryptology and some aspects of computing.

Chapter one describes simple substitution ciphers and the one time pad.  The relevance of the sections dealing with mathematics to the overall process is not fully explained (and neither is the affine cipher).  Probability is introduced in chapter two, and there is some discussion of the statistics of the English language, and letter frequency attacks on simple ciphers.  This simple frequency attack is extended to substitution ciphers with permuted (or scrambled, but still monoalphabetic) alphabets, in chapter three.  There is also mention of basic character permutation ciphers and multiple anagramming attacks.  Chapter four looks at polyalphabetic ciphers and attacks on expected patterns.  More probability theory is added in chapter five.
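The letter-frequency attack those early chapters describe can be sketched in a few lines. Here it is in Python against a Caesar (single-shift) cipher, the simplest of the substitution ciphers: score all 26 possible shifts against English letter frequencies and keep the best.

```python
from collections import Counter

# English letters from most to least frequent.
FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

def decrypt(text, shift):
    """Shift alphabetic characters back by `shift`, leaving the rest alone."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

def crack_caesar(ciphertext):
    """Try all 26 shifts; the candidate that looks most like English wins."""
    def englishness(text):
        counts = Counter(c for c in text.lower() if c.isalpha())
        return sum(counts[ch] * (26 - rank) for rank, ch in enumerate(FREQ_ORDER))
    best = max(range(26), key=lambda k: englishness(decrypt(ciphertext, k)))
    return best, decrypt(ciphertext, best)

# decrypt() with a negative shift encrypts; recover the shift and plaintext.
cipher = decrypt("the enemy will attack at dawn from the east side of the river near the old stone bridge", -3)
print(crack_caesar(cipher))
```

The polyalphabetic attack of chapter four is the same idea run per-column, once the key length has been guessed.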

Chapter six turns to modern symmetric ciphers, providing details of the DES (Data Encryption Standard) as examples of the principles of confusion, diffusion, and avalanche.  Divisibility is important not only to the RSA (Rivest-Shamir-Adleman) algorithm, but, in modular arithmetic, to modern cryptography as a whole, and so gets extensive treatment in chapter seven.  The Hill cipher is used, in chapter eight, to demonstrate that simple diffusion is not sufficient protection.  Complexity theory is examined, in chapter nine, with a view to determining the work factor (and sometimes practicality) of a given cryptographic algorithm.

Chapter ten turns to public-key, or asymmetric, algorithms, detailing aspects of the RSA and Diffie-Hellman algorithms, along with a number of others.  Prime numbers (important to RSA) and their characteristics are examined in chapter eleven, and roots in twelve and thirteen.  Multiplicativity, and its weak form, are addressed in fourteen, and quadratic reciprocity (for quick primality estimates) in fifteen.  Chapter sixteen notes pseudoprimes, which can complicate the search for keys.  Basic group theory, covered in chapter seventeen, relates to Diffie-Hellman and a variety of other algorithms.  Diffie-Hellman, along with some abstract algorithms, is reviewed in chapter eighteen.  Rings and fields (in groups) are noted in chapter nineteen, and cyclotomic polynomials in twenty.

Chapter twenty-one examines a few pseudo-random number generation algorithms.  More group theory is presented in twenty-two.  Chapter twenty-three looks at proofs of pseudoprimality.  Factorization attacks are addressed in basic (chapter twenty-four), and more sophisticated forms (twenty-five).  Finite fields are addressed in chapter twenty-six and discrete logarithms in twenty-seven.  Some aspects of elliptic curves are reviewed in chapter twenty-eight.  More material on finite fields is presented in chapter twenty-nine.

Despite the title, this is a math textbook.  You will need to have, at the very least, a solid introduction to number theory to get the benefit from it.  Even at that, the application, and implications, of the mathematical material to cryptology is difficult to follow.  The organization probably also works best in a math course: it certainly seems to skip around in a disjointed manner when trying to follow the crypto thread, and apply the math to it.  For all its faults, “Applied Cryptography” (cf. BKAPCRYP.RVW) is still far superior in explaining what the math actually does.

copyright, Robert M. Slade   2010     BKMABRCO.RVW   20101128


Calm acceptance vs self-help

As an emergency services volunteer, I’ve been looking for stories about how the Japanese have been handling displacements, evacuations, and those left homeless following the quake and tsunami.  Oddly, despite having all kinds of video and pictures coming from various areas of Japan, these stories seem to be missing (possibly pushed out of the news-stream by boats running over cars, and a steaming reactor).

Yesterday I started to see a few, some noting that the Japanese culture of calm acceptance was contributing to orderly lines and a lack of panic.  (And then saw some reports that a lack of action by the government was starting to wear on the calm acceptance.  Six days after the quake, food and water aren’t getting through to areas which are only as far apart as Ottawa is from Toronto, or Boston from Baltimore.)

So I was intrigued to find, this morning, this report of someone running counter to his own culture.

(And, once again, I’ll take the opportunity to promote the idea that all security professionals should consider getting training as emergency services volunteers.  You’ll know what to do in or for an emergency, you’ll be a help instead of a drain, and, in the meantime, you can probably apply it to BCP, and get CPE credits for your training.)


DD-WRT Fuzzing and Monitoring

We recently got a request from a vendor that has taken it upon itself to add some interesting stuff to the DD-WRT router, asking us to provide some form of monitoring that would integrate with our beSTORM fuzzer.

The regular monitoring inherently built into beSTORM, which includes ARP, ICMP Echo, UDP/TCP ping and remote debugging, wasn’t quite up to it: ARP, ICMP Echo and UDP/TCP ping could not tell the vendor when the router was experiencing heavy load due to our test, which was one of the criteria he had defined inside beSTORM as being an exception (a vulnerability).

Our typical backup option is a gdb-style remote debugger, but DD-WRT’s debugger doesn’t easily provide that information, so we built a simple monitoring agent that connects to the DD-WRT web interface and queries the load value of the router. When the load climbs above a configured threshold, an exception is reported back to beSTORM.

This neat little trick allowed the vendor to identify several strange packets that can cause his modified router to become unresponsive (taking more than a few seconds to respond), as well as to detect when the router was responsive but the load on it was unusually high.

The script is now bundled with the full version of beSTORM; feel free to get the latest version and look into it. A trial is always available here. It’s also available below:

# Copyright Beyond Security 2011
# beSTORM support:

use strict;
use Getopt::Long;
use LWP::UserAgent;
use IO::Socket;

my @children;
my $beSTORM_port = “6969″;
my $beSTORM_ip = “″;
my $router_ip = “″;
my $router_username = “root”;
my $router_password = “admin”;

my $pingTimeout = 1; #ping every x seconds
my $bContinue = 1; #Stay in loop.

#Install signal handlers
$SIG{ABRT} = \&signaled;
$SIG{INT} = \&signaled;
$SIG{HUP} = \&signaled;

my $options = { };
‘host=s’ => \$options->{‘bH’},
‘port=i’ => \$options->{‘bP’},
“router=s” => \$options->{‘rH’},
“username=s” => \$options->{‘rU’},
“password=s” => \$options->{‘rP’},

#Sanity check
my $bPrintUsage = 0;
if (! $options->{‘bH’} ) {
$bPrintUsage = 1;
print “No host value has been provided\n”;
if (! $options->{‘rH’} ) {
$bPrintUsage = 1;
print “No router value has been provided\n”;

if ($bPrintUsage) {
exit 0;

$beSTORM_ip = $options->{‘bH’};
$beSTORM_port = $options->{‘bP’};
if (not defined $beSTORM_port) {
$beSTORM_port = 6969;

$router_ip = $options->{‘rH’};
$router_username = $options->{‘rU’};
if (not defined $router_username) {
$router_username = “root”;

$router_password = $options->{‘rP’};
if (not defined $router_password) {
$router_password = “admin”;

while ($bContinue) {
my $ua = LWP::UserAgent->new;

my $URL = “http://$router_username:$router_password\@$router_ip” . “/”;
print “Connecting to: $URL\n”;
my $response = $ua->get($URL);

my $content = “”;
if ($response->is_success) {
$content = $response->decoded_content; # or whatever
else {
send_notification($beSTORM_ip, $beSTORM_port, “Failed to receive response from router’s web server: “.$response->status_line);

my $load = “”;
if($content =~ /, load average: ([^}]+)\}/gs) {
$load = $1;
} else {
print “Failed to find load average inside content: [$content]\n”;
send_notification($beSTORM_ip, $beSTORM_port, “Failed to locate load average value”);

print “$load\n”;

sub send_notification {
my $Host = shift;
my $Port = shift;
my $Exception = shift;
print STDERR “\n\nSending to $Host:$Port this exception: [$Exception]\n\n\n”;

my $sock = IO::Socket::INET->new(
Proto => ‘udp’,
PeerPort => $Port,
PeerAddr => $Host,
) or die “Could not create socket: $!\n”;

print STDERR “Exception: [$Exception]\n”;
$sock->send($Exception) or die “Send error: $!\n”;

$bContinue = 0;

sub usage
print “\nUsage: $0 –host [--port ] –router \n\n”;
print “\t–host beSTORM client host\n”;
print “\t–port beSTORM client UDP port for exception information (default 6969)\n”;
print “\t–router the Router being monitored\n”;
print “\t–username used by the router to authenticate (root)\n”;
print “\t–password used by the router to authenticate (admin)\n”;

# Ping the beSTORM host to signal that we are alive, every $timeout seconds
sub start_notifier {
    my $timeout = shift;
    if (! defined $beSTORM_ip) { return; }

    my $pid = fork();
    if ($pid < 0) {
        die "Could not fork\n";
    }
    if ($pid > 0) {
        push @children, $pid;
    }
    if ($pid == 0) {
        print "Starting beSTORM notifier. Will send heartbeat to $beSTORM_ip every $timeout second(s)\n";
        while ($bContinue) {
            my $sock = IO::Socket::INET->new(
                Proto    => 'udp',
                PeerAddr => $beSTORM_ip,
                PeerPort => '6970',
            ) or die "socket: $@";
            print $sock "NOOP";
            close $sock;
            sleep $timeout; # wait out the heartbeat interval
        }
        print "beSTORM notifier stopped\n";
        exit 0;
    }
}

sub stop_notifier {
    my $sig = shift;
    print "Shutting down beSTORM notifier (it may take up to 5 seconds to stop)\n";
    if (@children) {
        print "Signaling: (@children) with sig $sig\n";
        kill $sig, @children;
    }
}

sub signaled {
    my $sig = shift;
    print "Received signal $sig. Shutting down\n";
    $bContinue = 0;
}

#The end


Japan Disaster Commentary and Resources

It probably hasn’t escaped your notice that there’s a lot of malware/SEO/scamming whenever a major disaster occurs. A few days ago I started to put together a list of commentary (some of it my own) and resources relating to the Japanese earthquake and tsunami, in anticipation of that sort of activity.

Originally, I was using several of my usual blog venues, but decided eventually to focus on one site. As ESET had no monopoly on useful information, I wanted to use a vendor-agnostic site. Actually, I could have used this one, but for better or worse, I decided to use the AVIEN blog, since I’ve pretty much taken over the care and feeding of that organization. The blog in question is Japan Disaster: Commentary & Resources.

It’s certainly not all-inclusive, but it’s the largest resource of its type that I’m aware of. Eventually it will be reorganized to focus again on the material that’s directly related to security, but right now, given the impact of the crisis, I’m posting pretty much anything that strikes me as useful, even if its relevance to security is a bit tenuous.

I’m afraid I’m going to post this pointer in one or two other places: apologies if you trip over it more often than you really want to!

ESET Senior Research Fellow


Unreal reality

When I was a teenager, back when dinosaurs ruled the earth and Disneyland was the only Disney amusement park, I was taken to said theme park for the first time. I was immediately struck by the total artificiality of the place, and the fact that everybody wanted it to be so. For some reason I could not get the idea out of my mind that if you dug a pit, entrenched sharpened stakes in the floor of it, and put up ropes to manage the line, people would line up and jump in.

I was forcibly reminded of this by a story about the coverage of the Japanese quake and tsunami, and the use of smartphones and social media to document the event and disseminate the information as never before. We are used to “reality” television, which is completely unreal, and an unusual reality strikes us as fantastic.

And I’d like to reiterate my advice to prepare for the next disaster: get trained in emergency management and response.


Great new security tech, or fraud?

While at CanSecWest, I noted a news story about how somebody had, yet again, defrauded the US government and military by selling them a terribly sophisticated computer algorithm that promised to find secret information about enemies and/or terrorists, but actually didn’t work. I suspect that this will be a complex case, since the vendor will undoubtedly claim that his work is so sophisticated and complicated that it does work; it’s just that the users didn’t understand it.

In view of this, I found it really interesting to note a very similar case just a few days later. Computerized Voice Stress Analyzers (CVSAs) have been promoted and sold for at least 25 years now. This is despite the fact that, four years ago, the U.S. Department of Justice did a study and concluded that “VSA programs show poor validity - neither program efficiently determined who was being deceptive about recent drug use. The programs were not able to detect deception at a rate any better than chance … The data also suggest poor reliability for both VSA products when we compared expert and novice interpretations of the output.”

In a sense the CVSA case is much worse, because these are private companies selling to private companies, so there is nobody to point out that the buyers are a) wasting money, and b) making poor hiring decisions based on what is essentially a coin flip.