Hiring Hackers – as speakers (part 2)

Continuing from Hiring Hackers – as speakers (part 1):

Are those who conduct breaches and intrusions of computer systems important sources of information?

I suppose it seems intuitively obvious that the answer is “yes.”  After all, these are the people who are breaking into the things we want to protect: surely they know how.  However, with a little consideration, the “obvious” answer evaporates.

First of all, in purely logical terms, it is not necessary that those who break into systems know all possible ways to do so.  In practice, it is true that many attacks these days involve multiple vulnerabilities, but logically it is only required that the attacker knows one.  This truism is well known, in slightly different form, in relation to testing and systems development: testing can be used to prove the presence of bugs, but never their absence.  Or, as I frequently point out in relation to system security, the attacker has a much easier job than the defender.  The defender must be correct in every single instance and activity.  The intruder only has to be right once.
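That asymmetry is easy to put in back-of-envelope numbers. Here is a sketch in Python (the per-control flaw rate is an invented figure, purely for illustration):

```python
def breach_probability(n_controls, flaw_rate):
    """Chance that at least one of n independent controls has an
    exploitable flaw, if each fails with probability `flaw_rate`.
    Independence is a simplifying assumption."""
    return 1 - (1 - flaw_rate) ** n_controls
```

Even if each control is 99% solid, a hundred of them leave the defender exposed almost two times out of three (1 - 0.99**100 is about 0.63); the attacker only needs that one gap.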

Therefore, the interloper has the easier job, and can afford to be lazy.  If they can be lazy, they probably will be lazy: that is human nature.  (After all, a number of people would argue that blackhats have already shown themselves to be morally lazy.)  As the proverb has it, everything is always in the last place you look.  Once you’ve found it, why keep on looking?

(Oh, curiosity, you say?  Well, curiosity is great: it keeps us learning.  But it is hardly the exclusive preserve of those on the wrong side of the law.  In addition, properly identifying, researching, and documenting what you find, in such a way that it will be useful to others, tends to require a lot of boring work, and discipline.)

So, at the very least, we can say that attackers have no advantage in terms of scope and a comprehensive view of vulnerabilities, and may be at a disadvantage.

Do intruders have any advantage in depth of knowledge?  This is almost impossible to answer in any meaningful way, of course.  Individuals vary in knowledge, comprehension, analytic ability, and creative or imaginative thought.  Despite years of attempts to create testing instruments and metrics for cognitive processes, we have only the most general ability to predict a specific person’s accomplishments in the real world.  We do know that ability varies widely, and it would be foolish in the extreme to contend that all whitehats would be as able as any given blackhat.

However, that said, I would suggest that it should be possible to assert that, collectively, security professionals are more knowledgeable than intruders.  This is due to my earlier argument: those people who have had more demands (even sometimes arbitrary demands) placed upon them will have more discipline (and more background) to address the problem.

The argument is sometimes made that we should study “successful” exploits.  The hypothesis here is a bit harder to dissect: after all, a “successful” exploit is simply one that works.  It is true that certain attacks are more effective in a given environment, and that intrusions or infections which work over very large numbers of systems tend to involve a number of factors, not all of them technical.  Historically, though, it seems that the most astounding and newsworthy of attacks are as much a surprise to their authors as they are to the rest of us.  It is unlikely in the extreme that our adversaries have these events fully planned, or understand all the determinants of an overpowering offensive.

It is a truism that two heads are better than one: this is recognized by fields as diverse as auditing and extreme programming.  This statement is formalized, in the open source community, by Linus’ Law: with sufficiently many eyeballs, all bugs are shallow.  Most systems professionals would recognize that the more people examine a system, the better (in terms of identification of vulnerabilities).  The “Hire a hacker” crowd tends to jump on this in advancing their cause: why not listen to the attackers when they come up with a new exploit?

This, however, is a spurious argument.  There is no choice between listening to an intruder or not knowing about the vulnerability at all.  Once a vulnerability is known, it can be explained by anyone who understands it and can present it accurately and clearly.

Which brings up a final point.  As I said in the earlier piece, blackhats tend to have more-than-healthy egos.  Yet their opinion of their own prowess is seldom supported by the materials they produce in evidence.  I’ve read a great many “zines” produced by those in that community (and even the occasional book ostensibly written by a reformed or active hacker) and almost never have I found anything worth reading, either for the technical content or in regard to readability.  (Yes, those who have read my book reviews will know that I don’t think highly of all technical books, but sometimes I do find one worth reading.)  And, in fact, reading the books by professional authors who base their text on “as told to” information from those on the dark side gets to be very boring and repetitive as well.

Writing is a skill, and not everyone can do it well.  Teaching is a skill, and not everyone can do it well.  (Presenting at conferences is a slightly different skill and, as anyone who has ever attended a conference can tell you, not everyone can do it well.)  Both writing and teaching require, as well as certain technical competencies, a feeling and empathy for a large and often ill-defined audience.  Since criminal hackers have clearly demonstrated, by their actions (and continue to demonstrate, in subsequent interviews long after their intrusions, conviction, and even release), a lack of consideration for their victims, it is unlikely that they would make good teachers.

Or conference speakers.


Take it underground

This post was written because a very good friend of mine asked me to send them a mail about decent reasons to use Tor and explore the Onion net.  So thank you (you know who you are); this post will be followed by another, more detailed post on the Onion net soon.

Okay, so with all that’s been going on in the world lately, I’m starting to think that we should really start moving things underground.  By underground, I mean that we should start encrypting our traffic more, making use of the means that we have available to us, and helping to support them more as a security community.

The things in the world that I’m referring to are not only UK-based, either; here are a few examples:

Pirate Bay – Guilty Verdict

Mobile Phone Tracking

CCTV Cars

Directive 2006/24/EC Of The European Parliament And Of The Council

It seems that we are seeing more and more of the world’s governments moving towards an Orwellian culture, and I for one really don’t feel comfortable operating in this way.

You may be asking yourselves at this point: what can we do to stop this?  The honest answer is, really not that much right now.
We can, however, start to move our information systems somewhere else, somewhere more secure, and we can all help others to secure their online habits by setting up Tor relays.

The more relays the Tor network gets, the better it is for everyone involved.  If you can’t configure a relay, or just don’t want to, then, if at all possible, please donate to the Tor project here.
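For those who do want to run one, a relay takes only a few lines of torrc configuration.  This is a minimal sketch (the nickname and bandwidth figures are placeholders; check the option names against the current Tor manual):

```
# /etc/tor/torrc -- minimal middle-relay sketch
ORPort 9001                     # accept relay traffic on this port
Nickname MyRelayNickname        # placeholder; pick your own
RelayBandwidthRate 100 KBytes   # sustained bandwidth to donate
RelayBandwidthBurst 200 KBytes  # short-term burst allowance
ExitPolicy reject *:*           # middle relay only: no exit traffic
```

The `ExitPolicy reject *:*` line is worth noting: it makes you a middle relay, which still helps the network but keeps other people’s traffic from leaving the network through your connection.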

So please people, if you value your privacy at all, please help the Tor project out in any way that you can, even if it’s translating articles.

Below are a few links that you may find useful:

Tor Overview

Volunteer

Download

This may seem like a shameless Tor plug, but I can assure you that it’s not: I am in no way related to the Tor project at this point in time, but I really feel that it’s an extremely worthwhile project, and I plan on getting a lot more involved.  The project has come a long way in the two years that I’ve been using it, and the more users we get contributing, the better the anonymity and speed get.

Keep it safe and private people.


Hiring Hackers – as speakers (part 1)

By the time you read this, CIO magazine will probably have already done its “In Cloud We Trust” Webcast.

The ISSA, ready to provide links to any security-related activities, inadvisedly advertised the Webcast.  I say inadvisedly, because the Webcast, or at least the promotional material, features Kevin Mitnick.  This juxtaposition created a bit of a furor over the fact that a prestigious security institution was promoting a former computer criminal.  (It is entirely possible that Kevin Mitnick rather enjoyed the discomfiture of the ISSA, since the ISSA had the effrontery, in 2003, to turn down his application for membership.)

All of which sparked yet another debate, in at least one venue, over the advisability of hiring or attending to (for the purposes of security), those formerly convicted of computer crimes.

Feelings are strong, and tempers rather short, when this topic comes up for discussion.  Passions are surprisingly high on both sides of the debate.  However, I would like to attempt to present some opinions on the matter.

(I’m not going to speak about the Webcast itself.  As chance would have it, I’ll have to be getting on a bus at about that time in order to go downtown.  To speak to an ISSA meeting.)

Those who feel that hackers can and should be hired suggest that those best qualified to protect systems are those who have broken into them.  We, in defence of our systems, should not let foolish moral quibbles stand in the way of gaining the best information and advantage that we can.

I am on the side that opposes the use of former criminals.  I do not disagree with the risk management analysis of those on the pro side, but I feel that it is based on faulty assumptions.  My objections to the hiring of hackers are practical as well as moral, and, in terms of ethical analysis, lie in the area of practical morality.

In order to address the practical issues, I have to clarify, and separate, the different types of help we think we are going to get from cybercriminals.  Do we employ them for security management and administration?  Do we hire them for penetration testing?  Do we use them as security consultants?  Or do we just listen to them in seminars, webcasts, and conferences?

This last is the most difficult to oppose.  What is the harm in listening?  Should we not take every opportunity to learn all that we can about security?  Why block ourselves off from an important source of information?

So, I’ll address this first.

What is the harm in listening?  Well, we aren’t just listening, are we?  First off, most “reformed hackers” aren’t exactly doing this out of the goodness of their hearts.  Those who are on the lecture circuit generally make pretty good money out of it.  A lot of them make more than most legitimate security researchers, analysts, and consultants.  Then there are the spin-off benefits in books, workshops, and just plain advertising for John Q. Hacker’s Security Consulting.

Money isn’t the only benefit, though.  I’ve always been interested in the social side of technology, and for more than twenty years I’ve been studying those on the dark side.  Most of these people are charter members of Egos-Я-Us.  Not all of them, but certainly enough to make it pretty much a defining characteristic.  Given a choice between money and a chance to grab the limelight, they might have to stop and think about it.

Regardless of whether we are paying cash or just stroking egos, one thing we are definitely doing is tacitly promoting the importance of what they have done.  We are saying that it is better, in the sense of obtaining security information, to break into systems than to study in other ways.

And I’ll address that later.


US Congress PCI hearings

What could be worse: a vague and hastily thrown together mashup of security protections masquerading as a security framework or standard, or having the government get into the act?  Now you don’t have to choose: you can have the worst of both worlds!  Follow the US Congress hearings on PCI!  Or, follow the commentary on the hearings on Twitter (which is fairly random and noisy, but probably makes just as much sense).


If Cane Toads, why not computer viruses?

Those in the Australian state of Queensland are having a cull of cane toads, a pest.  I don’t know whether it would work, but the mass reduction of a pest population is, generally speaking, a good thing.  It may not eliminate the problem once and for all, but a sharp decrease in population is usually better than a constant pressure on a species.

So, is there any way we can get some support going for a mass cull of computer viruses?  Most currently “successful” viruses are related to botnets, and botnets are often used to seed out new viruses.  Viruses are used to distribute other forms of malware.  Doing a number on viruses would really help the information security situation all around.  (I have, for some years, been promoting the idea that corporations, by sponsoring security awareness for the general public, would, in fact, be doing a lot to reduce the level of risk in the computing and networking environment, and therefore improving their own security posture.)


Major Browsers Pwnd

0day exploits for Internet Explorer, Firefox, and Safari were used to own machines at the Pwn2Own contest @ CanSecWest 2009. Is now the time for someone to port Windows 3.1 to MIPS and install a good telnet client? Roffles.

Credit www.dailygalaxy.com for the fierce FF/IE photo :)


Exploiting our security models?

I’m sitting in CanSecWest.  We’ve just had a talk on platform-independent static binary code analysis.  (It isn’t really platform independent: it just translates from specific instruction sets.  Not that it isn’t cool: REIL is a sort of RISC version of an assembly version of pseudocode.)  The presentation, and what they’ve done so far, is fairly abstract.  They are approaching the analysis with a type of Turing machine and, with a sort of lattice-based state machine model, hoping that the transforms they can see with their model are close enough to what the actual program will do on an actual machine to tell you if there is the possibility of a bug or an exploit.

So, it’s kind of complex.  We are applying some highly abstract, theoretical stuff, pretty directly to the real world.
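To make the flavour of that abstraction concrete, here is a toy sign-lattice analysis in Python.  This is my own illustration, not REIL’s actual lattice, but it shows the basic trade: the TOP element over-approximates, and over-approximation is exactly where false positives come from.

```python
# Toy abstract interpretation over the sign lattice {NEG, ZERO, POS, TOP}.
# TOP means "could be anything": sound, but imprecise.
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def abstract(n):
    """Map a concrete integer to its abstract sign value."""
    if n < 0:
        return NEG
    if n == 0:
        return ZERO
    return POS

def abs_add(a, b):
    """Abstract addition: what sign can x + y have, knowing only signs?"""
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    if a == b and a in (NEG, POS):
        return a          # pos+pos stays pos, neg+neg stays neg
    return TOP            # pos+neg, or anything involving TOP: unknown

def covers(abstract_val, n):
    """Soundness check: the abstract answer must include the concrete one."""
    return abstract_val == TOP or abstract_val == abstract(n)
```

The analyser never lies (`covers` always holds), but once a value hits TOP it must assume the worst, so perfectly safe code can get flagged.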

Now, in the abstract world, it’s been more than 25 years since Fred Cohen proved that this type of thing will never completely work.  Either you are going to get an infinite number of false positives (false alarms, where you spend time chasing down problems that aren’t problems), or an infinite number of false negatives (which is our current situation with security: our tools aren’t telling us about the problems that do exist), or both.

(One of the authors responded to this point that he chose to err on the side of false positives.  A reasonable position if you are doing research.)
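Cohen’s proof is essentially a diagonal argument, and the gist fits in a few lines.  This Python toy (my own illustration, not Cohen’s formal construction) builds a program that “does the bad thing” exactly when the detector says it won’t:

```python
def make_contrarian(detector):
    """Return a program that misbehaves iff `detector` calls it clean.
    Any detector is therefore wrong about its own contrarian."""
    def program():
        # Returning True stands in for "doing the malicious thing".
        return not detector(program)
    return program

def says_everything_is_clean(prog):
    return False   # never flags anything

def says_everything_is_a_virus(prog):
    return True    # flags everything

p1 = make_contrarian(says_everything_is_clean)   # misbehaves, called clean
p2 = make_contrarian(says_everything_is_a_virus) # behaves, but flagged
```

The first detector produces a false negative on its contrarian, the second a false positive; no detector escapes one or the other.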

However, this system is so complex that it got me thinking: they are hoping that the model and transforms they have put together are close enough to reality to give them useful results and help, but they really don’t know.  What if we are now at the point where our security tools and models, themselves, have gaps that can hide problems, and be exploited?

(There was a reason the original security models were so simple …)


Everything new is old again – vulnerability management

Yes, we have to know, and assess, and analyze, and manage, vulnerabilities.  Yes, it is a complicated task.  So now this is the next big thing, is it?

Well, when you look at it, it is the same task we have always had to do under the name “risk management.”  Except that it is narrower in compass and more limited in extent and application.  There is nothing particularly wrong with concentrating on one aspect at a time–as long as you realize that is what you are doing, and don’t think that you are somehow seeing something new.


Exploits of the Week #4

Megacubo 5.0.7 Download & Execute Remote Exploit

JJunior

PHP GD Library Information Leak Exploit

Hamid Ebadi

Destiny Media Player 1.61 “lst file” Local Buffer Overflow Exploit

Encryt3d.M!nd

VMware Remote DoS Exploit

Laurent Gaffie

Konqueror 4.1 XSS & Crash Exploits

staker


Exploits of the Week #3

Amaya Web Browser

SkD

FreeBSD 6x/7 protosw kernel Local Privilege Escalation Exploit

Don “north” Bailey

Doop CMS CSRF/Upload Shell Remote Exploits

x0r

Ultimate PHP Board

athos

Google Chrome Browser Remote Parameter Injection

Nine:Situations:Group::bellick&strawdog


Exploits of the Week #2


Internet Explorer 7 XML Buffer Overflow ‘All-In-One’ Exploit

krafty

MS SQL Server Heap Overflow Exploit

Guido Landi

Barracuda Spam Firewall SQL Injection

Marian Ventuneac

CUPS pstopdf Filter Local Exploit

Jon Oberheide

Coolplayer Local Buffer Overflow Exploit

r0ut3r


Metasploit’s Decloak, v2


Metasploit Decloak is back online.  Decloak (v2) now identifies the real IP address of a user using a slick combo of “client-side technologies and custom services”.  v2 also works regardless of the user’s proxy settings.  The only public technology that it cannot get through is a PROPERLY CONFIGURED Tor+Torbutton+Privoxy setup, HDM mentions.

You can read more about it and if you haven’t already, give it a whirl.


Fuzzing’s Impact on Vulnerability Discovery


I just saw the new advisory for Opera, headlining a ‘memory corruption’ vulnerability that sounds like it’s triggered by specially crafted HTML construction, at least as far as can be gathered from this almost incoherent ‘detailed’ description of the bug:

“Certain HTML constructs affecting an internal heap structure. As a result of a pointer calculation, memory may be corrupted in such a way that an attacker could execute arbitrary code.”

I often wonder when I see advisories like this if the vulnerabilities have been found by fuzzing.

Another bug in Adobe Flash Player, found by iSEC (and which I also discuss here), also looks to have been found by fuzzing, and this is implied nearly directly in the advisory.

“iSEC applied targeted fuzzing to the ActionScript 2 virtual machine used by the Adobe Flash player, and identified several issues which could lead to denial of service, information disclosure or code execution when parsing a malicious SWF file. The majority of testing occurred during 120 hours of automated SWF-specific fault injection testing in which several hundred unique control paths were identified that trigger bugs and/or potential vulnerabilities in the Adobe Flash Player. Paths leading to duplicate issues where condensed down to a number of unique problems in the Adobe Flash Player. The primary cause for these vulnerabilities appears to be simple failures in verifying the bounds of compartmentalized structures.”

Now, both of these examples could have been found by means other than fuzzing, but every time I see advisories like those, it makes me wonder.  By the way, IMHO Fuzzing: Brute Force Vulnerability Discovery is a great book and a great read.  Kudos to its authors as well.

You can browse a list of fuzzers hosted by PacketStorm to exercise your mind even more.

So what do you think?  Have fuzzers, being at worst ‘trivial’ to write under ideal conditions (a well-documented protocol, and so on), taken a strong hold in many security researchers’ work?
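For what it’s worth, here is how trivial the core of a mutation fuzzer can be.  This is a toy sketch in Python (the target parser and its magic-number check are invented for illustration), nothing like the instrumented, coverage-guided harnesses a shop like iSEC would run:

```python
import random

def mutate(data, n_flips=4, seed=None):
    """Return a copy of `data` with a few randomly flipped bits --
    the core of the simplest mutation-based fuzzer."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        pos = rng.randrange(len(buf))
        buf[pos] ^= 1 << rng.randrange(8)
    return bytes(buf)

def fuzz(target, corpus, iterations=100):
    """Feed mutated corpus samples to `target`; collect inputs
    that make it throw, along with the exception raised."""
    crashes = []
    for i in range(iterations):
        sample = mutate(random.choice(corpus), seed=i)
        try:
            target(sample)
        except Exception as exc:
            crashes.append((sample, exc))
    return crashes
```

A real harness would add coverage feedback, corpus minimization, and crash triage, but even this naive loop will shake bugs out of a brittle parser in seconds.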


Convenience charge?

I’m sure all of you have Ticketmaster horror stories.  (Anyone who has ever bought a ticket through Ticketmaster, that is.)  I needed to get a ticket to an event last night.  As usual, the only way to get it was through Ticketmaster, and, as usual, the entire process was annoying from beginning to end.

As I was paying, I was noting the various extraneous charges that increased the price from the face value (which includes the tax, of course) to roughly 25% more than that.  The one that struck me was the “convenience” charge.

Convenience?  Convenience?  Who decided this system was convenient?  I’m old enough to remember the days when you called, on the phone, and got an actual person, who was associated with the group, or at least the theatre itself, and could tell you what tickets and places were available on what days.

OK, I’m old.  But leaving aside issues of efficiency and greater profit margins, what was the person smoking when they decided that this system was convenient?  In order to find decent tickets, at decent prices, I had to look up, individually, every single performance, and then search for tickets, separately, in each price range.

And, of course, every time I searched for tickets (I wasn’t told what tickets were available, mind you, no, the system decides what tickets it’s going to offer me), I had to go through the ReCAPTCHA process (which we were just discussing here).  And, as my wife, looking over my shoulder as I went through this delay every single time asked, why?  It certainly doesn’t provide any security at all.  Yes, I know that you are only supposed to get one of the words right, but I’m fairly certain that, in all the queries I did on the system, there were a few where I got neither of the words right.  (Several of them were just blobs.)  So why is it there?  I suppose it is partly security theatre, and partly it is so that Ticketmaster can get a little goodwill for supporting the book transcription project.  (Of course, Ticketmaster isn’t supporting the project, you are, whether you want to or not.)

Spare me from convenience …


Everything new is old again – baked in security

Now, believe me, I have only the greatest of sympathy with the intent of this phrase.  Yes, I agree that we’ve been hamstrung and hampered by insecurities due to sloppy programming, and we desperately need to have more secure software development practices.

It’s just nothing new, that’s all.

I mean, we’ve been preaching this for years.  Decades, really.  Ask any old programmer what he, she, or it was taught way back in the old days.

Structured programming.  Top-down programming.  The waterfall method.

And documentation.  I especially like internal documentation.  If you don’t like documentation you can have a moment of pity for my (occasional) programming students.  When they hand in a project it has to have internal documentation in the source code, and it has to be clear and make sense.  (They lose marks if they don’t and it doesn’t.)  As far as I’m concerned, if you can’t say what you are doing, you don’t know what you are doing.

And if you know what you are doing, you do it right.


DNSSolutions


The flaw discovered by Dan Kaminsky put a forthright scare into the entire internet community — and it should have.  This attack, which is trivial in nature, could make the difference between sending all your private data to the secure server across the ocean, or to a happy hacker filling his/her eyeballs with goodies.

But now, since everyone has been woken up, there are two mainstream proposed solutions that hope to end the insecurity in DNS: DNSSEC and DNSCurve.  Which one should you bet your network’s integrity on?  Better hope you’re patched, or you might get bailiwicked.  Let the enlightenment begin.
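How trivial?  The attacker only has to guess a 16-bit transaction ID before the real answer arrives, and can retry the race over and over.  A rough back-of-envelope sketch of the odds in Python (ignoring source-port randomization, which is the post-Kaminsky mitigation):

```python
def poisoning_odds(spoofs_per_race, races):
    """Probability of winning at least one cache-poisoning race,
    firing `spoofs_per_race` forged responses per attempt against
    a 16-bit (65536-value) transaction ID space."""
    per_race = min(spoofs_per_race / 65536, 1.0)
    return 1 - (1 - per_race) ** races
```

A hundred forged packets per race and a few thousand races make success all but certain, which is why the unpatched flaw was so alarming.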

DNSSEC, or Domain Name System Security Extensions, is a suite of IETF specifications for securing certain kinds of information in DNS.  Recently, lots of companies have been gearing up to implement DNSSEC as a means of securing DNS on the Internet.  One man who opposes DNSSEC has written his own code to provide a nicer, more secure solution, one he considers far better than DNSSEC.  He calls it DNSCurve.

DNSCurve uses high-speed, high-security elliptic-curve cryptography to improve and secure DNS.  Daniel J. Bernstein, the creator of DNSCurve and of other high-security servers such as qmail and djbdns, doesn’t want DNSSEC implemented, but DNSCurve instead.  And there is no question which one is the better choice after looking at the comparisons Bernstein makes between the two now-rivals.

Some huge advantages of DNSCurve over DNSSEC are encrypting DNS requests and responses, not publishing lists of DNS records, much stronger cryptography for detecting forgeries, (some) protection against denial-of-service attacks, and other improvements.

There is one quick, unrelated issue on which I disagree with Mr. Bernstein.  After offering $500 “to the first person to publish a verifiable security hole in the latest version of qmail”, he states: “My offer still stands. Nobody has found any security holes in qmail”.  But in 2005, Georgi Guninski found one, and confirmed exploitability on 64-bit platforms with a lot of memory.

Bernstein denied the claim, stating: “In May 2005, Georgi Guninski claimed that some potential 64-bit portability problems allowed a “remote exploit in qmail-smtpd.” This claim is denied. Nobody gives gigabytes of memory to each qmail-smtpd process, so there is no problem with qmail’s assumption that allocated array lengths fit comfortably into 32 bits.”  Now, to me, and I am sure to many other people as well, an exploitable bug is an exploitable bug.  Conditions sometimes have to be met, and requirements “can be carried too far”, as one might put it, but in this case it is clear that Guninski found at least one exploitable bug in qmail.  Game over.  No disrespect to Mr. Bernstein or his code; he has both great code and great concepts.  On with the main topic.

So, if I were a betting man (and I am), I would gamble on Bernstein’s all-around great approach to making DNS safer, more resilient against attacks, and definitely more secure.  Hopefully, people will realize money can’t solve all our problems, but the guys who know what they are doing can, and might just make some things happen pretty soon.
