Using time travel to detect vulnerabilities on web servers.

If I’m testing a web application for a company, there are some bits of information that I’m willing to pay for:

1) I want to know all old apps which used to reside on a server but are no longer *linked to* via that server. e.g. Did bigcompany.com have an old cgi-bin application called process_orders.pl which handled credit card info? Is that cgi-bin application no longer linked to anywhere on bigcompany.com? I want to know that so I can check for the existence of that app (sorry, but web developers err on the side of laziness…they’ll remove the link but they will often leave the app sitting in its original location). The Wayback Machine has this information. Google often has this information. Someone needs to package it up and sell it. Then, I can feed this information to my tool that looks for filename, filename.backup, filename.orig, filename.bak, etc.
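
That last step is easy to sketch. Here’s a minimal, hypothetical version (the host name and suffix list are made up; the Wayback/Google harvesting is the part someone should be selling):

```python
import urllib.request

def backup_candidates(path):
    """Lazy-developer leftovers: the link is gone, the file usually isn't."""
    suffixes = ["", ".backup", ".orig", ".bak", ".old", "~"]
    return [path + s for s in suffixes]

def exists(url):
    """HEAD-check one candidate; a real tool would rate-limit and log."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

# a path dug out of the Wayback Machine or a search-engine cache
candidates = backup_candidates("http://bigcompany.example/cgi-bin/process_orders.pl")
```

Feed `candidates` to `exists()` and anything that answers 200 is a finding.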

2) I want to know when web forms changed input parameters but still post to the same backend processing script. It should be obvious why I want this, but I’ll belabour the point for a minute. Developers never delete code ;-) , instead they just write a new class or function and call that class or function. Knowing old inputs can put the pen-tester into a position where they can take old functions or classes for a spin.

3) Cookie format changes. Similar to 2, just use old cookies instead of old POSTs. Did an old cookie have the string ‘TEMPUSER=BOB’ while newer cookies have ‘TEMPUSER=NULL’…hmmm.

4) Patch history of the server. I can get some of this from Netcraft.

5) All of the google-hacking, GHDB stuff run and packaged up for me. I have 7 Google API keys. That gives me 7,000 queries per day. It’s not enough. I really don’t want to bother with it and would pay for the nicely-formatted results. I wrote a Python program to run the queries directly against the Google web interface. However, Google easily caught my lame attempts and warned me that my usage was inappropriate. Yes, I could write a tool that used full headers (like a browser), re-used cookies, slept for a rand() amount of time between queries (like a user would), etc. etc. But that’s a pain and I’d rather just pay for it.

Lastly, and completely unrelated to all the stuff above, if you’re writing a web scanner that spiders and indexes site links, please do a full protocol analysis. Too many web scanners do line-by-line, regex-based analysis. So, if an HTML comment starts on line 3, there is an href link on line 5, and then the comment ends on line 7, the stupid scanner follows that link as if it were a normal link and never reports (as it should) that there was a *commented out* link (of much more importance, imo).
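
A minimal sketch of what “full protocol analysis” buys you here: Python’s stdlib `HTMLParser` hands the whole comment span to `handle_comment()`, so a link commented out across several lines never reaches `handle_starttag()` — and can be reported separately. (The class and the sample markup are mine, not from any real scanner.)

```python
import re
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.live_links = []
        self.commented_links = []  # these deserve their own report line

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.live_links += [v for k, v in attrs if k == "href"]

    def handle_comment(self, data):
        # hrefs inside comments: an interesting finding, not a link to crawl
        self.commented_links += re.findall(r'href=["\']?([^"\'>\s]+)', data)

page = """line1
line2
<!-- old nav
<a href="/cgi-bin/old_orders.pl">orders</a>
end of comment -->
<a href="/index.html">home</a>"""

auditor = LinkAuditor()
auditor.feed(page)
```

A regex-per-line scanner sees the line-5 anchor and crawls it; the parser correctly files it under `commented_links`.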

!Dmitry


SinFP, TCP options, and tool redux

SinFP is the shiznet. Why?
0) map out open firewall ports to backend hosts. This is very nice.
1) only a single open port required (how many times do you actually get CLOSED and OPEN ports in a pen-test against a hardened server…uh, never)
2) fast (3 packets)
3) relatively stealthy (how many IDS engines are flagging on valid SYN packets with valid options)

IMO, there are 2 major players in the OS fingerprinting space. Namely, nmap and xprobe2. I’m not gonna waste your time running tests against tons of different boxes. Let’s just run the 3 tools against my win2k3 SP1 server which is running without any firewalls (for the benefit of this test)…

First up, we have the incumbent champion nmap. Nmap steps up to the plate…there’s the pitch…the swing…
OS details: Microsoft Windows .NET Enterprise Server (build 3604-3790)

Ouch. Swing and miss. Nmap is batting .000

Next up is xprobe2. The swing, swing, swing, swing, swing, …, swing
[+] Host 10.10.10.8 Running OS: "Microsoft Windows 2003 Server Standard Edition" (Guess probability: 100%)
[+] Other guesses:
[+] Host 10.10.10.8 Running OS: "Microsoft Windows XP SP2" (Guess probability: 100%)
[+] Host 10.10.10.8 Running OS: "Microsoft Windows 2003 Server Enterprise Edition" (Guess probability: 100%)
[+] Host 10.10.10.8 Running OS: "Microsoft Windows 2000 Workstation SP2" (Guess probability: 100%)
[+] Host 10.10.10.8 Running OS: "Microsoft Windows 2000 Server Service Pack 1" (Guess probability: 100%)
[+] Host 10.10.10.8 Running OS: "Microsoft Windows 2000 Server Service Pack 4" (Guess probability: 100%)
[+] Host 10.10.10.8 Running OS: "Microsoft Windows NT 4 Workstation" (Guess probability: 100%)
[+] Host 10.10.10.8 Running OS: "Microsoft Windows NT 4 Workstation Service Pack 4" (Guess probability: 100%)
[+] Host 10.10.10.8 Running OS: "Microsoft Windows NT 4 Server Service Pack 1" (Guess probability: 100%)
[+] Host 10.10.10.8 Running OS: "Microsoft Windows NT 4 Server Service Pack 5" (Guess probability: 100%)

That’s just ugly. 10 swings and one foul tip. Xprobe2 is batting .050.

Next up is the relative newcomer, SinFP. Here comes the pitch…the swing:
IPv4: HEURISTIC0/P1P2P3: Windows: Microsoft: Windows: 2000
IPv4: HEURISTIC0/P1P2P3: Windows: Microsoft: Windows: 2003 (SP1)

SinFP bats .500.

Playing with SinFP made me curious (again) about how devices handle bogus options data. It had been a few years, so I wrote a quick script that ran 7 elementary tests:

0) Options section is limited to 40 bytes…Let’s go past that boundary.
1) Options are of the format [Kind][LEN][values]. Give a bogus LEN byte
2) Insert arbitrary EOL
3) Fiddle around with the reserved KINDS (27-255)
4) Runts with bogus TCP offset
5) Giants with bogus TCP offset
6) replay options (i.e. keep repeating the same KIND,LEN,VALUE until we get close to 40)
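
For reference, a few of those malformed blobs are easy to build by hand with `struct`; splicing them into the options field of a crafted SYN (and fixing the data offset) is left to your packet library of choice. This is my own sketch, not the original script:

```python
import struct

def tlv(kind, value=b""):
    """Well-formed [Kind][LEN][value]; LEN counts the whole option."""
    return struct.pack("BB", kind, 2 + len(value)) + value

EOL = b"\x00"
mss = tlv(2, struct.pack("!H", 1460))               # a legitimate MSS option

bogus_len = struct.pack("BB", 2, 40) + b"\x05\xb4"  # test 1: LEN claims 40, carries 2
early_eol = EOL + mss                               # test 2: EOL first, live data after it
reserved  = tlv(200, b"\xde\xad")                   # test 3: a reserved KIND (27-255)
replayed  = (mss * 10)[:40]                         # test 6: same TLV repeated up to 40 bytes
overlong  = mss * 11                                # test 0: 44 bytes, past the 40-byte limit
```

Any parser that trusts LEN, stops only at EOL, or assumes KIND < 27 will trip over at least one of these.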

While testing, I segfaulted a very, very popular open source IDS package. I leave the exact option packet and the buggy software as an exercise to the astute reader ;)

Other ‘devices’ have similar difficulties.

Fyodor’s top (50 * 2) is out. Is it just me, or has not much changed over the years?

Peace be unto Ye

!Dmitry


Riemann, an engineer, and Van Gogh walk into a bar…

I was building some chicken and rabbit coops the other day. The object of the endeavor being to keep the snakes, rats, foxes, raccoons, etc. away from:

1) my animals’ food supply (my money)
2) my animals (my food), and
3) the by-products of my animals (again, my food).

Building the coops was like building a defensible network. This got me thinking about the differences between a mathematician (a theorist – aka me), an engineer (an applied theorist), and an artist.

My finished work looked like what a blind man would create given a chainsaw, plywood, a stack of 2×4s and a nail gun. Nothing was plumb, only the internal flooring was level, liberal use of poultry netting was required in order to shore up gaps, etc. That’s what you get when a math guy builds something. Production ceases when functionality is addressed. Period.

My brother is an engineer. His chicken coops, dog pens, horse fences, etc. are all optimal. “Optimal” being the key word. If he needed to, he could expand any one of his habitats. I, on the other hand, would have to rebuild mine from scratch if I ever needed more space.

My other brother is an artist. If he built a chicken coop, it would be a beautiful thing to behold but wouldn’t contain any tar because it didn’t go with his color schemes. It wouldn’t have front steps because that would detract from the ornate design of the front door. Poultry netting would be verboten… In short, his animals would all die horrible deaths in beautiful surroundings…and then, he would write a poem or short story about it while crying over a bottle of wine.

Lessons learned.

1) Theorists can be used in a limited capacity when building a defensible network…they should not be depended upon to deliver scalable, optimal products, however. Idea guys shouldn’t be implementing their ideas.

2) Engineers design and implement solutions which the idea guys throw at them.

3) Artists should never be used until the engineer is done. Sorry. Artists take the well-engineered framework and make it look like the Taj Mahal.

So, when creating defensible networks (or software that protects defensible networks) you should plan on dividing your labour into at least 3 categories:

A) Idea guys come up with the great ideas. These guys could be mathematicians, philosophers, jazz history majors, marketers, whatever. The net result of the idea guys is…(wait, here it comes)…unique ideas.

B) Engineers validate that the ideas can be implemented and then build it.

C) The artists come in and make the engineers’ work look a lot better. The Marketing guys will appreciate this.

With only A & B, you get products that the ‘techies’ will love, but will never get approved through a Corporate budget committee (think CANVAS). This product won’t sell very well…sadly.

With only A & C, you get products that the ‘techies’ will love initially and that the Corporate budget committee will approve. However, the techies will quickly learn to abhor the product as it doesn’t scale…and then, some artist will find a buffer overflow in the security product and that’s that (think ISS). This product will sell (sadly). However, the company will often have to ‘reinvent’ themselves…And, by ‘reinventing themselves’ they actually mean ‘We screwed up initially so we’re just scrapping the whole thing and starting over but please keep paying on that support contract because you’re gonna *love* our next version’

With only B & C, you don’t have an ‘idea’ to begin with…so, you end up with well-engineered, snazzy products that don’t really do anyone any good (think IPS). Don’t underestimate the persuasive powers of marketing artists…this product will sell ;)

Peace be unto ye,

!Dmitry


All I want for Christmas is this Google feature

Here is what I want for Christmas. Yes, it’s only May, but I figure I had better get my requests in quick ;)

I want to be able to search for binary files by checksum. Simple, right?

A few applications where I would find this useful:

1) I pay some creative genius thousands of dollars to create my company icons (or web page, or whatever). I’d like to checksum those files and google for pages which have image files which match that checksum. I’d also like to know when Google first noticed that file on website X (i.e. did I receive stolen icons or were my icons stolen?).

2) OK, I really only have one reason for requesting this new feature…however, I bet spooks worldwide could make use of this as well. Inject some kiddy pr0n onto a P2P network and then use your BigBrotherIsh passive sniffers to see who/where your pr0n ends up.
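
The search key itself is trivial to produce — the hard part is Google indexing it. A sketch (SHA-256 is my choice here; the icon bytes are a stand-in):

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """The proposed search key: a strong hash of the raw file bytes."""
    return hashlib.sha256(data).hexdigest()

icon = b"\x89PNG\r\n\x1a\n"   # stand-in for the expensive custom icon
fp = file_fingerprint(icon)   # this is what you'd paste into the search box
```

Hash every image you paid for, file the fingerprints away, and the “who stole whose icons” question becomes a lookup.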

That’s it. I’d like to thank Google for considering my request ;)

!Dmitry


Enron – the pain keeps coming

Note: I posted this to slashdot along with proof of the Private data. It has not yet been approved.

A year (or more) ago, a large batch of Enron emails were released to the public. This data set has been very useful from a ‘Research’ perspective. Just this weekend, I was using it to test the speed of PCRE vs Python vs Perl…until I happened upon a little nugget of information which led me to look at the dataset from a Security/Privacy perspective.

It appears that these emails include data which violates individual Privacy. The data includes, but is not limited to, account information for non-Enron applications (FTP login credentials, web credentials, etc.), parent-teacher school data, private residence addresses, private residence phone numbers, names and Social Security Numbers, and more.

Where did the Enron emails come from? The United States Federal Energy Regulatory Commission. That’s sad.

Some examples (I stripped out the SSN or Credit Card number with X’s, and changed the name/address):

A Social Security Number

To: Patti Thompson/HOU/ECT@ECT
cc: Sally Beck/HOU/ECT@ECT, Shelly Jones/HOU/ECT@ECT
Subject: Summer Intern Information

Patti:

The following intern will be in Sally’s department this summer:

Name Start Date SS#

Jane Doe May 22, 2000 XXX-XX-XXXX

Please let me know the CO# and Cost Center#.

If you have any questions, I can be reached at x35850.

Thank you.

-sap

Another Social Security Number

From: christina.valdez@enron.com
Subject: Tom Hopwood
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Badge #15518 – SS # XXX-XX-XXXX

A Credit Card purchase

Date: Thu, 10 May 2001 08:07:00 -0700 (PDT)
From: john.arnold@enron.com
To: ticketwarehouse@aol.com
Subject: Re: eBay End of Auction – Item # 1236142249
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

199.99+ $18 o/n shipping = 217.99

Visa 4128 XXXX XXXX XXXX exp X/XX

shipping and billing address :
John Arnold
XXXX XXXX XX
Houston, TX 77002
XXX-XXX-XXXX


A few humble observations regarding the current state of InfoSEC

Steven M. Christey cross-posted these questions to a bunch of InfoSEC lists. Here’s what I think.

1) What is the state of vulnerability research?

It’s piss poor, on average. There are a few standouts but, for the most part, it’s largely comprised of blowhard, grandstanding fools.

2) What have researchers accomplished so far?

There are a few standouts who are doing unique research and there are a bunch of brokeback-hack-alongs who content themselves with drilling little holes around the big holes already drilled out by the aforementioned standout researchers. What do I mean by this? OK, one researcher decides to go in and really figure out what makes PROTOCOL_X tick and writes a nice tool that automates this testing. Then 550 researchers snag the free version of the tool and find 50 related bugs. They don’t care that all they did was crank up a GUI tool that only required that they point it at an IP and click ‘start’ – they’ll happily prance and parade around their trivial little lemmas like they just did something.

3) What are the greatest challenges that researchers face?

It would *seem* that their greatest challenge is getting their names published. These ‘researchers’ often bitch and whine because vendors don’t “take security seriously” and won’t “release a patch for this G-dawful flaw that I just found”, etc. This can all be loosely translated to “They’re not paying attention to me, publishing my name and showering me with accolades”.

4) What, if anything, could researchers accomplish collectively that they have not been able to accomplish as individuals?

I don’t know and I don’t need to know because it’ll never happen. Even (especially?) dummies are smart enough to realize that the larger the group of researchers, the higher the chance that they end up on the short end of Zorn’s lemma.

5) Should the ultimate goal of research be to improve computer security overall?

No! Let’s be honest here. The good researchers do *it* because *it* is in their blood. Hacking is an ART FORM, and if you don’t get that, then you won’t get anything else here today (or any other day). Hackers (let’s quit calling them ‘Security Researchers’ shall we? Good Security Researchers are hackers and hackers are artists, so says I) were born figuring out how scheisse works and they just happen to work with the MEDIUM of internet technologies. Tag research with some altruistic goal and they’re prone to go apply their talents elsewhere. You may be saying “but so-and-so has stated that the goal of all their research has been to increase computer security for mom, pop, and little Tommy…blah blah”. So-and-so is lying or they are already independently wealthy from all their previous research and now deceive themselves that their goal was something other than ‘scratching an itch’ or making money.

6) What is an “elite” researcher? Who are the elite researchers?

I think I already alluded to this in item 2 (see the researchers doing unique research…). They are kind of like the mafia. If you live in the neighborhood, you know who they are.

7) Who are the researchers who do not get as much recognition as they deserve?

I don’t know. I’m sure there are some but if they choose not to be recognized then leave them at that. Researchers get as much credit as they want. Even the brokeback-hack-alongs get recognition. The true artists who don’t get recognition do it by choice, imo.


Researchers and Wiki and Pederasts – Oh my!

It’s a busy month and only getting busier but I wanted to post some random crap that’s been semi-bugging me. Sorry I don’t link to the actual articles below. Google is your friend.

1) Driving a car doesn’t make you an engineer. Similarly, running someone else’s fuzzer doesn’t make you a researcher. The rash of LDAP flaws released by [*cough*] security researchers is coming. In fact, it’s already started. In related news, an LDAP-fuzzing script was recently posted on Ptacek’s blog. So, there you have it. I’ll just call these people Script-Researchers. I should take a few minutes and really bust on some folks, but I’m kinda supposed to be working right now…maybe some other time.

2) Nice ICMP Kernel bug! I told you the layer 3/4 bugs still had some play ;)

3) Nice FreeBSD layer2 wireless bug. I called it. I rule. And, there’s still more. Go get ’em, script-researchers! (oh…wait…no one has released a script yet…never mind)

4) Bejtlich and Ptacek are having a little bit of a discussion regarding Wikipedia futures. Bejtlich claims that a wiki-worm which alters content will kill Wikipedia (or something like that, I’m just paraphrasing). Ptacek says that’s a bit extreme. No one really cares what I think, but here’s my take on it.

a) as wiki becomes more and more popular, the user to maintainer ratio increases in favour of the user.
b) it only takes one incorrect entry (incorrectly acted upon) to turn a user away from wiki
c) wiki will become an untrusted entity and people won’t use it for any sort of serious research.

5) Dateline did another installation of ‘Catch the Pederast’ starring the folks from perverted-justice.com. Funny stuff which underlines the problem of the Internet giving safe harbour (or yahoo group membership) to every and any class of whack-job. If you’re an 18-year-old Slovakian with a penchant for licking yellowed toenails while wearing a thong around your head, you can meet a fellow stub-licker from Peoria, Illinois via Al Gore’s baby. Remember how airplanes were supposed to bring about world peace because now people could travel to all parts of the world? Remember how the Internet was gonna bring people together in world peace? Reality is a cruel mofo. But hey, we’re all getting rich off it. Go us!


The Human Stain

The world turns and changes,
but one thing does not change.
In all my years, one thing does not change,
However you disguise it, this thing does not change:
The perpetual struggle of good and evil.

-T.S. Eliot

I re-rented _Der Untergang_ from the video store last night. I’ve seen it twice before but this was the first time I was able to watch it end-to-end. What does that have to do with morals or information security? To summarize, the Nazis worked out their own moral code and then enacted and followed through with it. These morals weren’t adopted overnight. Instead, they were gradually *accepted* over the course of years. In one chilling scene, Goebbels, regarding the German people, states: “They gave us a mandate, and now their little throats are being cut.”

Recent incidents between large U.S. companies and oppressive governments have raised some interesting moral questions. History has shown us that man-made morals can lead to the deprecation of human rights. The time to stop the logical conclusion of a faulty moral system is during the CREATION of the moral system. We are the builders and policy enforcers of much of the Internet. Given this, our actions (or inactions) are actively shaping what will become the *working* moral code. With respect to Human Rights, many security researchers are contributing their time and efforts (their mandate) to enhancing the Human Rights of oppressed individuals (TOR, for example). Some of us are doing nothing.

One of Google’s 10 Philosophical tenets states: “You can make money without doing evil”. That’s true, but it doesn’t really imply anything. Better would have been: “We will strive to only do good”. The choice to do evil or good is an individual one. As security professionals, we are in a position to take actions which will do more than just pad our wallets. We are a vocal and highly visible group. We *can* make a difference. If you don’t already have one, consider investing your time or resources to a Human Rights project.

!Dmitry


I’m not me and that’s OK.

(in keeping with my ‘purging’ theme, I’m gonna release old blog posts that I meant to come back and clean up. These are just scattered remnants of long-gone ideas…)

There seems to have been a movement, of late, which discourages the use of pseudo-identities (or masks). Despite the enormous work being done to ensure anonymity for the masses, there seems to have been a shift in expectations. I don’t know why this is. The arguments *for* masks are pretty easily laid out. For example:

“Man is least himself when he talks in his own person. Give him a mask, and he will tell you the truth.”
-Oscar Wilde

“We created open circles that allowed us to don and discard different masks at will — and we could dance behind these masks without fear of discovery or ridicule.”
-NMRC

Good and Evil bound me above and below, and how I feel or act at any given moment is rarely either extreme. I am capable of everything, nothing, and all points in between. I am continuous – white, grey, and black being only 3 colours on a palette of infinite possibilities. I am free to tell the TRUTH or LIE. Conform or rebel. Offer up the hilt of my sword in friendship or bury my blade in an unsuspecting soul. I am, in a word, free.

!Dmitry


How to get a job with pen-testing team.

It’s cold and gloomy outdoors. I’m feeling pretty faded (errr, jaded) right about now. I’m sure all you corporate hangers-on have seen the Big-whatever companies come in with their pen-testing or audit teams. Some of them call themselves pen-testing, some Tiger, some white-hat hacker, whatever. They should just state that they are inept p0sers. But, that gets me thinking (on just such a day) what it would take to get hired at one of these Big-whatever companies. So, without further ado:

Rule 1 – You can’t run Windows. Seriously, don’t even consider showing up to a Con|interview|class|etc with Windows. Even if you have to run a CD distro, or OpenBSD at runlevel 3, you must do it. You will be scoffed at and not taken seriously with a Windows machine. For bonus points, put con stickers or anti-Microsoft stickers on the laptop. You get extra bonus points if you’re running a Mac. Just pull up Safari and browse over to slashdot. Yeah, you’re rolling hardcore now.

Rule 2 – You must have complete and utter disdain for any authority figure. You’re the rebel – the misunderstood creative genius. Act the part.

Rule 3 – You must be a coder of some sort (‘Hello world’ is sufficient). Ruby and Python are pretty cool right now. C is an old standard and always well respected. If you’re running one of those GUI APIs that really makes things much easier, STOP. It’s not cool. gcc or death.

Rule 4 – You’ll have to be a Goth, punk, or (less bonus points) a long-hair. You must dress and look the part. Yes, Dave Aitel showed up to Defcon wearing a shirt and tie…but, hey, he’s Dave. If you’re not Dave, you have to look like a meth junkie, sorry. There *are* bonus points for piercings and tattoos.

Rule 5 – On some elite mailing list, you must have gotten a wink (both ‘;)’ and ‘;-)’ are acceptable) from some security guru. !wink == !cool (incidentally, I just satisfied rule 3 – Go me!)

Rule 6 – You must have a ‘Niche skill’. Not only must you have the niche skill, you must talk about it a LOT. Certain skills are worth more than others, so I’ll do a quick rundown on which skills generate the most bonus points. If it’s not on this list, then it’s worth negative points and you should avoid it at all cost.

Reversing – Crank up IDA Pro, put on that “I’m so busy doing really, really important reversing that you dare not ask me any questions” look and watch those bonus points ROLL IN!

Writing exploits or shellcode – Still very cool. Try to be seen with either a .s file open (use vi editor, don’t make the mistake of using emacs or pico or, G-d forbid, a GUI editor) or gdb. In a crunch, you can have a .c file open, but don’t make it a habit. You’ll need to work on that “don’t bother me look”, lest someone ask you wtf you’re doing.

Fuzzing – Do NOT tell anyone that you use a commercial or open-source fuzzer. That’s like -500 bonus points. No, my friend, you write your own fuzzers. “Yeah, cuz like, SPIKE wasn’t doing enough pairwise-relationships between parameters so I had to like, write my own fuzzer that took advantage of like binary relations across multiple fields and stuff and like, I’d explain it to you but it’s really complicated and like …” ad infinitum.

TCP/IP Ninja – Really low on the spectrum. It used to be really cool but now, unless your name is Kaminsky, you’re not really getting much spin with this one. Maybe when people figure out that there are still bugs to be found at layers 2, 3, and 4 of the stack this will get some rejuvenation…but, until then, I don’t recommend this one.

Rule 7 – You must be the project owner of some arbitrary project… Have some pet project that you supposedly work on all hours of the night. Send out emails at all hours of the night (use cron if you have to) telling your boss that you have a great idea for some cool new reversing/fuzzing/exploiting-shellcode_generating-morphing-inline-tcp-ip-ninja-death-ray machine that you are working on. If they ever ask to see a working demo, take the coder’s moral high road (i.e. make up some reason why you are so elite that you dare not try the tool until you’ve tweaked out some bugs…or whatever)

Rule 8 – Coherent statements are not for you. That’s right, even if you have to go back and add in typos, do it. I should probably give a few examples.

Bad email – Good evening Mister Jones, I was just working on my project for that Death Ray auto-pen-testing machine and wondered if you had any feedback regarding how we would handle shellcode delivery across SCADA or process control networks. Further, as I am putting in so much time with this project, I may need to be a little late tomorrow morning.

Good email – hey. so, im rewrking the shellcode delivrey mechanism for teh scada and pc networks and if you had anyhthing to add before I commit thes to CVS then can you shoot me an email. I might be in late tomorrow depeending on how son I get thes bugs worked out.

That’s about it. Good luck, I’m sure I’ll be seeing you soon.

!Dmitry


All’s well with 802.11.

So, 802.11 is maturing. Big companies which have avoided being early adopters may be getting ready to roll out their 802.11 networks or RFID. You might want to hold on just a sec.

There’s a mini-thread on the DD about wireless insecurities. That’s cool. However, in my humble opinion, the major flaw with wireless devices is that they are forced into the position of being a network edge device without any hardening whatsoever. Putting an 802.11 card (driver) or an AP at the edge of a network is like putting the Coast Guard on the front lines of an infantry battle. It just doesn’t fit.

Security researchers seem to be interested in drilling little holes around the larger holes drilled so many years ago. That’s typical. However, I don’t expect it’ll be too long before all the encryption weaknesses, hijacks, replay attacks, etc. get played out and folks go after the INHERENT weakness with 802.11. Where do most 802.11 drivers process layer 2 traffic? Think about that. Voltaire said it best:

‘…And as with quaking voice
mortal and pitiful ye cry, “All’s well.”
The universe belies you, and your heart
refutes a hundred times your mind’s conceit’
–Voltaire


US Cert numbers don’t really matter

I read the US Cert ‘year-end’ numbers. I’ve watched everyone and their mother hop up to defend the Open Source side of things…and, at the end of the day, it doesn’t really matter to me.

Here are my *feelings* regarding 2005.

1) There were more bugs; however, most of these were application bugs in 3rd party software that ran on top of the OS and many of the applications were downright marginal. I call these flaws ‘sourceforge newbie flaws’ (or r0t flaws). All in all, I feel as if 2005 was a better year wrt security.

2) It used to be that Windows was inherently insecure and Linux/FreeBSD/OpenBSD/etc. were more secure. Now, I feel as if Windows is as secure as *nix. Back in the ’90s, I would spend more time writing a fuzzer than it took to run the fuzzer and find a flaw in Windows. All of the flaws were skin-deep. Now, the Windows flaws are more deeply buried (i.e. much harder to find). It’s getting much, much harder to find flaws in Windows machines. Windows security *is* getting better and will continue to get better. At the same time, Windows functionality continues to grow in leaps and bounds. You can draw your own conclusions.

3) 2005 was a grace period for Mac users. Here’s a prediction: 2006 will be hard on the Mac.

4) Information Security is no longer an infant. The killer apps have been developed, marketed, and sold. What’s left is the ‘professionalization’ of Information Security. In the ’90s (and even into 2005), it wasn’t unusual for the Information Security team to play by a different set of rules (Cowboys?). We moved fast and loose in those days. Those days are, imo, dead and gone.

!Dmitry


Change Control : How does it benefit the Security team and how do you go about enforcing it?

Most organizations should be aware of Change Control (hereafter referred to as ‘CC’), the process of only making system or network changes within a set of parameters. This should, of course, be addressed in your policy. CC has several direct benefits to the Security Team.

By requiring all critical machines to fall under CC, you don’t need a fancy finger-printer to tell you what you are running (and where). In one company where I worked, we had a policy which stated that we (the Security group) could scan *any* non-critical machine at *any* time. Period. We then gave a month for the business units to enroll their Critical machines. We ended up with around 4000 machines (3000 more than we thought we would get). We then had each business unit assign an asset value to the machine. At this point, we had a database of all critical machines as well as the exact asset value, OS, patch version, IP addresses, interfaces, hotfixes, running applications, system owner, hardware utilized, system location, etc. At this point, we hardly even needed our scanner any more. For example, if a flaw comes out in IIS 6.0, we know exactly which critical systems need to be addressed first (see my post on asset categorization and CVSS).
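
The payoff is that “a flaw comes out in IIS 6.0” becomes a one-liner against the inventory, sorted by asset value. A toy sketch (hosts, fields, and values are all made up):

```python
# A stand-in for the CC inventory database described above.
critical = [
    {"host": "erp01", "os": "Windows 2003", "apps": ["IIS 6.0", "SQL 2000"], "asset_value": 9},
    {"host": "web07", "os": "Windows 2003", "apps": ["IIS 6.0"],             "asset_value": 4},
    {"host": "hr-db", "os": "AIX 5.2",      "apps": ["Oracle 9i"],           "asset_value": 7},
]

def patch_order(inventory, app):
    """Machines running the flawed app, highest asset value first."""
    hits = [m for m in inventory if app in m["apps"]]
    return sorted(hits, key=lambda m: m["asset_value"], reverse=True)

queue = [m["host"] for m in patch_order(critical, "IIS 6.0")]
```

No scanner run required: the database already knows what’s where, and the asset values set the order of work.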

Having a Security Team member attend the weekly CC meetings allows you to have input into proposed changes. If a new business partner connection is coming up over the weekend or a new remote location is connecting into the Corporate network, you now have a SAY in how these changes come about (or even *if* the changes will come about). Too many people rely on their scanners and finger-printers to tell them how the network is laid out. IMO, you can get this by having a good CC process.

By participating in the CC process, you minimize your Risk of being blamed for network or system outages. Believe it or not, scanning critical systems can cause them to fail. The Security Group should only scan critical resources within an assigned CC window. This can stop a lot of finger-pointing.

I’m gonna go way OT for a second. Large organizations often have critical machines which are highly unstable. Not many pen-testers get access to multi-million dollar hardware or software. There are probably a thousand undocumented bugs in HP-UX, DG-UX, SCO, AS/400, AIX, Oracle, SAP, PeopleSoft, etc. As a security team, you can easily fall into the rabbit-hole of communicating and QA’ing each of these flaws to the vendor. I know I’m on a slippery slope here, but you have to PRIORITIZE your time. I have a finite security team with a finite amount of time. I’m sorry that the vendors have written such sloppy code; however, my goal is to protect the company’s resources while enabling the business. As I identify flaky applications on critical systems, I put them on a separate network behind a firewall. These devices should, typically, only be connected to by certain machines on certain ports. Let the firewall go to work for you. Minimize the Risk. Have the business unit assume the remaining Risk. Get on with life.

How do you go about enforcing CC? Of course, you are monitoring every critical machine, right? You have designed a network which is defensible, recoverable (that whole DR/BCP thing) and designed around critical resources, right? If that’s true, then you just have to monitor changes. I, personally, will know within 30 minutes if a new service pops up on a critical machine. I’ll know if the critical machine is initiating odd connections out of their allowed range. To enforce CC on your critical machines, you must simply have the ability to detect changes on these systems. And, you must have a compliance team that can address infractions with both the culprits as well as the higher-ranking brass. Believe me, after a few months of ENFORCING the CC, you’ll rarely find changes outside of a CC window. I know this from experience.
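The detection half of that can be sketched very simply: diff snapshots of what a critical machine exposes, taken some minutes apart. The snapshots below are hand-made examples; in practice they would come from netstat/lsof output or a scan run inside an approved CC window.

```python
# Minimal sketch of the change-detection idea: diff two snapshots of
# listening services on a critical machine. Example data is invented.
def diff_services(baseline, current):
    """Return (added, removed) sets of (port, service) pairs."""
    return current - baseline, baseline - current

baseline = {(22, "sshd"), (80, "httpd"), (443, "httpd")}
current = {(22, "sshd"), (80, "httpd"), (443, "httpd"), (31337, "unknown")}

added, removed = diff_services(baseline, current)
if added:
    print("ALERT: service(s) appeared outside a CC window:", sorted(added))
```

Anything in `added` that doesn't map to an approved CC ticket goes straight to the compliance team.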

I’m gonna go OT one last time. With respect to ‘early adoption of new technologies or applications’, most organizations that I have worked for severely frown on it. Code or applications which have not been extensively tested are rarely allowed to pass through CC. Not being an early adopter can save your butt BIG TIME! Large corporations are like a big ship…i.e. they tend to move slowly and have a hard time quickly reversing themselves [1]. It may take months or years to fully roll out an Enterprise solution. If that solution turns out to be the *wrong* solution, it’ll take at least as long to undo your error.

!Dmitry

[1] actually, the analogy doesn’t stop there…large ships must also be compartmentalized in order to ensure that a single hole doesn’t sink the entire ship…but, that’s a whole ‘nother post :)

Eat your own dog food, or you’ll end up eating…

There’s nothing worse than heading off into battle, singling out a crippled host, attacking, and getting thoroughly routed (errr, r00ted) in the process. How does that happen? Has it happened to you?

My informal (and incomplete) list of reach-around attacks:

1) reverse SPIKING – Named after Aitel’s popular SPIKE fuzzer, this sort of attack targets scanners. Set up a pseudo-service which returns non-standard data. How many scanners take the ‘Server:’ field and display the text string in a report? What if the ‘Server:’ field is a script? What if the ‘Server:’ field is 4097 bytes long? What if the scanner stores the return data in a database and the return data contains specially formatted SQL syntax? Does the scanner cap off its receive buffer? You get the idea. Are these types of attacks popular? No. Are they low-hanging fruit for the security researcher? You bet.
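The banners such a pseudo-service might hand back can be sketched like so. The payloads are illustrative and not aimed at any real scanner; the point is that whatever parses, displays, or stores the ‘Server:’ field is itself under test.

```python
# Sketch of "reverse SPIKE" banners for a fake HTTP pseudo-service.
# Payload strings are invented examples of the three cases above.
def hostile_banner(kind):
    if kind == "oversized":   # does the scanner cap its receive buffer?
        server = "A" * 4097
    elif kind == "script":    # does the HTML report escape the field?
        server = "<script>alert('scanned the scanner')</script>"
    elif kind == "sql":       # does the results DB use bound parameters?
        server = "Apache'; DROP TABLE results; --"
    else:
        server = "Apache"
    return ("HTTP/1.0 200 OK\r\n"
            "Server: " + server + "\r\n"
            "Content-Length: 0\r\n\r\n").encode()

banner = hostile_banner("oversized")
```

Serving these from a listening socket is only a few more lines with the standard `socketserver` module; the interesting part is what the scanner does with the reply.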

2) Passive device content-parsing flaws – Pretty well documented already. This sort of attack targets network devices which listen for traffic not destined for them. Just send malformed data or packet headers on the wire and watch the IDS/sniffer/router/whatever choke. Snort, ISS, and Ethereal have, in the past, been targets of this sort of attack.

3) Reverse trophy hunting – This attack targets the *individual* who comes back after a successful scan in order to gather trophies (screen shots, deposit a file, etc.). Start with a pseudo service (SSH, telnet, FTP, etc.) with an easily guessed or NULL passwd. The scanner finds the vulnerability and the individual running the scanner comes back to grab their trophy. However, the service isn’t real and aims to exploit the SSH|telnet|ftp|whatever client when it attempts anything more than a log-in. Did the individual come back with an old version of wget in order to grab a screen shot of a vulnerable web app? Did the individual just download and run your ActiveX component because they thought they were logging into a browser-based terminal services session? Did the individual use a vulnerable SMTP client in order to test the open relay that the scanner found? Etc! You get the idea.

4) Sourceforgeish Trojan – This attack targets both security professionals as well as ne’er-do-wells (to a lesser extent). Set up a project on some website. The software contains a Trojan. Now, wait a few weeks and publish a flaw in the software. Wait for all the security researchers to come and download the software so that they can generate signatures for their IDS, exploit code for their auto-rooters, checks for their scanners, etc. Ouch.

5) Google hacking hack – This sort of attack also targets both security professionals and ne’er-do-wells. So, the Security team ran their new google-hacking tool and just found a ‘password.xls’ spreadsheet on your website via a google query. Now, they’re closing in for the kill…except, the xls they just downloaded and are gleefully opening has a Trojan/buffer overflow/malicious macro/whatever. Reversal. 2 points.

6) Fool’s gold – This sort of attack targets the file voyeur. Just set up your nfs/smb/ftp/web/whatever file share with insecure permissions. On the share you might have something like a modified version of some popular executable (i.e. executable + Trojan), a malicious office file, a media file which overflows the pen-tester’s version of Media Player, etc. Build it and they *will* come.

I find 4 – 6 very interesting. You see, the files that the security team *rushes* to open are files that they would never even think about opening if you sent them the files directly. If you sent them an executable, they would run it in a sandbox after running 10 different virus scanners against it, reverse it, throw it on vmware and sniff every packet leaving the machine, etc. Yet, if they *find* the file, they’ll often open it without a second thought. Somehow, the thrill of the chase and the impending kill temporarily blind them. They are so intent on the hack that they don’t even notice the *hack*. Or, maybe it’s the fact that the target appears so weak and they think themselves so strong? I don’t know why these work so well, but they do. Don’t believe me? Set up an insecure share and crank up your machine at the next security conference.

Is there a point to all this? Sure. Audit your tools to ensure that they resist these sorts of attacks; ensure that your processes address any possible shortcomings in your pen-test; and, lastly, check that ego at the door.

!Dmitry

Hiring new technical security personnel in 2006

A security group is compromised (or should be comprised) of many different types of people. One of the subsets of the security group should be the engineers (or techies). These are the folks that will be ‘down in the weeds’ configuring firewalls, designing networks, pen-testing, writing or testing tools, etc. What skills should we be looking for in these people?

When hiring new security engineers, some (many?) of us will be looking for Education. Some will be looking for credentials or certifications. Some of us will be looking for experience. Here’s what I’ll be looking for (in order of preference).

1) Honesty. Don’t let the fox in with the hens.

2) Drive. If a person loves what they are doing, they will spend more time doing it. With respect to infosec, these ‘driven’ individuals will rapidly absorb and retain security-related information. Look for these people to traverse the learning curve very quickly.

3) Critical thinking. In my opinion, it’s not what you know, it’s how you deal with what you don’t know.

4) Real-world smarts (aka “common sense”). I need someone who can ask both the hard and the easy questions. Contrary to what Elton John would have us believe, “Why” often seems to be the hardest word to say.

5) Experience.

Traits 1 – 4 are MANDATORY. I won’t hire a ‘techie’ without those traits. Trait 5 is optional (i.e. nice to have on top of the important stuff).

Happy New Year and good luck with those new hires :-)

!Dmitry

Asset categorization (or, why I like CVSS)

A security group *must* know the value of the assets that they are protecting. Ideally, you determine this value *before* designing your security infrastructure. You cannot design an optimized security architecture without defining critical assets…yet, I see it happening all the time. Security gets worked in on the back end. That’s a problem.

Along a similar vein, vulnerability scanners are great tools if deployed at the correct time and used correctly. However, a vulnerability scanner cannot tell you the monetary worth of the system that it has just scanned. I’ve seen too many companies that crank up Nessus, run a scan of an entire /16 block, and then start remediating from the top of the report to the bottom. Again, that’s a problem.

So, how does that tie into CVSS? Well, CVSS is a system for assigning a numeric value to a specific flaw. There are a number of factors which go into determining this value; however, the end result is just a number between 0 and 10. This information, coupled with the asset value, gives you a clearly defined list of remediation priority. Multiply the asset value with the CVSS score. Presto! You have a prioritized list to give to your Compliance team.
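The multiplication is trivial, which is exactly the point. A sketch, with invented scores and asset values:

```python
# Sketch of asset-value x CVSS prioritization; all data is illustrative.
findings = [
    {"host": "web01", "flaw": "IIS 6.0 RCE", "cvss": 9.3},
    {"host": "erp01", "flaw": "SAP weak default password", "cvss": 7.5},
    {"host": "kiosk3", "flaw": "IIS 6.0 RCE", "cvss": 9.3},
]
asset_value = {"web01": 900000, "erp01": 2000000, "kiosk3": 1000}

for f in findings:
    f["priority"] = f["cvss"] * asset_value[f["host"]]

# Hand the sorted worklist to the Compliance team, top first.
worklist = sorted(findings, key=lambda f: f["priority"], reverse=True)
for f in worklist:
    print(f'{f["host"]}: {f["flaw"]} (priority {f["priority"]:.0f})')
```

Note how the same CVSS 9.3 flaw lands at both the top and the bottom of the list; the asset value, not the score alone, decides what gets fixed first.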

!Dmitry
