Recursive DNS servers as a growing DDoS problem

hi guys.

we discussed recursive dns servers before (servers that let anyone query anything through them – including domains they are not authoritative for).

the attack currently in the wild is a lot bigger and more complicated than this, but to begin, here is an explanation (by metaphor) of that part:

spoofed icmp attacks have been around for a while. how many of us still see quite a few icmp echo replies stopped at our borders? these replies come to us due to spoofed attacks using our addresses.

now, imagine it – only bigger:

introduce an amplification effect.

since udp does no handshake and is easily spoofed, an attacker can send a small query with a forged source address.
the server then returns a much larger response – one that may itself be fragmented into several packets – to the spoofed victim.
that’s where the amplification effect starts.
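to make the multiplier concrete, here is a back-of-the-envelope sketch in python. the packet sizes below are assumptions for illustration, not measurements – a minimal query is on the order of 60 bytes, while a large (edns0) response can approach 4096 bytes:

```python
QUERY_BYTES = 60        # spoofed query sent to the open resolver (assumed)
RESPONSE_BYTES = 4096   # large response delivered to the victim (assumed)

def amplification_factor(query_bytes, response_bytes):
    """Bandwidth multiplier the attacker gains per spoofed query."""
    return response_bytes / query_bytes

factor = amplification_factor(QUERY_BYTES, RESPONSE_BYTES)
print(f"amplification: ~{factor:.0f}x")          # → amplification: ~68x

# under these assumptions, a modest stream of spoofed queries becomes
# a far larger flood aimed at the victim:
print(f"10 Mbit/s of queries -> ~{10 * factor:.0f} Mbit/s at the victim")
```

the exact numbers matter less than the shape: the attacker pays for the small queries, the open resolvers pay for the large responses, and the victim receives all of them.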

both the attacked server and the unwilling participants in the attack – the recursive servers – experience a serious dns denial of service, which keeps getting amplified considerably.

one of the problems is obviously the spoofing. let us, metaphorically and wrongly, treat it for a minute as the remote exploit.

the second part of this problem is the recursive server, which for the moment we will wrongly treat as the local exploit.

obviously both need to be fixed. which is easier i am not so sure.

in the past, most network operators refused to implement best practices such as bcp38 (go fergie!) because they saw no reason for the hassle. it comes back to: “if it isn’t being exploited right now, why should i worry about it?”

well, it is being exploited now, and will be further exploited in the future. combating spoofing on the internet is indeed important and now becoming critical.

removing the spoofing part for a second: this attack vector can easily be replaced with, as one example, a botnet.

a million trojaned hosts sending in even one packet a minute would cause quite a buzz – and they do. now amplify the effect through the recursive servers and…

so, putting the spoofing aside, what do we do about our recursive servers?

there are some good urls on that subject:

the recursive behaviour is necessary for some authoritative servers, but not for all. as a best practice for organizations, as an example, the server facing the world should not also be the one facing your organization (your users/clients). limiting this ability to your network space is also a good idea.
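as a sketch of what limiting recursion to your own network space can look like, here is a hypothetical bind 9 named.conf fragment – the documentation prefix 192.0.2.0/24 stands in for your real address space:

```
// named.conf fragment (BIND 9) – illustrative addresses only
options {
    recursion yes;
    // only our own users may recurse through this server;
    // everyone else gets authoritative answers at most
    allow-recursion { 127.0.0.1; 192.0.2.0/24; };
};
```

the world-facing authoritative server, per the advice above, would simply run with recursion off.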

if you would like to check for yourselves, here is a message from duane wessels [1] to the dns-operations [2] mailing list where this is currently being discussed:

if anyone has the need to test particular addresses for the presence of open resolvers, please feel free to use this tool:

it will send a single “recursion desired” query to a target address.
if that query is forwarded to our authoritative server, the host has an open resolver running at that address.
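if you would rather see the moving parts than use the tool, here is a rough python sketch of the same idea: send one query with the rd (recursion desired) bit set, and check whether the answer comes back with ra (recursion available) and an answer record. the packet layout follows rfc 1035; only probe addresses you are authorized to test:

```python
import socket
import struct

def build_query(name, txid=0x1234):
    """Minimal DNS query with the RD (recursion desired) bit set."""
    flags = 0x0100                      # QR=0, RD=1
    header = struct.pack(">HHHHHH", txid, flags, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)   # QTYPE=A, QCLASS=IN
    return header + question

def looks_like_open_resolver(response):
    """True if the response has RA set and at least one answer record."""
    if len(response) < 12:
        return False
    txid, flags, qd, an, ns, ar = struct.unpack(">HHHHHH", response[:12])
    return bool(flags & 0x0080) and an > 0        # 0x0080 is the RA bit

def probe(target_ip, name="example.com", timeout=3.0):
    """Send one RD query to target_ip:53. Test only hosts you own!"""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_query(name), (target_ip, 53))
        data, _ = sock.recvfrom(4096)
        return looks_like_open_resolver(data)
    except socket.timeout:
        return False
    finally:
        sock.close()
```

this is a simplification of what duane’s tool does – his version confirms the query actually arrived at their authoritative server, which is a stronger test than just reading the ra flag.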

dan (da man) kaminsky and mike schiffman have done some impressive work on this subject, outlined in dan’s latest shmoocon talk.
they found ~580k open resolvers.

i suggest those of us who need more information or help go to the dns-operations mailing list from oarc (see below) and ask the experts there, now that this is finally public.


full technical details on how the attack works at:

gadi evron,

[1] duane wessels – dns genius and, among other accomplishments, the author of dnstop.
[2] dns-operations –

[changed title from: recursive dns servers ddos as a growing ddos problem]


Microsoft and a week-long Security Advisory fix process [UPDATED]

The process for fixing Microsoft’s Security Advisories pages is not the fastest I have seen.

On 16th February I noticed several mysterious non-working links at advisory Vulnerability in Windows Service ACLs. Service Pack download links and the CVE-2006-0023 reference pointed to the following target directory: Settings/Temporary Internet Files/Local Settings/Temporary Internet Files/OLK4D

and its subdirectories. All folders located under OLK4D were named ‘H’, ‘J’ etc. As we know, Windows uses names of this type when generating subdirectories under the Temporary Internet Files folder. You can’t see these in Windows Explorer; you have to use the Command Prompt – DIR /A is worth trying ;-)

I informed MSRC immediately after noticing these errors. Needless to say, clicking these links generated a typical “We’re sorry, the page you requested could not be found” 404 page. Microsoft fixed the links on Friday, Feb 24th – after _eight_ days.

There was a similar case related to the Sober advisory #912920 earlier, too. AV vendor links pointed to the Desktop folder – McAfee’s W32/Sober link, for example.

When visiting these links, the browser was redirected to a Desktop Deployment page. Odd.
I checked the HTML source code too, and this was the result:

Symantec’s real URL

pointed to


The next question is whether these directories came from the internal publishing system or from workstations used in the publishing process.

Update: Added information about previous issue in Sober.X Security Advisory 912920 etc.


OSX/Inqtana False Positive

It’s old news that Sophos briefly took their corporate eye off the ball and released an IDE (virus identity file) that incorrectly detected Inqtana.B in some application files on OS X Macs. While the incident seriously inconvenienced some users and sites by necessitating reinstallation of some misdiagnosed programs, the vendor did replace the offending file very quickly, apologised, and put in place measures to avoid a recurrence.

Worryingly, however, some have seen this incident as an argument for jettisoning commercial anti-virus in favour of an open source solution. Is there a place for volunteer AV in the workplace, though? As a supplement, sure, as long as the organization and the end-user realise the limitations of the genre. I don’t doubt the motives of the public-spirited purveyors of AV freeware. The commercial AV vendors are not whiter than white, and of course they have a commercial agenda, but they have to meet standards of functionality and support in order to stay in the marketplace. Perhaps now, when malware authors seem to have rediscovered the Mac platform, is not the best time to put all your worm-free Apples in one basket, or to entrust the corporate crown jewels to software that doesn’t detect all known malware on that platform, offers no guarantees of freedom from future FPs, and doesn’t offer professional levels of service and technical support?


Enron – the pain keeps coming

Note: I posted this to slashdot along with proof of the Private data. It has not yet been approved.

A year (or more) ago, a large batch of Enron emails were released to the public. This data set has been very useful from a ‘Research’ perspective. Just this weekend, I was using it to test the speed of PCRE vs Python vs Perl…until I happened upon a little nugget of information which led me to look at the dataset from a Security/Privacy perspective.

It appears as if data is included within these emails which violates individual Privacy. The data includes, but is not limited to, Account information to non-Enron applications (FTP login credentials, web credentials, etc.), Parent-teacher school data, private residence addresses, private residence phone numbers, Names and Social Security Numbers, and more.

Where did the Enron emails come from? The United States Federal Energy Regulatory Commission. That’s sad.
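As a sketch of the kind of scan that surfaces such records – not the exact patterns I used – here is a minimal Python triage script. The regexes are crude heuristics, will produce false positives, and are only meant to show how a corpus like this could be flagged for review:

```python
import re

# Heuristic PII patterns – a sketch, not production-grade detection.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CC_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # crude card-number shape

def flag_pii(text):
    """Return a list of (kind, match) pairs found in one email body."""
    hits = [("ssn", m) for m in SSN_RE.findall(text)]
    hits += [("card?", m) for m in CC_RE.findall(text)]
    return hits

sample = "Jane Doe   May 22, 2000   123-45-6789"
print(flag_pii(sample))   # → [('ssn', '123-45-6789')]
```

Run across the full dataset, even heuristics this crude surface the kinds of records shown below.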

Some examples (I stripped out the SSN or Credit Card number with X’s, and changed the name/address):

A Social Security Number

To: Patti Thompson/HOU/ECT@ECT
cc: Sally Beck/HOU/ECT@ECT, Shelly Jones/HOU/ECT@ECT
Subject: Summer Intern Information


The following intern will be in Sally’s department this summer:

Name Start Date SS#

Jane Doe May 22, 2000 XXX-XX-XXXX

Please let me know the CO# and Cost Center#.

If you have any questions, I can be reached at x35850.

Thank you.


Another Social Security Number

Subject: Tom Hopwood
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Badge #15518 – SS # XXX-XX-XXXX

A Credit Card purchase

Date: Thu, 10 May 2001 08:07:00 -0700 (PDT)
Subject: Re: eBay End of Auction – Item # 1236142249
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

199.99+ $18 o/n shipping = 217.99

Visa 4128 XXXX XXXX XXXX exp X/XX

shipping and billing address :
John Arnold
Houston, TX 77002


Reports say OS X 10.4.5 cracked for non-Apple Intel PCs

According to a new RealTechNews article:

“… today a hacker named Maxxuss released a patch which updates MacOS to 10.4.5 and enables it to run on non-Apple Intel-based PCs.”

This hasn’t been covered in the news at all yet.

The article links to the Maxxuss Release Announcements page, which shows ‘Last Updated: 23-Feb-2006’.

The weblog of Maxxuss, announcing ‘non-official information on Mac OS X for the x86 platform’, is located at

This came only a week after the news about the ‘Don’t Steal Mac OS X’ poem embedded into OS X.


Bypassing SSL in Phishing

here is a bit of “new stuff” (now old) that now becomes partially public from our friends at f-secure, and is very disturbing.

rootkits, ssl and phishing:

haxdoor is one of the most advanced rootkits out there. it is a kernel-mode rootkit, but most of its hooks are in user-mode. it actually injects its hooks into user-mode from the kernel — which is really unique, and kind of bizarre.

so, why doesn’t haxdoor just hook system calls in the kernel? a recent secure science paper has a good explanation for this. haxdoor is used for phishing and pharming attacks against online banks. pharming, according to the anti-phishing working group (apwg), is an attack that misdirects users to fraudulent sites or proxy servers, typically through dns hijacking or poisoning.

we took a careful look at (detection added 31 jan, 2006). it hooks http functionality, redirects traffic, steals private information, and transmits the stolen data to a web server controlled by the attacker. most (all?) online banks use ssl-encrypted connections to protect transmissions. if haxdoor hooked networking functionality in the kernel, it would have a hard time phishing, since the data would be encrypted. by hooking at a high-enough api level it is able to grab the data before it gets encrypted. apparently haxdoor is designed to steal data especially from ie users, and not all the tricks it plays work against, for example, firefox.
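to illustrate why hooking at a high api level defeats the encryption, here is a benign python analogy. none of this is haxdoor’s actual code – it is a kernel-mode windows rootkit – and every name below is made up for the sketch:

```python
# Why hooking *above* the encryption layer matters: a toy model.
captured = []

def encrypt(data: bytes) -> bytes:
    """Stand-in for the SSL layer; xor is a placeholder for real crypto."""
    return bytes(b ^ 0x5A for b in data)

def submit_form(data: bytes) -> bytes:
    """High-level API: what a browser calls with the user's plaintext."""
    return encrypt(data)

# A hook below encrypt() would only ever see ciphertext. A hook on the
# high-level submit_form() sees the plaintext – which is exactly why a
# phishing rootkit interposes at the API level.
real_submit = submit_form

def hooked_submit(data: bytes) -> bytes:
    captured.append(data)            # plaintext, before encryption
    return real_submit(data)

submit_form = hooked_submit
submit_form(b"user=alice&pass=hunter2")
print(captured)   # → [b'user=alice&pass=hunter2']
```

the ssl tunnel is intact end to end; the data was simply stolen before it entered the tunnel.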

financial organizations that rely on encryption to secure web transactions can contact me for details on who to actually talk to for answers, if they haven’t been contacted by now – and this is the least of their worries.

gadi evron,

[corrected the title from: bypassing ssh in phishing]


Several bugs fixed in Bugzilla

Several security advisories have been released about three fixed security issues in the Bugzilla bug-tracking system. Even systems developed for software bug tracking have their own bugs. ;-)

More details about these issues are available in Secunia’s new SA18979 advisory (all issues described), BID16738 (the ‘whinedays’ parameter issue) and BID16745 (the user credential redirect issue). There is no separate Bugtraq ID for the RSS reader title encoding issue (it is more an XSS issue in RSS readers than a bug in Bugzilla itself). A more detailed description of the SQL injection type ‘whinedays’ issue is located in the Bugzilla Bug #312498 entry. Secunia’s severity level is Moderately Critical, 3/5. It seems that this vulnerability report is the first rated Moderately Critical since December 2003 (Secunia’s product database has more details if you are interested). FrSIRT rated these issues as Moderate Risk.

From the SA18979 advisory:

#1: Input passed to the “whinedays” parameter in “editparams.cgi” isn’t properly sanitised before being used in a SQL query.
What are the risks of this vulnerability?
This can be exploited to manipulate SQL queries by injecting arbitrary SQL code.
#2: The problem is that some RSS readers decode encoded HTML in feed titles.

What are the risks of this vulnerability?
This can be exploited to inject arbitrary HTML and script code, which will be executed in a user’s RSS readers session in context of an affected site when the malicious user data is viewed.
#3: The problem is that users may send login requests to an incorrect web site when the URL contains a double slash in the path name.

And what are the risks?
Successful exploitation requires that the login page is a subdirectory of the web root and that the subdirectory is a resolvable address on the user’s network.
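Bugzilla itself is written in Perl, but the ‘whinedays’ bug class and its usual fix are easy to sketch in Python with sqlite3. The table and column names below are invented for illustration, not Bugzilla’s real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE params (name TEXT, value TEXT)")
conn.execute("INSERT INTO params VALUES ('whinedays', '7')")

def update_whinedays_vulnerable(days):
    # Mirrors the bug class: untrusted input pasted into the SQL string.
    conn.execute(
        f"UPDATE params SET value = '{days}' WHERE name = 'whinedays'"
    )

def update_whinedays_safe(days):
    # The usual fix: validate the type, then use a parameterised query.
    days = int(days)                 # rejects "7' OR '1'='1" outright
    conn.execute(
        "UPDATE params SET value = ? WHERE name = 'whinedays'",
        (str(days),),
    )
```

With the safe version, an injection attempt never reaches the SQL engine: `int()` raises `ValueError` on anything that is not a plain number, and the placeholder keeps even valid values out of the query text.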

The original Bugzilla Security Advisory is located at. Because of the range of these issues, all Bugzilla installations are reportedly advised to upgrade to the latest stable version, 2.20.1. The Bugzilla advisory lists the old Bugzilla 2.16.x versions as immune, however.

This is interesting:
Related to the RSS reader encoding issue, Bugzilla “prefers to shift to Atom feeds, where the RFC is unambiguous about HTML markup in feed titles”.

The reporters of these vulnerabilities live in several countries, reflecting the worldwide Bugzilla community; for example, Teemu Mannermaa is from Finland. Mr. Mannermaa has discovered other Bugzilla issues earlier too, e.g. those fixed in version 2.16.11. Additionally, Myk Melez was listed in SA17030, published in October.
Mozilla’s current Bugzilla version is 2.20. The Linux Kernel project uses version 2.16.10, in turn. Red Hat Bugzilla is one of the popular Bugzilla sites too; according to their Web site, version 2.18-rh is in use.

The Bugzilla Team didn’t only fix security issues; the detailed Release Notes pages are located here.


PRstorm on eBay yet again..

the amazing spamhuntress writes in her blog about prstorm hitting ebay yet again, and asks: “is there anything we can do to get this piece of shit yanked once and for all?”

prstorm is about referrer spam (web spammers). here you can find more about them.

ebay link

unbelievably, people actually bid on it.

gadi evron,


The Domain Name Service as an IDS

“how dns can be used for detecting and monitoring badware in a network”

this is very interesting, although preliminary, work by obviously skilled people. i haven’t learned much, but i am extremely happy others are working on this besides the people i already know! they also weren’t too shy with credit, mentioning florian weimer and his passive dns project right in the abstract (quoted below). they even mention me, for some reason.

great paper guys!

moving past passive dns replication and blacklisting, they discuss what has so far been done for years using dnstop, and help us take it to the next level of dns monitoring.

someone should introduce them to duane wessels’ (from isc oarc) follow-up dnstop project, dsc. :)
[duane's lecture on the tool at the 1st dns-oarc workshop]

there has been some other interesting work done in this area by our very own david dagon from georgia tech:
[presentation from the 1st dns-oarc workshop] botnet detection and response – the network is the infection:
[paper] modeling botnet propagation using time zones:

surfnet is looking for technologies to expand the ways they can detect network traffic anomalies like botnets. since bots started using domain names for connection with their controller, tracking and removing them has become a hard task. this research is a first glance at the usability of dns traffic and logs for detection of this malicious network activity. detection of bots is possible by dns information gathered from the network, by placing counters and triggers on specific events in the data analysis. in combination with netflow information and ip addresses of known infected systems, detection of bots or network anomalies can be made visible. also, the behavior of a bot can be documented and additional information can be gathered about the bot. using dns data as a supplement to the existing detection systems can give more insight into the suspicious network traffic. with some future research, this information can be used to compile a case against particular types of bot or spyware and help dismantle a remote-controlled infrastructure as a whole.

we started this research project with the question of whether the passive dns software of florian weimer was useful for bot detection. we immediately found out that the sensor of the passive dns software strips the source address from the collected data for privacy reasons, making this software not useful at all for our purpose. we deviated from the research plan (plan van aanpak) and took a more general approach to the question: “is gathered dns traffic usable for badware detection?”
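the “counters and triggers” idea from the abstract can be sketched very simply. the input format and threshold below are assumptions for illustration – imagine (client_ip, queried_name) tuples parsed from resolver logs:

```python
TRIGGER_THRESHOLD = 3   # distinct clients resolving the same name (assumed)

def suspicious_names(queries, threshold=TRIGGER_THRESHOLD):
    """Names looked up by many distinct clients – a crude trigger that,
    combined with allowlists of popular domains, can surface a possible
    botnet controller name."""
    clients_per_name = {}
    for client, name in queries:
        clients_per_name.setdefault(name, set()).add(client)
    return {name for name, clients in clients_per_name.items()
            if len(clients) >= threshold}

log = [
    ("10.0.0.1", "evil-cc.example"),
    ("10.0.0.2", "evil-cc.example"),
    ("10.0.0.3", "evil-cc.example"),
    ("10.0.0.1", "www.example.com"),
]
print(suspicious_names(log))   # → {'evil-cc.example'}
```

a real deployment would, as the paper says, correlate this with netflow data and known-infected addresses rather than rely on one counter.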

gadi evron,


Brian Krebs interviews a Bad Guy

brian krebs recently interviewed a botnet controller kiddie.

these people do kill. they steal your aunty’s money and your dad’s pension. they destroy your thesis and have the feds knock on your door for crimes they committed. they cause the power to stop working. if it were plausible, i’d also like to blame them for things such as cancer and world hunger; i suppose i can’t, though. worse still, these are just the kids.

i’d love to see an interview with the russian mob and their operations.

regardless, brian did a good job, like always.

also, there is a good post about it on taosecurity.

gadi evron,


Captcha implementation of PHP-Nuke poorly written

Several security advisories about the Captcha implementation in PHP-Nuke have been released.

The original report from Janek “waraxe” Vind states:

-Quote begins-
We can see that the challenge is called “$random_num” and the response “$code” is constructed from various parts. And this algorithm means that a specific challenge will have the same response under the following conditions:

1. It must be same day (because of the “$datekey”)
2. HTTP_USER_AGENT must be the same

So how can this design weakness be exploited? First we need a working challenge/response pair from the “victim” server. For this, let’s look at the CAPTCHA picture with numbers at the login page.
Right-click on that picture and (in the case of IE) –> Properties –> Address, and we can see the picture URL, something like this:
“http: // localhost/nuke78/modules.php?gfx=gfx&random_num=112652″
-Quote ends-
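To see why the scheme falls to replay, here is a toy Python reconstruction – not waraxe’s code and not PHP-Nuke’s actual algorithm – in which the response depends only on a date key, the User-Agent and the challenge number, all of which an attacker controls or can observe:

```python
import hashlib

def captcha_response(random_num, user_agent, datekey):
    """Toy reconstruction of the flawed scheme: the 'secret' response
    is a pure function of attacker-visible inputs."""
    material = f"{datekey}:{user_agent}:{random_num}".encode()
    return hashlib.md5(material).hexdigest()[:6]

ua = "Mozilla/4.0 (compatible; MSIE 6.0)"
today = "2006-02-25"

r1 = captcha_response("112652", ua, today)
r2 = captcha_response("112652", ua, today)   # replayed later the same day
assert r1 == r2   # same day + same UA -> same response: captcha defeated
```

A single harvested challenge/response pair therefore works for automated logins all day, which is exactly the weakness the advisories describe; the standard fix is to bind the response to a random, server-side session secret instead.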

Secunia’s advice (workaround) is not to rely on the captcha feature to prevent automated logons to PHP-Nuke. SecurityFocus, in turn, warns that this flaw may be used to carry out other attacks against the login page. They list brute force attempts.

BTW: According to Secunia’s PHP-Nuke product database, 23 out of 27 Secunia advisories are currently marked as “Unpatched”.

The original captcha model (“completely automated public Turing test to tell computers and humans apart”) itself is nine years old.


2 More OS X Inqtana Variants Found

What the hell is this – let’s-target-OS-X week? This comes directly from the guys over at F-Secure labs: they’ve just found 2 more variants of the OS X worm Inqtana.A, named Inqtana.B and Inqtana.C. The only difference is the way the worm starts on the infected machine once the user has accepted the OBEX transfer.

More details on this can be found on the F-Secure blog.
Guess this means that OS X is finally being taken seriously out there – and about time too.
What’s everyone’s thoughts on all the OS X action we’ve been seeing lately?


Yet another OS X security issue.

In the last two weeks we’ve had Leap.A, Inqtana.A and now a vulnerability in the way that Apple’s web browser Safari and its mail application handle the opening and execution of certain file types by default. This issue mainly concerns the opening of .zip files on OS X, and the malicious possibilities are endless on this one.

This vulnerability was discovered by Michael Lehn. The culprit is the default configuration of Apple’s Safari web browser, in which the option to “Open ‘safe’ files after downloading” is enabled. The function of this option is to automatically display documents, spreadsheets, movies and images as soon as they are downloaded to the user’s computer, by opening them with the application associated with the file type.

The vulnerability comes into play when you store a shell script in a ZIP archive without including the ‘shebang line’ (#!/bin/bash) in the script. As soon as you omit the ‘shebang line’, Safari no longer recognises the script as potentially dangerous content, and executes the shell script without any confirmation from the user.

The shell script will get executed by a shell within the terminal application. If the user has configured Finder to open scripts with the terminal, this will happen automatically, without any intervention on the user’s part. If you give the script an extension such as “jpg” or “mov” and then store it within a ZIP archive, OS X will add a binary metadata file to the archive which determines the file’s association. This metafile instructs the operating system on any other Mac to open that file with the same application – regardless of the extension or the symbol displayed in Finder. The terminal will then redirect scripts without an interpreter line directly to bash, the standard UNIX shell in OS X.
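The heart of the problem is that a leading shebang line was effectively the only signal marking the file as a script. This hypothetical Python sketch mirrors the trick: a shell script without ‘#!’, stored in a ZIP under an innocuous name, sails past the naive check even though a shell would happily run it:

```python
import io
import zipfile

def is_scriptlike(data: bytes) -> bool:
    """Roughly the heuristic Safari relied on: a leading '#!' line."""
    return data.startswith(b"#!")

# Mirror the trick described above: a shell script, no shebang,
# stored under an innocuous-looking name inside a ZIP archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("holiday.jpg", "echo this ran as a shell script\n")

with zipfile.ZipFile(buf) as zf:
    payload = zf.read("holiday.jpg")

print(is_scriptlike(b"#!/bin/bash\necho hi\n"))  # → True  (flagged)
print(is_scriptlike(payload))                    # → False (looks 'safe')
```

Any check keyed on content sniffing this shallow fails open; the payload is still a valid script to bash, just not to the detector.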

The immediate action OS X users should take right now is to deactivate the “Open ‘safe’ files after downloading” option in the Safari preferences pane. An additional security measure is to move the terminal application from /Applications/Utilities into a different folder altogether, because the metadata file within the ZIP archive always contains the absolute path of the application to be used to open/execute the file. The only issue with doing this is that when you apply security patches/system updates to OS X, the application must be moved back into its original location, otherwise it could cause problems in applying the updates.

To determine whether you are vulnerable, Heise Security have a safe online demonstration available here. The demo attempts to open a terminal window to display the contents of a folder. If you are running OS X in its default configuration and use Safari, the window opens without any prompt to the user. The possibilities of what such a script could do are endless, and I am going to leave that part to everyone’s imagination. Feel free to submit comments on the worst possible thing you could do with a shell script running as the currently logged-on user running Safari ;-)


New MSN Search & Win campaign search site hosted in France

As some of our readers know, MSN started its Search & Win campaign exactly one week ago. The UI of the search page itself is Flash-based and it’s located at. Some details about the contest:
MSN will give users a chance to win $1 million in prizes by using this search engine. There are about 1,200 separate keywords linked to prizes per month. The campaign will end at the end of April. Reportedly, users will get information about a possible prize after submitting their search query. The prize list includes digital cameras, Xboxes, MP3 players, plasma TVs, trips etc. ‘If a link appears on the search results page with the words MSN Search & Win, click the link to see if you instantly won’, says the Help screen.

I decided to do some WHOIS queries yesterday and found a few interesting things:

1. The WHOIS results for this IP say:

inetnum: -
descr: Jaguar Network
country: FR
admin-c: JAGN-RIPE
tech-c: JAGN-RIPE
mnt-by: JAGUAR-MNT
changed: *********** 20050622
source: RIPE

The domain listed belongs to the Web site of Jaguar Network. The page is titled ‘Jaguar Network – Network Operations Center’. No other domains are hosted by this French company, according to Netcraft’s Top Sites Running report.

2. According to Netcraft this site uses Microsoft’s name servers:

DNS admin:
Nameserver Organisation: Microsoft Corporation, One Microsoft Way, Redmond, 98052, United States

There is no information about the connection between Microsoft and Jaguar Network.

3. This is for U.S. customers only. From a privacy standpoint, I’m interested in whether this contest will require detailed registration:
Bush Administration Demands Search Data; Google Says No; AOL, MSN & Yahoo Said Yes

Reportedly Yahoo! is planning a similar campaign.
Feel free to comment.

Juha-Matti Laurio


Plupii.C proved: Remarkable old Mambo CMS installations in use

Systems behind content-management-system based Web sites are not always patched. Patching delays are not measured in weeks; in fact, they run to months or more.

The XML-RPC for PHP vulnerability from June 2005 is not the only security issue being exploited in this new Linux worm case. One of the other vulnerabilities is the GLOBALS['mosConfig_absolute_path'] issue CVE-2005-0512, reported and fixed exactly one year ago. This code injection issue affects Mambo systems 4.5.2 and earlier.

At this time, Mambo defacement reports from volunteers have helped the Internet Storm Center conclude that a new Plupii variant is spreading. Sometimes even the word ‘mambo’ in the URL helps confirm that Mambo sites are the target of defacement; see new ones at etc.

A fixed Mambo version is available, but administrators simply didn’t patch their systems.


“if you are not doing anything wrong, why should you worry about it?”

our friend alex eckelberry over at sunbelt’s blog writes about houston’s police chief harold hurtt, who seems to love cameras and to think big brother is all in your mind.

“if you are not doing anything wrong, why should you worry about it?”

even i can’t deny the need or the effectiveness, and i can see how cameras can be good for the public and law enforcement protecting the public. london has been a great example of that.

still, as with many other such solutions, the perps just move elsewhere, where cameras don’t cover their every move. whether it’s another city block or another city is another issue altogether. shuffling the trouble around is always the best solution, right?

putting such technology in the hands of people who believe they should also see into your house, and that if you’d like some privacy you must be a criminal, is amusing mainly in how scary it is.

the main point being: even if the current head honcho is a nice guy and all those who work for him (or her) are cool people, who is to say their successors will be? what’s to stop them from putting cameras in our showers next? after all, do we have anything to hide? maybe we all just like to “help ourselves”…?
are there any limitations on what this will be used for after it is there? how do you enforce that?

with all the recent privacy issues in the states, finland, etc., i am becoming increasingly uncomfortable trusting those who are supposed to protect me.

i have always been a strong believer that just because solutions to a problem can potentially be abused, that is no reason not to find out what those solutions are.
as an example, most of us agree we need to fight terrorism, yet we immediately make war on any attempt to do so. instead of killing every possible suggested solution, i’d rather we fight over how it gets done.

to do it and leave it wide-open for abuse, however, should in my opinion be illegal. how you define “wide-open for abuse”, though, is problematic – but it is getting less so in some sectors with the increasing popularity of industry standardization. at least where standardization is understood, and i don’t know of many who utilize these tools and really understand what it is about.

gadi evron,