there is an on-going internet emergency: a critical 0day vulnerability, currently exploited in the wild, threatens numerous desktop systems, which are being compromised and turned into bots. the domain names hosting the exploit are a significant part of the reason why this attack has not yet been mitigated.
this incident is currently being handled by several operational groups.
this past february, i sent an email to the reg-ops (registrar operations) mailing list. the email, which is quoted below, argues that dns abuse (not the dns infrastructure itself) is the biggest unmitigated vulnerability in day-to-day internet security operations.
while we argue about this or that tld, there are operational issues of the highest importance that are not being addressed.
the following is my original email message, elaborating on these above statements. please note this was indeed just an email message, sent among friends.
date: fri, 16 feb 2007 02:32:46 -0600 (cst)
from: gadi evron
subject: [reg-ops] internet security and domain names
hi all, this is a tiny bit long. please have patience, this is important.
on this list (which we maintain as low-traffic) you guys (the
registrars) have shown a lot of care and have become, on our sister mitigation and research lists (those of you who are subscribed), an integral part of our community we now call “the internet security operations community”.
we face problems today, though, that you cannot help us solve under the current setting. only you can help us come up with new ideas.
day-to-day, we are able to report hundreds and thousands of completely bogus phishing and other bad domains, but both policy-wise and resources-wise, registrars can’t handle this. i don’t blame you.
in emergencies, we can only mitigate threats if one of you or yours is in control. just a week ago we faced the problem of the dolphins stadium web site being hacked and malicious code being put on it:
1. we tracked down all the ip addresses involved and mitigated them (by we i mean also people other than me. many were involved).
2. we helped the dolphins stadium it staff take care of the malicious code on their web page (thanks specifically to gary warner).
3. we coordinated with law enforcement.
4. we coordinated that no one does a press release which will hurt law enforcement.
5. we did a lot more, including actually convincing a chinese registrar to pull one of the domains in question. a miracle. there was another domain we tried to mitigate, unsuccessfully.
one thing though – at a second’s notice, this could all be for nothing as the dns records could be updated with new ip addresses. there were hundreds of other sites also infected.
even if we could find the name server admin, some of these domains have as many as 40 ns records. that doesn’t make life easy. then, these could change, too.
this is the weakest link in internet security today: in most cases we can’t mitigate it, and the only mitigation route is the domain name itself.
every day we see two types of fast-flux attacks:
1. those that keep changing a records by using a very low ttl.
2. those that keep changing ns records, in much the same way.
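both patterns can be spotted from passively collected answers. a minimal heuristic sketch, assuming you already log (record type, value, ttl) tuples per domain over a monitoring window; the thresholds below are illustrative, not tuned:

```python
from collections import defaultdict

def looks_fast_flux(observations, max_ttl=300, min_distinct=5):
    """Flag a domain whose A or NS answers churn rapidly.

    observations: list of (record_type, value, ttl) tuples collected
    for one domain over a monitoring window. Thresholds are
    illustrative assumptions, not tuned values.
    """
    distinct = defaultdict(set)
    saw_low_ttl = False
    for rtype, value, ttl in observations:
        distinct[rtype].add(value)
        if ttl <= max_ttl:
            saw_low_ttl = True
    # type 1 above: many distinct A records behind a very low ttl.
    # type 2: the NS set itself rotating the same way.
    return saw_low_ttl and any(
        len(values) >= min_distinct for values in distinct.values()
    )
```

this only flags candidates; legitimate cdns also rotate low-ttl a records, so a human (or more context) still has to make the call.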
now, if we have a domain which can be mitigated to solve such
emergencies and one of you happen to run it, that’s great…
however, if we end up with a domain not under the care of you and yours.. we are simply.. fucked. sorry for the language.
icann has a lot of policy issues as well, and the good guys there can’t help. icann has enough trouble taking care of all those who want money for .com, .net or .xxx.
all that being said, the current situation can not go on. we can no longer ignore it nor are current measures sufficient. it is imperative that we find some solutions, as limited as they may be.
we need to be able to get rid of domain names, at the very least during real emergencies. i am aware how it isn’t always easy to distinguish what is good and what is bad. still, we need to find a way.
members of reg-ops:
what do you think can be conceivably done? how can we make a difference which is really needed on today’s internet?
please participate and let me know what you think, we simply can no longer wait for some magical change to happen.
thousands of malicious domain names and several weeks later, we face the current crisis. the 0day vulnerability is exploited in the wild, and mitigating the ip addresses is not enough. we need to be able to “get rid” of malicious domain names. we need to be able to mitigate attacks on the weakest link – dns, which are not necessarily solved by dns-sec or anycast.
on reg-ops and other operational groups, we came up with some imperfect ideas on what we can make happen on our own in short term which will help us reach better mitigation, as security does not seem to be on the agenda of those running dns:
1. a system by which registrars can acknowledge confirmed bad domains (under strict guidelines) and respond to the reports according to their aup and icann policy, thus “getting rid” of them in a much quicker fashion, is being set up at the isotf.
a black list for registrars, if you will. this is far from perfect and currently slow-going. naturally, this cannot be forced on all registrars, nor do the black-hat ones care.
2. a black list for resolvers (hopefully large service providers) is also being created at the isotf, so that the risk of visibility of bad domains, as will be defined, can be minimized. naturally, no provider can be forced to use this list and there are millions of unaffiliated resolvers, etc.
other options that have been raised as technically possible, but considered unlikely and indeed, bad:
3. setting up a black list of domain names for tld servers, for them not to respond on.
4. creating an alternate root which we could trust.
another suggestion which was raised:
5. apply to change the icann policy.
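items #1 and #2 above both reduce to checking queried names against a shared bad-domain list. a minimal sketch of the resolver-side check, assuming the list is a simple set of domains (the actual isotf feed format is not described in this post):

```python
def is_blocked(qname, blocklist):
    """Check a queried name against a shared bad-domain list.

    Matches the exact name and every parent domain, so listing
    'bad.example' also covers 'www.bad.example'. The set-of-strings
    list format is an assumption for illustration.
    """
    labels = qname.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in blocklist:
            return True
    return False
```

a resolver using this would answer nxdomain (or refuse) for blocked names; as noted above, no provider can be forced to deploy it, which is exactly the limitation of the approach.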
we need a solution. this operational issue needs to be added as a main agenda item today so that tomorrow we will be ready to mitigate it. i blame myself to some degree for not raising this with higher echelons 2 and 3 years ago, out of respect for those who have been working on dns for many years, but what’s done is done.
the operational communities do not always know how to voice their needs or the difficulties they face. nor will everyone agree on what the issues are. it is my strong belief (which is obviously my personal opinion), based on facts we see in daily security operations on the internet that this issue is paramount, and i am sending here a call for help to the dns experts of the world: what is our next step to be?
what do we currently intend to do (not my personal opinion):
we are formalizing a letter to icann’s ssac, as they are the top experts on dns infrastructure security issues, coming from operational folks at the isotf who deal daily with abuse of the dns (and specifically fast-flux).
further, the isotf is moving forward with items #1 and #2 as mentioned above. #3 will have to remain a contingency, #4 we have no influence over, and #5 is currently being explored.
are we missing a possible solution? what does the larger community suggest?
jamie riden, ryan mcgeehan, brian engert and michael mueter just released an honeynet paper on web security called: know your enemy: web application threats.
you can find their pdf document here:
the paper is very good, and deals with all kinds of web threats such as sql injection and xss. of most interest to me were the code injection and remote code-inclusion sections; as you may remember, we published a paper of our own this month on these specific issues in virus bulletin magazine. the honeynet paper deals with many issues other than these, and is most definitely recommended reading.
in our paper we linked to an older paper by jamie riden. these guys know what they are talking about.
we touched on this subject in the past, but recently rich kulawiek wrote a very interesting email to nanog to which i replied, and decided to share my answer here as well –
i stopped really counting bots a while back. i insisted, along with many friends, that counting botnets was what mattered. when we reached thousands, we gave that up.
we often quoted anti-nuclear weapons proliferation sentiments from the cold war, such as: “why be able to destroy the world a thousand times over if once is more than enough?” we often also changed it to say “3 times” as redundancy could be important. :>
today, it is clear the bad guys can get their hands on as many bots as they need, or in a more scary scenario, want. they don’t need that many.
as a prime example, i believe verisign made it public that only 200 bots were used in the dns amplification attacks against them last year. even if they were off by one, two or even three zeroes, it says quite a bit about our fragile infrastructure.
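a back-of-envelope calculation shows why 200 bots can be enough. the uplink and amplification figures below are illustrative assumptions, not data from the verisign incident:

```python
def attack_bandwidth_gbps(bots, uplink_mbps, amplification):
    """Rough reflected-traffic estimate for a dns amplification attack:
    each bot's uplink is multiplied by the amplification factor.
    All inputs are assumptions for illustration, not measurements.
    """
    return bots * uplink_mbps * amplification / 1000.0

# e.g. 200 bots, a modest 1 mbps uplink each, and a 50x
# amplification factor already yield double-digit gbps of traffic.
```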
As we already know, vulnerabilities are evolving. In the past, the worst case we could imagine was a vulnerability in a service we run on our own servers. After 2000, increasing worm and DDoS trends in the vulnerability market shifted our priority to the rest of the Internet: clients. When we remember the worms and DDoS attacks which paralyzed backbones, it’s clear that we now expect the worst case from client-side threats.
The most popular vulnerability of this past week (ignoring MS patches) seems to be SunOS telnetd. In IRC channels and security forums, people are saying “Woaaw! Hey! You heard that?” Everybody is talking about it.
Another vulnerability published this week got lost in the SunOS noise: uTorrent.
read this before reading this blog entry.
this was posted to bugtraq today. let’s see what this is about…
date: thu, 15 feb 2007 13:02:46 -0800
from: zulfikar ramzan
subject: drive-by pharming threat
once the user’s machine receives the updated dns settings from the router (e.g., after the machine is rebooted) future dns request are made to and resolved by the attacker’s dns server.
the main condition for the attack to be successful is that the attacker can
guess the router password (which can be very easy to do since these home
routers come with a default password that is uniform, well known, and often
never changed). note that the attack does not require the user to download
any malicious software – simply viewing a web page containing the malicious
code is enough.
we’ve written proof of concept code that can successfully carry out the
steps of the attack on linksys, d-link, and netgear home routers. if users
change their home broadband router passwords to something difficult for an
attacker to guess, they are safe from this threat.
additional details on the attack can be found at:
sr. principal security researcher
advanced threat research
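one cheap victim-side detection for this class of attack is comparing the resolvers a machine actually uses against the ones it should be using. a minimal sketch, assuming you can obtain the configured resolver list by some device-specific means (not shown here):

```python
def rogue_dns_servers(configured, trusted):
    """Return configured resolvers that are not in the trusted set.

    'trusted' would come from your isp or local policy; after a
    drive-by pharming attack, the difference would contain the
    attacker's dns server. This is a pure comparison - actually
    reading the router's settings is device-specific and omitted.
    The addresses in the test are documentation-range placeholders.
    """
    return sorted(set(configured) - set(trusted))
```

the hard part in practice is building the trusted list, not the comparison; this only sketches the shape of the check.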
in discussions of this issue, fergie (paul ferguson) said, and i replied:
on fri, 16 feb 2007, fergie wrote:
> i don’t know — i found this whole “report” somewhat dubious, if
> not downright opportunist: hasn’t this “vulnerability” basically
> existed since, like, forever?
> i write it off as marketing opportunism… among other things.
well, duh. think of rsa and the “brand new” idea they did a pr about – a phishing mitm kit (think phishing: user >> fake site >> bank).
nothing is really new in security, we have seen malware/etc. change the hosts file for years now, not to mention domain hijacking.
we have also seen wireless brute-forcing/etc./what-not.
the one thing about the folks at symc who did this release is that they actually know their ****. meaning, someone took these two technology ideas and made something new from them, which is:
break into wireless routers and put your dns server in them for hijacking purposes. symantec just reported it to us.
it’s cool, it’s “new” and it won’t be a huge problem quite yet.
i remember a thread from nanog a couple of years back where i mentioned that google and all these other national/international wireless providers had better be ready with physical operational folks who can track down rogue aps, etc. cop cars with triangulation devices?
it was a vulnerability waiting to happen which wasn’t exploited, meaning it didn’t get much attention. this is much like the days when bots were trojan horses, as botnets didn’t yet exist.
wireless used to be used for hacking into a network-connected machine; now it is suddenly used for the sake of it being wireless. still network-connected as a goal, but it is no longer just tcp/ip which plays the game.
good news: these are dns servers we can take-down. fun, yet another escalation war.
this is very interesting, although not too exciting. nice work by the guys at symantec.
websense just released a blog post on how sites get defaced for malicious purposes other than the defacement itself, such as installing malicious software on visiting users.
this is yet another layer of abuse of web server attack platforms.
you can find their post here:
are file inclusion vulnerabilities equivalent to remote code execution? are servers (both linux and windows) now the lower-hanging fruit rather than desktop systems?
in the february edition of the virus bulletin magazine, we (kfir damari, noam rathaus and gadi evron (me) of beyond security) wrote an article on cross platform web server malware and their massive use as botnets, spam bots and generally as attack platforms.
web security papers deal mostly with secure coding and application security. in this paper we describe how these are taken to the next level with live attacks and operational problems service providers deal with daily.
we discuss how these attacks work using (mainly) file inclusion vulnerabilities (rfi) and (mainly) php shells.
further, we discuss how isps and hosting farms suffer tremendously from this, and what can be done to combat the threat.
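the fix for the file inclusion class described above is conceptually simple: never feed the user-supplied page parameter into an include call; map it through a whitelist instead. a language-neutral sketch in python (the vulnerable pattern itself is php-specific and omitted; names here are hypothetical):

```python
def safe_include_target(param, allowed_pages):
    """Map a user-supplied page parameter onto a fixed whitelist.

    Remote file inclusion works because vulnerable code passes
    'param' straight into an include/require call, so a value like
    'http://evil.example/shell.txt?' executes attacker-hosted code.
    A strict whitelist makes that value simply fall through to the
    default page. 'allowed_pages' is an illustrative structure.
    """
    if param in allowed_pages:
        return allowed_pages[param]
    return allowed_pages["default"]
```

the same idea applies whatever the language: the attacker-controlled string selects from a closed set and is never treated as a path or url.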
important note: the name of the web honeynet project has been changed to the web honeynet task force to avoid confusion with the honeynet project.
[ warning: this post includes links to live web server malware propagated this wednesday via file inclusions exploits. these links are not safe! ]
the newly formed web honeynet project from securiteam and the isotf will in the next few months announce research on real-world web server attacks which infect web servers with:
tools, connect-back shells, bots, downloaders, malware, etc., all of which are cross-platform (for web servers) and currently exploited in the wild.
the web honeynet project will, for now, not deal with the regular sql injection and xss attacks every web security expert loves so much, but just with malware and code execution attacks on web servers and hosting farms.
these attacks form botnets constructed from web servers (mainly iis and apache on linux and windows servers) and transform hosting farms/colos into attack platforms.
most of these “tools” are being injected by (mainly) file inclusion attacks against (mainly) php web applications, as is well known and established.
php (or scripting) shells have been known for a while, as has the file inclusion (or rfi) attack class. however, they were mostly treated as something secondary, and not much (if any – save for some blogs and a few mailing list posts a year ago) attention was given to the subject beyond the vulnerabilities themselves.
the bad guys currently exploit, build botnets and deface in massive fashion, forcing isps and colos to combat an impossible situation in which any (mainly) php application from any user can expose entire server farms, and in which the web vulnerability serves either as a remote exploit to be followed by a local code-execution one, or as a direct one.
what is new here is the scale, and the fact that we now start engaging the bad guys on this front (on which, so far, they have been unchallenged) – meaning that aside from research, the web honeynet project will also release actionable data on offensive ip addresses, urls and the tools themselves, to be made available to operational folks so that they can mitigate the threat.
it’s long overdue that we start the escalation war with web server attackers, much like we did with spam and botnets, etc. years ago. several folks (and quite loudly – me) have been warning about this for a while; now it’s time to take action instead of talk.
note: below you can find sample statistics on some of the web honeynet project information for this last wednesday, on file inclusion attacks seeding malware.
you will likely notice most of these have been taken care of by now.
the first research on the subject (after looking into several hundred such tools) will be made public on the february edition of the virus bulletin magazine, from:
kfir damari, noam rathaus and gadi evron (yours truly).
the securiteam and isotf web honeynet project is supported by beyond security ( http://www.beyondsecurity.com ).
special thanks (so far) to: ryan carter, randy vaughn and the rest of the new members of the project.
for more information on the web honeynet project feel free to contact me.
also, thanks for yet others who helped me form this research and operations hybrid project (you know who you are).
sample report and statistics (for wednesday the 10th of january, 2007):
ip | hit count | malware (count), … |
22.214.171.124 | 12 | http://m embers.lycos.co.uk/onuhack/cmd1.do? (4),
http://m embers.lycos.co.uk/onuhack/injek.txt? (6),
http://m embers.lycos.co.uk/onuhack/cmd.do? (2),
126.96.36.199 | 11 | http://w ww.clubmusic.caucasus.net/administrator/cmd.gif? (more…)
1. at ccc last week raven alder gave a talk on the subject (router and infrastructure hacking), which was pretty neat!
i figure some of you may enjoy this. i hope the video for her talk becomes available soon.
2. there was also a lecture on sflow, by elisa jasinska:
presentation and paper:
3. i do wish the talk on how ccc set up their multiple-uplink gige network for the conference had been filmed. i call this type of “create an isp in 24 hours” – in a very, very hostile and busy environment such as defcon or ccc – “extreme networking”.
they got their own asn for 4 days. set up a hosting farm, surfing, mass wireless, etc. for users, and what-not. discovered a wireless network vulnerability, a router dos with nexthop memory issues, etc.
not to mention having to fight off ddos attacks non-stop, fake aps, thousands of active and abusive users, and bgp issues (i really liked their presentation using ripe’s bgplay – very cool stuff - http://www.ris.ripe.net/bgplay/ ).
3000 end points. 1.6 gigs up, 1.0 gigs down.
their slides are up at:
as mentioned before, ccc itself was very good and a lot of fun. there are many other presentations and videos available for download:
other presentations i enjoyed, which i just noticed online:
pdf george danezis, introducing traffic analysis
wmv gadi evron, fuzzing in the corporate world (yes, mine)
What do they have in common?
hey, do i smell history repeating itself? bots on irc used to be useful too, and were then used for local flooding. only later did they become the botnets that they are today.
so, from automated playing when you are not around to keep stuff active (rings a bell?) to botnets that throw… privates at people.
worth a read. i always love when the real world and the virtual one meet, whether by in-game marriages or by physical-world police taking complaints because “someone stole my weapon on world of warcraft!!”
we do live in interesting times.
in this post ( http://www.phenoelit.net/lablog/irresponsible.sl ), fx describes a drop zone for a phishing/banking trojan horse, and how he got to it.
go fx. i will refrain from commenting on the report he describes from secure science, which i guess is a comment on its own.
we had the same thing happen twice before in 2006 (at least, twice that can be mentioned in public).
the first time with a very large “security intelligence” company giving out drop zone data in a marketing attempt to get more bank clients (“hey buddy, why are 400 banks surfing to our drop zone?!?!”).
the second time with a guy at defcon showing a live drop zone, plus the data analysis for it, and asking for it to be taken down (it wasn’t, until a week later during the same lecture at the first isoi workshop, hosted by cisco). in this guy’s defense, though, he was sharing information. at a time when nearly no one was aware of drop zones, even though they had been happening for years, he openly shared commercially valuable data and allowed others to clue up on the threats.
did anyone ever consider that this is an intelligence source, and that a take-down might not be exactly the smartest move?
it’s enough that the good guys all fight over the same information, and that even the most experienced security professionals make mistakes costing millions of usd daily – but publishing drop zone ips publicly? that can only result in a lost intelligence source, and the next one being, say, not so available.
i believe in public information and the harm of over-secrecy; i am, however, a very strong believer that some things are secrets for a reason. what can we expect, though, when the security industry is 3 years behind and we in the industry are all a bunch of self-taught amateurs having fun with our latest discoveries?
at least we have responsible folks like fx around to take care of things when others screw up.
i got tired of being the bad guy calling “the king is naked”, at least in this case we can blame fx.
it’s an intelligence war people, and it is high time we got our act together.
i will raise this subject at the next isoi workshop hosted by microsoft
( http://isotf.org/isoi2.html ) and see what bright ideas we come up with.
Posted on December 23rd, 2006 by SecuriTeam
a few months back i released a post on where i think anti-botnets technology is heading. now it’s time for what happened in 2006, and what we can expect from here on.
i am not a believer in such retrospective looks, as often, they are completely biased and based on what we have seen and what we want to see. this is why i will try and limit myself to what we know happens and is likely to get attention, as well as what we have seen tried by bad guys, which is working for them enough to take to the next level.
what changed with botnets in 2006:
1. botnets reached a level where it is unclear which parts of the internet are not compromised to some extent. it is easier to count the clean machines than the infected ones.
2. botnets have become the most significant platform from which virtually any type of online attack and crime are launched. botnets equal an online infrastructure for abusive or criminal activity online.
3. in the past year, botnets have become mainstream. from a field that was nonexistent even in the professional realm up to a few years ago (even though attacks were happening constantly regardless), it has turned into the main buzzword and occupation of the security industry today, directly and indirectly.
4. websites have returned to being one of the most significant forms of infection for building botnets, which hadn’t been the case since the late 90s.
5. botnets have become the moving force behind organized crime online, with a low-risk high-profit calculation.
6. new technologies are finally being introduced, moving the botnet controllers from just (or mainly) irc to more advanced c&c (command and control) channels such as p2p, or multi-layered designs combining, say, dns and irc.
7. botnets used to be a game of quantity. today, when quantity is assured, quality is becoming a high concern for botnet controllers, both in type of bot as well as in abilities.
what’s going to happen with botnets in 2007:
botnets won’t change. all will remain much as it has been for years. awareness, however, will increase, making the problem appear larger and larger, perhaps approaching its real scale. the bad guys will utilize their infrastructure to get more out of the bots (quality, once quantity is assured) and to do more than just steal cash, maximizing their revenue.
further, more and more attackers unrelated to the botnet controllers will make use of already-compromised systems and existing botnets to gain access to networks, facilitating anything from corporate espionage and intelligence gathering to shameless, open shows of strength against those who oppose them (think blue security) – in the real world as well as the cyber one (which to the mob is one and the same; it’s the income that speaks).
meaning, the existing botnet infrastructure will be utilized both in an open fashion – since online miscreants (the real-world mob) face virtually no risk – and in quiet, secretive third-party intelligence operations.