2005's BlackHat books, got 'em?

There’s a rumour going around about Michael Lynn doing a book signing at this year’s defcon.

What will he be signing, you ask? Why, last year’s BlackHat books. Yes, the ones with the pages of his presentation torn out! :)
If the whispers are to be believed, the income from this book signing will be donated to the EFF! Now, ain't that cool?

In our opinion some of that money should go to cover Mike’s huge legal costs due to Ciscogate, but we are just rumour mongers! What do we know?

We wonder how much these will sell for on eBay, before and after the signing. If any are listed now, their price is about to go up!


Practical DDoS mitigation techniques (and an interesting paper)

if you are an expert on distributed denial of service attacks, please skip the text and go directly to the link at the bottom.

the advice below covers only bulk ddos attacks, where a mass of traffic is hurled your way. it is not aimed at isp's, but rather at smaller operations.

important note:
first do what you don't have to pay for. once you've exhausted what you can do by yourself, check whether further protection is necessary; a call to your isp might suffice.

should ddos concern you?
i may write more about this at a later date, but have you ever been under such an attack? if you don't know, that is your first problem. are you big enough to worry about such attacks? if you run a mom & pop business, probably not. if most of your networking concerns are handled by your isp – most likely not. if you operate a large network, you should most likely create a plan for handling a possible future attack.

i personally believe in:
[in no special order]

1. ddos mitigation mechanisms:

you get what you pay for. get a decent ddos protection mechanism and it will help you survive.
what’s cheap might end up costly.
i personally prefer cisco guard (formerly riverhead guard).

please check suggestion #4 below.

2. bandwidth. bandwidth. bandwidth.

the more bandwidth you have, the more secure you will be. this is not a solution, but it works.
some buy so much bandwidth that if they go down, it is likely the rest of the network is down as well.

you don’t have to buy the world, but it is good practice to keep ahead of your regular day-to-day bandwidth needs.

3. better routers.

still own an ancient router that would die if it so much as faced a port scan? maybe it is time to get a new one. adding extra ram may be a good alternative.
note: don't buy what you don't need. how would you know what you need? well, you should know your own network, or hire someone to help you with that.

4. better relations with your isp.

maybe you can't afford ddos mitigation mechanisms… maybe you can. whatever the case, having a good relationship with your isp, so that they can help you mitigate an attack, is a great idea. knowing who to call at the isp ahead of time is also a good idea.

your isp won’t help? change isp’s.

5. configure your applications (and routers) securely.

as an example, make sure your web application doesn't hammer the database. also test your web servers for load handling. if your server can't take it, check why. is it the hardware? the network? is the application badly written or misconfigured? basically, what is the bottleneck, and is there a specific failure point?
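
to illustrate, here is a minimal load-test sketch in python. the numbers and the `request` callable are placeholders – point it at whatever actually hits your own server (an http get, a db query, etc.) and watch where the latency curve bends:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(request, total=100, concurrency=10):
    """fire `total` calls of `request` across `concurrency` threads and
    summarize latency -- a crude way to spot your bottleneck before an
    attacker does."""
    def timed(_):
        start = time.perf_counter()
        request()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed, range(total)))

    return {
        "median": statistics.median(latencies),
        "p95": latencies[int(len(latencies) * 0.95)],
        "max": latencies[-1],
    }
```

run it at several concurrency levels; if p95 explodes while median stays flat, you have a queueing bottleneck rather than a raw capacity one.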

check out team cymru’s router configuration examples: http://www.cymru.com.
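
for flavor, here is an illustrative ios-style fragment in the spirit of those templates. this is a sketch only – interface names are examples, and you should read the actual cymru templates before touching a production router:

```
! illustrative fragment only; not a drop-in config.
ip cef
!
interface FastEthernet0/0
 description customer-facing link
 ! drop packets arriving with spoofed source addresses (strict uRPF, needs CEF)
 ip verify unicast source reachable-via rx
!
! turn off services that leak information or amplify attacks
no service tcp-small-servers
no service udp-small-servers
no ip http server
no ip source-route
```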

that's just the tip of the iceberg; good security starts with good planning and continues with testing.

there's more, but first make sure you have covered your bases and talked to your own isp (anyone see a trend here?).

if ddos is your thing, check out this new interesting paper.

also, if you fear online extortion please consider my take on the issue:

gadi evron,


Cisco, haven’t we learned anything? (technician reset)

in this recent cisco advisory, the company alerts us to a security problem with cisco mars (cisco security monitoring analysis and response system).

the security issue is basically a user account on the system that will give you root when accessed.

the account is:
1. hidden.
2. default.
3. with a pre-set password.

in other words, this is a journey back 10 years, to when technicians would commonly have special keys (actual keys, electronic ones, or passwords) to access a device if they had to troubleshoot it for anything, or say… the user lost his password.

people used to trade these keys online, and hidden accounts were common practice. today people still trade commonly used default passwords, but it is not as popular as it used to be, at least in the online world.

on the other hand, the most common way to hack routers today is still to try to access the devices with the notoriously famous default login/password for cisco devices: cisco/cisco.

cisco/cisco is the single most used default password of our time. it has gotten more routers pwned than any exploit in history, and it still does. one would think that a company such as cisco, especially with this history, would stay away from such “default” accounts… but the fact that this account is hidden makes it something different.
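
for illustration, a defensive audit of your own inventory can start by flagging devices still on factory defaults. a minimal python sketch – the default list here is a tiny illustrative sample, not a real database, and the inventory format is my own invention:

```python
# flag credentials that match well-known vendor defaults.
# DEFAULTS is illustrative, not exhaustive -- extend it for your gear.
DEFAULTS = {
    ("cisco", "cisco"),
    ("admin", "admin"),
    ("admin", "password"),
}

def is_default_credential(username, password):
    """return True if the username/password pair is a known factory default."""
    return (username.lower(), password) in DEFAULTS

def audit(devices):
    """given {hostname: (username, password)}, return the hostnames
    still running on factory defaults."""
    return [host for host, (user, pw) in devices.items()
            if is_default_credential(user, pw)]
```

anything this flags should be re-credentialed before someone else does it for you.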

it makes it a backdoor. one much like those used by the bad guys.

now… if cisco knowingly put it there, shame on them. if somebody put it there without their knowledge… well, shame on them.

this is indeed a vulnerability, as in a weakness. it is not however a software coding bug that may result in say… a buffer overflow. it is a part of the design of the system.
cisco disclosing this is very nice and commendable, but perhaps they should also let us know whether this was indeed a backdoor somebody put in their system or if it was part of the design?

i love easter eggs. i just don't like surprises in system privileges or backdoors, especially not in a security monitoring and response product.

i very much doubt it was anything but a part of the design, but that should be admitted to.
as the advisory states:

no other cisco products are currently known to be affected by this vulnerability.

okay, but how about other vulnerabilities of this type? are there any more backdoors in other cisco products?
if not, why wouldn’t they just come out and say that?
“there are no other such backdoors in our products”.

i’d even be happy with:
“to our knowledge, there are no other vulnerabilities of this type in our products.”

this is not a bug. one can never be sure all bugs are eliminated — however hard one may try.
one can admit to having no such features in other products, though.

once again we see a feature renamed as a bug, or a bug as a feature, to make the problem sound less severe.

in this case, the judgement is plain and simple:
if cisco were bad guys, this is a backdoor.
as cisco are good guys, this is a technician reset.

terminology? what’s the difference?

the difference is that cisco are not bad guys. if they disclose a problem they should do it fully, because as a client, i am now concerned.

this reminds me of ciscogate but not for obvious reasons. that was a bad event for everybody involved.
it reminds me of the very issue mike lynn discussed:
remote exploitation of cisco routers is possible, while so far cisco has disclosed all these problems as dos vulnerabilities.
i am not saying cisco did that on purpose, but in this case they can set my mind at ease.

why don’t they?


after writing this i've been made aware that this product came from a company cisco bought not long ago. this very same issue has happened before, more than once… one recent example involved another company cisco bought, named riverhead.

it is true that cisco's psirt is one of the best to work with among vendors; even mike lynn said that cisco psirt are some of the more decent people he has worked with – “i've never had a problem with psirt”.

it is also true that cisco can't find out about these issues until after they buy the companies. still, cisco f*cked up, more than once or twice, and we're calling it out. this kind of so-called “vulnerability” should not keep happening, or keep being disclosed in this particular fashion.

vetting new acquisitions security-wise, especially security products, and bringing in external qa, may help solve such issues in the future.

gadi evron,


Payback for Ciscogate – new trend?

on the surface it seems that in recent weeks people have started going full-disclosure on cisco, surprising them with vulnerability reports on bugtraq and friends. i may be wrong and they knew of these ahead of time… if i am, forgive me. it seems like “payback time” or “loss of faith” after “ciscogate”.

this possible trend is more than just disturbing; it's dangerous to us all when it comes to a company like cisco… whether they “deserve” it or not is irrelevant. their products make up most of the internet's infrastructure, and that by itself is a problem.

today when microsoft truly /wants/ to work with researchers (even if sometimes they don’t act it), the main problem they face is that researchers simply don’t believe in them. they are used to hearing things like:
“this is not a vulnerability”
“yes, we are already aware of that” (=and that is why you won’t get credit)
and many other responses, although sometimes people don’t even get a response.

myself, i have never had such problems with microsoft; i found them very responsive and serious in their replies, at least in recent years. but that's just my personal experience, and that doesn't count. :)

with cisco, it can get worse. researchers may fear that if they do get a response (or work with psirt) it will come with some sort of legal document or a search warrant. still, cisco is responsive, and i don't much like full disclosure being used against companies that actually handle reports and give due credit to researchers.

i suppose only time will tell where this ends, but it seems that, much as predicted by mike lynn, raven alder and myself, cisco exploits are going to become a very serious concern for the infrastructure in the near future.

i believe people should give cisco psirt a *chance* before going public with vulnerabilities… but if they don’t i suppose cisco and everyone else learned a valuable lesson.

what that lesson may be is a whole different blog entry. not many had a grudge against cisco before ciscogate… and lost faith is very difficult to recover.

gadi evron,


On “Responsible Disclosure”: Stripping the Veil From Corporate Censorship

If you keep up with Microsoft’s Security Advisory releases (most recently Advisory #911302), you’ll note the following disturbingly typical portion:

Microsoft is concerned that this new report of a vulnerability in [insert product] was not disclosed responsibly, potentially putting computer users at risk. We continue to encourage responsible disclosure of vulnerabilities. We believe the commonly accepted practice of reporting vulnerabilities directly to a vendor serves everyone’s best interests. This practice helps to ensure that customers receive comprehensive, high-quality updates for security vulnerabilities without exposure to malicious attackers while the update is being developed.

Microsoft has included such wording in each and every one of its security advisories that is relevant to a public disclosure and will continue to do this for the foreseeable future. It is rapidly becoming evident that what Microsoft defines as “responsible” is “conforming to the company’s wishes”. The language, aside from being overtly hostile toward a number of talented and professional researchers, is a slap in the face dealt to real efforts for “responsible disclosure”. Microsoft’s public claims to a monopoly on the moral standard of “responsibility” not only cost the company a substantial amount of credibility within the community, but also harm the efforts of researchers who seek real reform in the vulnerability disclosure process.

In the case of 911302, the ‘report of a vulnerability’ Microsoft cites is information published by a British firm regarding the Window.OnLoad Race Condition in its Internet Explorer browser. The catch that Microsoft fails to mention? The vulnerability had already been reported publicly after Microsoft discounted it as a non-exploitable flaw. The lag time between the two reports also hurts Microsoft’s case: the issue has been known since May, and the code execution possibility was reported in November.

So, in the case of 911302, Microsoft is complaining because it failed to consider the possibility that a class of race conditions (those that reliably produce calls to free portions of the virtual address space) that has historically proven exploitable would prove equally dangerous in this instance. Microsoft failed to do its homework, and then chastised the British firm (ComputerTerrorism.com) for exposing the company’s gross negligence in its handling of this vulnerability.

While I think CT should have notified Microsoft, its reasons for not doing so are compelling. A large portion of the exploit vector was already publicly known — so much so that CT’s work had probably been accomplished by other malicious actors or was trivially achievable. The malicious members of the community had the same six months that Microsoft had to identify the exploitability of this flaw. As CT’s research illustrates, Microsoft’s disinterest in the flaw was not shared by the community. Therefore, Microsoft’s claims that CT was “irresponsible” (very explicit in its advisory) are brazen at best, flat out wrong at worst.

But Microsoft isn’t the only major corporate organization trying to muzzle researchers by way of public character assassination. Remember Michael Lynn, the researcher sued by Cisco for violating supposed industry standards of “responsible disclosure”? Lynn’s only crime was publishing an exploit for a long-fixed vulnerability in Cisco’s IOS after Cisco failed to acknowledge the hole in release materials for the relevant IOS update.

Remember SnoSoft? The group was threatened with legal action by Hewlett-Packard after exploit code for HP’s software leaked from its laboratories.

When these practices are criminalized, the meaning of “responsible disclosure” has clearly been co-opted by corporate interests to mean “what is deemed acceptable by the affected vendor.”

To further illustrate this, I offer you a hypothetical scenario:

A vendor was informed of a vulnerability in its software in early August. The vulnerability was of exceptional severity, and yet the vendor failed to acknowledge this fact. Though a fix was planned, the vendor made no effort to coordinate the release of fixes for different affected products and would offer no immediate timeline for release. In February, 180 days later, the vulnerability is disclosed to the public, with fully-applicable workarounds, in the absence of a vendor-supported fix.

If that vendor were Microsoft, how many people can seriously doubt that we’d be seeing the same exact wording replicated in the advisory on that vulnerability?

The irony of this, of course, is that Microsoft, HP, Cisco, et al, are shooting themselves in the foot. All of those named would do well to give up the deluded vision that the world will soon return to a culture of non-disclosure, granting vendors indefinite timeframes and the absolute freedom to (mis)handle vulnerability information as they choose. History and today’s experience both tell us that trust in vendors on security issues is naive and misplaced.

Unfortunately, the insistence of vendors on using the term “responsible disclosure” as a tool of their hopeless agendas undermines what little hope any of them have to see real reform in the way vulnerability information is handled.

So, if the corporate agenda doesn't qualify, what is responsible disclosure? What better source for a community standard than CERT? It's one of the few bodies with some credibility in the research community that is also generally respected by vendors. CERT sets a 45-day baseline for disclosing vulnerability information. While this is, in practice, rather toothless, I wish CERT would stick to it, and I wish more members of the community would adopt this relatively moderate standard more rigidly than CERT itself has done.

Using a community clearinghouse as the source of a semi-standard approach to “responsible disclosure” would force vendors to explain why they consider the disclosure policy of an industry leader “irresponsible”, undermine their legal claims and subject them to large amounts of bad press. Vendors who fail to acknowledge this policy as de facto standard could be handled mercilessly by both the community and the legal system, with clear basis in community standard.

In addition to debunking false vendor claims of “irresponsible disclosure”, this standard could also be used to establish community precedent that vendors have an obligation to promptly fix vulnerabilities. Any that choose instead to publicly demonize researchers should face a taste of their own medicine — in the form of lawsuits — for this slanderous conduct.

It is time that the vulnerability disclosure debate moved from special interests into the open community, because it is only then that we can hope for a standard of truly responsible disclosure that offers customers real protection and forces some degree of accountability upon commercial vendors for the effects of their ineffective security processes.


Router worms and International Infrastructure

hello all, this is my first blog posting. i am writing it to highlight some concerns i have regarding the security of the internet.

i recently raised this subject on nanog, bugtraq and our own fun list – funsec.

a while back i emailed the following text to a closed mailing list. i figure that now that quite a few cats are out of the bag, it is time to get more public attention on these issues, as the bad guys will very soon start doing just that.

ciscogate by itself, and now even just a story about worms for routers, is enough to make it clear that worms will start coming out. we do learn from history.

so… as much as people don't like to talk about the so-called “cooler” stuff that can be done with routers, now is the time to start.

here is one possible and simple vector of attack that i see happening in the future. it goes downhill from there.

i wrote this after the release of “the three vulnerabilities”, a few months back. now we know one wasn't even just a ddos, and that changes the picture a bit.

begin quoted text —–>>>

more on router worms – let's take down the internet with three public pocs and some open spybot source code.

people, i have given this some more thought.

let's forget for a second the fact that these vulnerabilities are dangerous on their own (although each is just a dos), and consider what a worm could cause.

if the worm used the vulnerability itself to spread, it would shoot itself in the foot: when the network is down, it can't spread.

now, imagine a vx-er using an ancient trick: release the worm and wait for it to propagate for 2 or 3 days. then, after that seeding time, when the say… not very successful worm has infected only about 30k machines around the world, each infected host sends out 3 “one-packet killers”, as i like to call them, to the world.

even if the packet won't pass one router, that one router, along with thousands of others, will die.

further, the latest vulnerabilities are not just for cisco; there is a “one-packet killer” for juniper as well.

so, say this isn't a 0-day. tier-1 and tier-2 isp's are patched (a great mechanism to pass through, as these won't filter the packet out if it is headed somewhere else), but how many of the rest will be up to date?

let’s give the internet a lot of credit and say.. 60% (yeah right).

that leaves us with 30% of the internet dead, and that's a really bad scenario, as someone i know would say.

make each infected system send the one packet spoofed (potentially; not necessarily using these vulnerabilities) and it's hell. make them send it once every day, and the net will keep dying every day for a while.

as a friend suggested, maybe even fragment the packet, and have it re-assembled at the destination, far-away routers (not sure if that will work).

these are all basic, actually very basic, techniques, and with the source to exploits and worms freely available… we keep seeing network equipment vulnerabilities coming out, and it is a lot “cooler” to bring down an isp with one packet than with a flood of traffic.

i am sure the guys at cisco gave this some thought, but i don't believe it is getting enough attention generally, and especially not from av-ers. it should.

this may seem like i am hyping a situation which is well-known. still, well-known or not, secret or not, it's time we prepared better on a broader scale.


—–>>> end quoted text.

i would really like to hear some thoughts from the community on threats such as the one described above. let us not get into an argument about 0-days, and instead consider how many routers are actually patched in the first… day… week… month? after a vulnerability is released.

also, let us consider the ever-decreasing vulnerability-to-exploit development time.

i don't want the above to sound like fud. my point is not to yell “death of the internet” but rather to get some people moving on what i believe to be a threat; considering it on a broader scale is long overdue.

the cat is out of the bag, and as much as i avoided using “potentially” and “possibly” above to make my point… this is just one possible scenario, and i believe we need to start preparing to better defend the internet as an international infrastructure, rather than locally at the isp level.

gadi evron,


Secure by default

It’s not often that I buy stuff off the cuff. My buying habits are relatively conservative, and I usually do a lot of research on equipment before I buy it. This Friday was an exception to the rule – when I saw the WRT54GC in Fry’s for $40, I just couldn’t miss out. The device is very slender, very nearly pocket-sized, and has a built-in antenna with a jack for an external one and 5 ethernet ports (1 external).

Wireless technology has been in use for nearly a decade now, and securing a wireless network today is relatively easy. Yet as I plug this baby into the socket and hit refresh on the laptop, I see a new network: SSID linksys, channel 6, no encryption. Great. A few tweaks later the device no longer broadcasts its SSID (no, it's not linksys anymore), and will only let you connect if you speak WPA2 to it. And 'admin' was a lame administrator password anyway.

Here’s a question for you: How many people actually go through the extra few clicks to secure their wireless device? If this device sold only 1000 units, I bet there are now 800 new open wireless networks.

Let’s consider the following imaginary scenario, involving Joe, your average computer user:

  1. Joe buys his new device and connects it to his cable modem, like the manual says
  2. Joe then looks for a wireless network with his laptop. There it is, SSID linksys, no encryption
  3. Joe connects to the unencrypted network and tries to browse the web
  4. Joe's web connection is hijacked to a local web server on the device, which asks him for a 6-digit code printed on a sticker on the device.

Several interesting things could happen now: maybe Joe can surf the net immediately, while the device sets up a MAC filter for his current MAC address. Not very secure, but better than nothing. Or Joe might have to choose a WPA key, and a small signed Java applet would set up his computer with the new key.
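
The factory side of such a wizard is trivial to sketch. Here's what generating the per-device secrets might look like; the code lengths and character set are my own assumptions for illustration, not anything Linksys actually does:

```python
import secrets
import string

def sticker_code(length=6):
    """Generate the numeric code printed on the device sticker."""
    return "".join(secrets.choice(string.digits) for _ in range(length))

def wpa_passphrase(length=16):
    """Generate a random WPA passphrase (the spec allows 8-63
    printable ASCII characters; alphanumerics keep it typo-friendly)."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The point being: a unique sticker code and a unique passphrase per unit cost the manufacturer almost nothing, and kill the "SSID linksys, no encryption" default dead.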

Now I'm not Joe, so maybe my perspective is all skewed. Is it really too much to ask of a user to go through a linear, consistent process before his network is set up, ensuring he is running an encrypted, or at least MAC-filtered, network? Is it that much of an annoyance?

Is it more expensive to manufacture? The device already carries an individualized sticker with its MAC address; adding another 6 digits to it is hardly a hassle, and the device already runs an embedded web server. Yes, it means some more code.

Disclaimer 1: I know this is still insecure, because Joe still uses an unencrypted wireless medium to transmit the code. That can be solved with an SSL web server, but even unencrypted, the window of vulnerability is greatly reduced.

Disclaimer 2: The WRT54GC came with a CD, which I never bothered to take out of its sleeve. I could see no reason to run software on my PC when I could just as well configure the device over the web. Perhaps Joe’s magic one-click access point securifier exists on that CD, and I just didn’t bother to check.

Originally posted in my blog


Nisco – Connecting people

Nisco was born of two market leaders joining forces. Nokia, the world renowned cellular phone developer and Cisco, the company whose products serve as the Internet’s infrastructure. The aim of Nisco was to connect people in a very similar way to the way computers and other devices have connected into one huge network.

Tele-people device – a device that connects a person (much as a phone does) to a network of other people – made communication between two people instantaneous and seamless. All you needed to do in order to reach someone was to think about them. From that moment, any thought that crossed his or her mind would cross yours, and vice versa. This brought the community closer together, spawned a whole new type of entertainment, created new relationships, and changed the meaning of meeting people.

However, as with any other new technology, trouble didn't lag far behind. The technology came out roughly seven years ago to this date. When it came out, there was a lot of debate over whether or not it was dangerous to link people's minds directly to technology. In its first year, the technology was thoroughly tested, and no problems were found. Once the FCC and FDA had granted their approval, people started implanting the device, which in turn generated additional word of mouth – resulting in more people getting the implants.

It took nearly three years before someone was able to use the technology to harm consumers. At first, the perpetrator tried to warn people – but he was quickly silenced by Nisco's attorneys and investigated by the FBI on the count of endangering national security. Unfortunately for Nisco, the information on how the harm could be done quickly spread throughout the network, making anyone with a bit of intelligence capable of modifying his implant to do more than just connect with other people.

What happened next was totally unexpected. The first worm-like modification to the implant was released into the network, infecting millions of people and placing their devices in a constant state of broadcast. This sent their thoughts, feelings, visions, and basically anything that crossed their minds into the public domain. As the worm continued to modify people, more and more broadcasts began to clutter the network, leaving infected people unable to function, as they were constantly and simultaneously receiving feeds from thousands of sources.

Matters quickly deteriorated as the first cases of total mental breakdown were reported. A quarter of a million people were no longer functioning; unable to work, eat, or sleep, they simply withered and died. Meanwhile, the number of people infected slowly but surely climbed into the staggering billions, and Nisco stood by, unable to believe that something like this could be happening to them. The government, of course, was quick to react with an investigation, but it was already too late for the people who had gotten themselves implanted with the Nisco technology. Since there was no way of disconnecting them from the network without causing them harm, there was no sure way of protecting them from the spreading worm.

As millions died, the court system made hacking (either software or hardware) as illegal as murder, eventually making the death sentence mandatory for such crimes. Unfortunately for the person who released the worm, and for the rest of the world, he got infected as well, squashing any prospect that a solution might be found.

The number of new people connecting to the network dropped to zero. People already on the network continued to die from the effects of the worm, and many chose the path of suicide by having their implants removed. As the number of connected people diminished, so did the number of infections, and eventually the worm was unable to find new candidates for infection and disappeared.

The world remained devastated. Technologies for integrating people with machines were rendered illegal, and not even rebellious countries like China allowed their researchers to perform academic research in the field. Nisco was taken to court, where it lost and had to pay compensation to the millions of people left with their loved ones mentally handicapped, or left all alone as their spouses, children, brothers or sisters died as a result of this complete mental breakdown. These compensations brought Nisco to its knees, and the company filed for bankruptcy – the only thing it could do now that consumers no longer wanted to buy anything related to, or manufactured by, the company once considered the giant of the industry, Nisco.

So came to an end the merger between the two market leaders, Cisco and Nokia. People still remember the famous press release Cisco and Nokia issued when they merged – people's lives will change forever – how right they were, and how wrong we were to disregard their statement as just a marketing scheme.


Bad week for Cisco

As if the Lynn fiasco wasn’t enough – now the corporate web site seems to have been breached.

Go to http://www.cisco.com/cgi-bin/login and hit ‘ESC’ when prompted for a username/password. You’ll see the following message:


  • Cisco has determined that Cisco.com password protection has been compromised.
  • As a precautionary measure, Cisco has reset your password. To receive your new password, send a blank e-mail, from the account which you entered upon registration, to cco-locksmith@cisco.com. Account details with a new random password will be e-mailed to you.

Uh-oh. That can't be good.