KDE JS bug poses a real threat

(Updated: January 21, 2006 @ 21:19, 21:23)

A security vulnerability in KDE’s JavaScript interpreter allows remote attackers to execute arbitrary code on the machine of a user visiting a malicious web page, by overflowing a buffer in KJS’s (KDE JavaScript) UTF-8 decoder.

The vulnerability can be triggered by any program that utilizes KJS; that is, it is not limited to Konqueror.

More information to come as technical details start to surface.

The patch found at ftp://ftp.kde.org/pub/kde/security_patches/post-3.2.3-kdelibs-kjs.diff offers some insight into the problem: the vulnerable JavaScript functions apparently are encodeURI and decodeURI.
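
The general bug class is easy to illustrate. Below is a simplified, hypothetical C sketch (not the actual KJS code; the function name is mine): a UTF-8 decoder whose output buffer must account for the fact that code points above U+FFFF expand to two UTF-16 code units. A decoder that budgets only one output unit per input sequence can be made to write past the end of its buffer.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of the bug class only -- NOT the actual KJS code.
 * Decodes UTF-8 into UTF-16.  A 4-byte UTF-8 sequence yields a
 * surrogate PAIR (two output units); a decoder that reserves one unit
 * per sequence overflows.  The "o + 2" check below is the essential fix. */
static size_t decode_utf8_to_utf16(const uint8_t *in, size_t in_len,
                                   uint16_t *out, size_t out_cap)
{
    size_t o = 0;
    for (size_t i = 0; i < in_len; ) {
        if (in[i] < 0x80) {                        /* ASCII: one unit */
            if (o + 1 > out_cap) return (size_t)-1;
            out[o++] = in[i++];
        } else if ((in[i] & 0xF8) == 0xF0 && i + 3 < in_len) {
            /* 4-byte sequence: code point above U+FFFF -> TWO units */
            uint32_t cp = ((uint32_t)(in[i]   & 0x07) << 18) |
                          ((uint32_t)(in[i+1] & 0x3F) << 12) |
                          ((uint32_t)(in[i+2] & 0x3F) << 6)  |
                           (uint32_t)(in[i+3] & 0x3F);
            if (o + 2 > out_cap) return (size_t)-1; /* reserve BOTH slots */
            out[o++] = (uint16_t)(0xD800 | ((cp - 0x10000) >> 10));
            out[o++] = (uint16_t)(0xDC00 | ((cp - 0x10000) & 0x3FF));
            i += 4;
        } else {
            return (size_t)-1;  /* 2/3-byte forms omitted from this sketch */
        }
    }
    return o;
}
```

The point is only the sizing logic: whether the real KJS flaw was exactly this shape will be clear once the technical details surface.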

Update 2: The CVE-2006-0019 entry has not been released yet, but keep watching.


WINE vulnerable to WMF vulnerability

The vulnerability recently discovered in Windows, and patched just several days ago, has been found to be exploitable on WINE-based systems as well; this also includes the CrossOver Office package. The WMF format can carry a SETABORTPROC Escape record, which lets a metafile register arbitrary data as the “abort procedure” that is later called as code.

According to H D Moore, wine-20050930/dlls/gdi/driver.c includes:

/***********************************************************************
 *           Escape    [GDI32.@]
 */
INT WINAPI Escape( HDC hdc, INT escape, INT in_count, LPCSTR in_data,
                   LPVOID out_data )
{
    INT ret;
    POINT *pt;

    switch (escape)
    {
    case ABORTDOC:
        return AbortDoc( hdc );
    [ snip ]
    case SETABORTPROC:
        return SetAbortProc( hdc, (ABORTPROC)in_data );
    [ snip ]

And wine-20050930/dlls/gdi/printdrv.c includes:

/***********************************************************************
 *           call_abort_proc16
 */
static BOOL CALLBACK call_abort_proc16( HDC hdc, INT code )
{
    ABORTPROC16 proc16;
    DC *dc = DC_GetDCPtr( hdc );

    if (!dc) return FALSE;
    proc16 = dc->pAbortProc16;
    GDI_ReleaseObj( hdc );
    if (proc16)
    {
        WORD args[2];
        DWORD ret;

        args[1] = HDC_16(hdc);
        args[0] = code;
        WOWCallback16Ex( (DWORD)proc16, WCB16_PASCAL, sizeof(args), args,
                         &ret );
        return LOWORD(ret);
    }
    return TRUE;
}

/***********************************************************************
 *           SetAbortProc    [GDI32.@]
 */
INT WINAPI SetAbortProc( HDC hdc, ABORTPROC abrtprc )
{
    DC *dc = DC_GetDCPtr( hdc );

    if (!dc) return FALSE;
    dc->pAbortProc = abrtprc;
    GDI_ReleaseObj( hdc );
    return TRUE;
}

Finally wine-20050930/dlls/gdi/printdrv.c includes:

/***********************************************************************
 *           EndPage    [GDI32.@]
 */
INT WINAPI EndPage( HDC hdc )
{
    ABORTPROC abort_proc;
    INT ret = 0;
    DC *dc = DC_GetDCPtr( hdc );
    if (!dc) return SP_ERROR;

    if (dc->funcs->pEndPage) ret = dc->funcs->pEndPage( dc->physDev );
    abort_proc = dc->pAbortProc;
    GDI_ReleaseObj( hdc );
    if (abort_proc && !abort_proc( hdc, 0 ))
    {
        EndDoc( hdc );
        ret = 0;
    }
    return ret;
}


Goodbye 2005, welcome 2006 (year statistics)

As 2005 comes to an end, we can look back and try to use it to guess what we will see in 2006 … but let’s first summarize what we had:
1) Over 1500 new vulnerability groups (we call them ‘groups’ since we don’t split an SQL injection and its cross-site scripting counterpart into two advisories), which is up by roughly 300 compared to last year.

2) A surge in exploit-driven advisories (i.e. advisories with little technical detail, the majority being a PoC or an actual exploit), from 150 to 295.

3) The number of Microsoft-related advisories (not just MSXX-XXX bulletins) has jumped from 66 to 133, slightly more than double.

4) IIS related vulnerabilities have declined from 13 to 8.

5) A decrease in the number of Apache related advisories from 23 to 11.

6) The busiest month was May, with over 170 new articles (roughly 6 articles per day, including the weekends).

So what will 2006 bring? My estimate is that we’ll see MORE vulnerabilities. Why? Simply because as more software comes into the consumer market, it is more likely that people will find vulnerabilities in it.

As more Web-based products emerge, SQL injection, directory traversal, cross-site scripting and the like will make up the majority of vulnerabilities, while buffer overflows and format strings become the minority.

The number of “Phishing” attacks will greatly increase, and the attacks will become a lot more clever as the thieves get smarter and the methods become simpler. “Phishing” will also start utilizing more custom-made spyware and exploits, to try to make victims believe that they are not being “Phished”.


Nessus 3.0 and the trend

With Nessus 3.0′s promising enhancements and updates, one would normally rush to upgrade. Unfortunately, it’s not provided as it used to be. Only specific Linux distributions and FreeBSD 5/6 have been chosen for the initial binary releases; everyone else is left with the CVS repository. What about those using Solaris or Windows? Well, they’ll have to wait their turn. Tenable Network Security seems to be following what looks like a scary trend: Open Source, Tested, Used, Trusted, Mature, Limited Support/Availability. With Snort, the community rules are horrible, the registered members’ rules are mediocre, and you need to pay for the VRT certified rules. Red Hat did the same thing, but instead of completely telling the open source community to shove it, they released Fedora. Well, how else are they going to use the open source community as a test bed? It seems the days of free speech are coming to an end while the days of free beer are gaining ground. Money makes the world go ’round, but don’t tease us geeks with free stuff. I hope my view and paranoia are entirely wrong and this is just a figment of my imagination; otherwise the open source community has a one-way ticket to the history books.


Using Architecture to Avoid Dumb Mistakes

The security report against SunnComm’s MediaMax that I documented in my previous blog post seems amazingly simple: world-writable file permissions. Replace a file and wait for it to be run by an administrative user, and you have total control. Attacks don’t get much more basic, or much more obvious, than that. SunnComm’s example, however, is one of many illustrating that access control is poorly understood by developers. Major software firms (including, on occasion, Microsoft itself) have misconfigured access control so badly as to make their products practically give away elevated privileges. Something about this picture has got to change.
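
On a Unix-like system, the equivalent mistake is a world-writable file, and checking for one takes only a handful of lines. A minimal sketch (the function name is my own):

```c
#include <assert.h>
#include <stdio.h>
#include <sys/stat.h>

/* Return 1 if `path` is writable by everyone (roughly the Unix
 * analogue of a Windows Everyone:F ACL), 0 if not, -1 on error. */
static int is_world_writable(const char *path)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return -1;
    return (st.st_mode & S_IWOTH) ? 1 : 0;
}
```

A check this cheap is exactly the kind of thing an installer, or the system itself, could run before trusting a file.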

SunnComm’s mistake may seem obvious, and it certainly does demonstrate a lack of understanding of multi-user security, but their error is strikingly common:

    C:\Program Files\SomeDeveloper\SomeApp>cacls someapp.exe
    C:\Program Files\SomeDeveloper\SomeApp\SomeApp.exe Everyone:F

I’ve obviously edited this output, but I’ve done so to avoid naming a widely-deployed software application as having a Full Control permission for the Everyone group on all of its files. This blatant disregard for user security may seem pointless, and indeed it is wholly irresponsible. However, it is often justified by concerns such as allowing less-privileged users to install software updates. The problem is, most companies just don’t get that software update deployment is an administrative task for a reason. Namely, that it is simply not possible to trust a user to deploy an update that won’t damage the system.

There are several tasks that applications and system components simply should not trust limited, untrusted users to do. Hence the purpose of the term “untrusted user”. Access control is one of many areas where some actions are such huge, clear-cut mistakes that even attempting them should immediately subject an application to question. Placing ACLs that allow all users to modify objects (essentially unsecuring a secured object) is one of them. I’ve seen many applications try to allow Everyone, world, or equivalent roles modification rights to an object, and I’ve never seen one of them do it securely. Not even Microsoft could get that right, and had to release MS02-064 to advise users to plug a hole in Windows 2000 that was caused by an access control error of its own.

In spite of common knowledge that some security-related actions are potentially dangerous and almost never desired, it is still shockingly easy for an application developer to expose himself/herself to an attack. Windows Vista and Longhorn Server contain API “improvements”, presumably modeled after the simplicity of the access control APIs for Unix-like systems, that will make stripping your system’s security to the bone even easier.

So where are the countermeasures? Though there’s no way to make a system idiot-proof, shouldn’t systems developers work to make being an idiot a little less simple? OpenBSD is well known for this “secure by default” approach, shipping the user a hardened system. Forcing users to open holes, rather than close them, also encourages understanding of the risks involved.

A similar approach should be taken to application-level errors. It’s entirely sensible for an operating system to block obviously dangerous activity. FormatGuard for Immunix is one example of this. That project was amazingly successful, blocking almost 100% of format string attacks with a near-zero false-positive rate. Why? Because it blocked the exploitation of an obvious programming error.

This preemptive defense model has a lot of promise, and could just as easily be applied to other security cases. Suspicious behavior like setting world-writable ACLs on critical objects should raise immediate alarms, and systems developers could do quite a bit to facilitate this. Imagine being a developer of an application like MediaMax, when systems begin to trigger on the insecure ACL behavior and display warnings such as:

WARNING: This application is attempting a potentially-dangerous action. Allowing it to continue may expose your system to compromise by malicious users, viruses, or other threats. Are you sure you want to allow this application to continue?

Now, there will be folks who answer ‘Yes’, but the concern such a warning prompts in everyone else would more than likely force a redesign on the part of SunnComm and vendors like them. In the ideal case, such a warning would expose the vulnerability before the software ever left the lab, rather than months later, when 20 million CDs with insecure code on them have been sold worldwide.

For something like this to be possible, the old notion of application and system as distinct components will have to be abandoned, in favor of a development concept that recognizes the reality that application and system code are dependent upon one another for functionality, including security. Applications should be architected to avoid potential system holes, and systems should be designed with the goal of making it more difficult to create holes in applications.

Unfortunately, both simplicity and complexity, taken to excess, contribute to a lack of security at other levels. Sometimes a stop-gap measure is necessary to prevent slightly clueless folk from becoming a major risk.


Nematodes Cause Economic Losses

A recently published research paper claims that:
“Nematode infestation on a potato crop results in tuber yield decline and/or reduction in quality, thereby contributing economic loss to the industry”.

How is this related to security?

I can just imagine the same research results, but for a different type of Nematode: Dave Aitel’s Nematodes.

Dave Aitel’s Nematodes are designed to be beneficial: they exploit a vulnerability in a product, but once it has been successfully exploited, instead of causing harm they do good, i.e. install a piece of software, initiate an update sequence, etc., which in turn closes the security vulnerability.

This approach is both arrogant and plain silly. It would be unthinkable to release a weakened smallpox virus, or even the flu, among the population just to inoculate them, as you can never control the virus once it has been released. The same goes for such a beneficial Nematode: once it is out of your lab, it is out of your hands.

Dave Aitel is not the first person to tackle the idea of beneficial Nematodes; HP has researched the idea as well and cooked up a solution called Active Countermeasures, which is basically the same thing: release a piece of software that will fix any computer not immune to a certain type of exploit/vulnerability.

It would be sad to see this approach adopted by the security community, as it would mean the security community has reached the conclusion that the only way to solve things is by brute force.


Thinking Different II

You probably know the current situation in one way or another:
You see the computer of a friend (or just someone you know) that is not up to date (usually it’s so out of date that you can tell just from looking at the interface), and when you give them a “tip” to update their Windows XP, they answer, “I saw the new interface in Windows XP SP2, and I didn’t like it one bit”.

Let’s keep this example on Windows for now, because that’s what the majority of users run these days :( .

Then when you attempt to say something like “but Microsoft fixed a lot of security vulnerabilities”, you either get a response such as “nothing will happen to me” or you lose the conversation entirely, and that’s what I’m going to talk about in this blog entry.

I do not like the idea that an OS is bound to its GUI, because the vendor teaches common users that the GUI is the only thing that really matters. That’s true, by the way, for many other OSes and not just for Microsoft (Mac anyone? Maybe you still use BeOS, OS/2, or even KDE/GNOME-based Linux?).

The reason for that is simple. In WYSIWYG environments, you do not really know what you are getting… well, you never truly know what you get, but with a GUI, people expect GUI updates. They do not accept that there can be other types of fixes, and they do not understand the importance of these updates.

The scariest part is that most of them do not think they are vulnerable, even though they do keep an antivirus (usually not 100% up to date), they understand that there is spyware out there that can hurt them, and so on. But still, the answer is “If I cannot see what was changed, why should I update?” from the more naive, or “but nothing will happen to me, I’m behind a firewall/antivirus/router/other”.

In order to convince these people, I think we should use exploits that present the user with a GUI notification that they are vulnerable, like an “xmessage” with the current user’s privileges (or using xhost to gain the ability to run X clients) on X-based OSes, or just a popup dialog that cannot be closed, or that appears at “random” :) .
Or just crash programs and leave a message in a text file on the desktop saying “upgrade me” or something similar.

April Fools’ humor aside, where it might be funny to see users suffer, they will also see that they are vulnerable, and be motivated to find a way to fix the problem.

Now all we have to do is convince vendors to add this type of feature, instead of black hats breaking into users’ computers and doing whatever they want.


The most secure code in the world

I’m going to say some things that might be the last things I’ll ever get to say (you’ll see why in the next paragraph :) ). Open source is only as secure as its developers made it. It is not more secure than closed source, and it’s not better than closed code. It’s merely code!

Much of the open source community (hey, I also develop open source tools and programs) tries to sell us the idea that Open = Secure. When Internet Explorer had a lot of security problems one after the other, Firefox developers came and told us that in open source it would never have happened: there are 10000000 (I must have missed a few 0s :) ) eyes on the code, so it cannot be less secure, only more secure…

Umm… OK (I’m starting to look for a place to hide right about now :P )

The fact is that for better, more secure code, the first thing we have to do is educate people to think and be paranoid. Yeah! You cannot trust any user input, or any result of a system function, and you must validate them over and over again.

You must check the input and make sure it does not overflow the amount of memory you are willing to give your buffers.
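
As a concrete illustration (a helper of my own, not from any particular codebase): reject input that does not fit, instead of copying it blindly.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy untrusted input into a fixed-size buffer, refusing anything
 * that does not fit (including the terminating NUL) rather than
 * silently overflowing.  Note the >= : using > here would be the
 * classic off-by-one. */
static int copy_checked(char *dst, size_t dst_size, const char *src)
{
    size_t len = strlen(src);
    if (len >= dst_size)
        return -1;              /* would overflow: reject the input */
    memcpy(dst, src, len + 1);  /* +1 copies the terminating NUL */
    return 0;
}
```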

You must sanitize (filter out) any character you do not wish to see, and escape anything that you must accept but that may affect your program.
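
A whitelist filter is the safer way to do the first part; a minimal sketch (the allow-list here is an arbitrary example of mine):

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Strip, in place, every character that is not on an explicit
 * allow-list.  Filtering by what is permitted is safer than trying to
 * enumerate every character that could be dangerous.  The allow-list
 * here (alphanumerics, '_' and '-') is just an example. */
static void sanitize(char *s)
{
    char *out = s;
    for (; *s; s++)
        if (isalnum((unsigned char)*s) || *s == '_' || *s == '-')
            *out++ = *s;
    *out = '\0';
}
```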

But wait, that still does not give us secure programs and code; it only starts to make us understand the risks better. For example, an off-by-one can happen to anyone… especially after alcohol is involved :)

And what about the user controlling our function jumps (you know, changing the hard-coded machine code of the program), or injecting system functions of his own liking… We can sanitize the input we get back from a function, but we cannot control what happens inside the function itself…

Or even bugs that we didn’t think we had, until someone found and exploited them. Or as Knuth once said: “Beware of bugs in the above code; I have only proved it correct, not tried it.”

But I just realized that’s not the thing I needed to start with… I should have said that we are not educated to think in more secure manners. In high schools and universities we are taught to assume that user input is somewhat correct, and that all we need to do is focus on the functionality of the program.
We are also taught that there is only one “right” way to do things, and that’s the professor’s way :)

So before everyone starts jumping in and accusing something of being more or less secure, let’s start teaching people to do things in a more secure way… So how do we start?


Linux Passes the WGA Test

According to bit-tech.net, Linux with Wine passes Microsoft’s WGA without a hitch. Does this mean that Microsoft has a soft spot for Linux? I can’t believe that :)