Most web sites today use a database in one form or another to generate their content. Some use the “offline” database back-ended approach, where pages are regenerated every so often but the site itself is served as static HTML pages. Others use the “online” database back-ended approach, where pages are generated on the fly whenever a user requests them.
It is considered harder to hack an “offline” database back-ended web site, as sending it malformed data gives you no direct way to influence the content the site displays. However, as most webmasters will tell you, the “offline” approach is harder to maintain, slower to adapt to changes in content, and requires more thought about what is placed online, since content can take anywhere from minutes to hours to propagate into the static pages. That is why most of today’s web sites use the “online” approach.
This comes at a price; setting aside the hardware and software aspects, the price is security. Because the web site is built from user-provided data, a malicious user has the opportunity to manipulate the results returned by the server.
How common is it to see a web site get defaced via an IIS/Apache vulnerability? Not very, and when it does happen it is usually due to a newly discovered vulnerability in those products. How common is it to see a web site get defaced via a Windows/Linux vulnerability? About as common as seeing an IIS/Apache site defaced because it runs an old version of the software.
What is more common? Web sites that get defaced due to improper use of user-provided data. These vulnerabilities usually fall into the following categories (a sketch of the vulnerable pattern follows the list):
- Cross Site Scripting
- SQL Injection
- Code Execution
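All three categories share the same root cause: user input is pasted into something the server later interprets, whether that is HTML, SQL, or code. The article does not include code, so the following is a minimal, hypothetical sketch of the SQL injection case, using an invented `articles` table and an in-memory SQLite database as a stand-in for the real back end:

```python
import sqlite3

# In-memory demo database standing in for the site's back end (invented schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, title TEXT, draft INTEGER)")
conn.execute("INSERT INTO articles VALUES (1, 'Public story', 0)")
conn.execute("INSERT INTO articles VALUES (2, 'Unpublished draft', 1)")

def get_article(article_id):
    # VULNERABLE: the user-supplied value is concatenated straight into the
    # SQL text, so the user controls the query itself, not just a value in it.
    query = "SELECT title FROM articles WHERE id = " + article_id + " AND draft = 0"
    return conn.execute(query).fetchall()

print(get_article("1"))            # intended use: [('Public story',)]
print(get_article("2 OR 1=1 --"))  # injection: the draft row comes back too
```

The second call shows the whole problem in one line: the “--” comments out the rest of the query, and the attacker’s `OR 1=1` rewrites the condition the developer thought was fixed.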
Would it be difficult to detect these vulnerabilities? No. Would it be difficult to avoid having them in the first place? No.
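To illustrate how little effort avoidance takes, here is a hedged sketch of the standard fix, parameterized queries, against the same invented table as above; the exact API varies by database driver, but the idea does not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, title TEXT, draft INTEGER)")
conn.execute("INSERT INTO articles VALUES (1, 'Public story', 0)")
conn.execute("INSERT INTO articles VALUES (2, 'Unpublished draft', 1)")

def get_article(article_id):
    # The user's value travels as a bound parameter, never as SQL text,
    # so "2 OR 1=1 --" is compared as a literal and matches nothing.
    query = "SELECT title FROM articles WHERE id = ? AND draft = 0"
    return conn.execute(query, (article_id,)).fetchall()

print(get_article("1"))            # [('Public story',)]
print(get_article("2 OR 1=1 --"))  # [] : the injection attempt fails
```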
So why are these vulnerabilities still present in high-profile web sites? I could name a few such sites, major news agencies and broadcasting networks among them, but doing so would not help the end user or the site’s owner. Everyone knows there are numerous solutions for preventing, detecting, and stopping these vulnerabilities, so why isn’t it happening?
Are web site vulnerabilities, such as those caused by improper use of user-provided data, considered low risk? I don’t think they can be regarded as low risk.
Take this example: in a few minutes of wandering through the site of one of these news agencies, which uses the “unbreakable” Oracle database, I was able to discover the complete structure of their articles table/schema, and to read any entry in the table by querying columns such as author, date, priority and keywords, columns that would otherwise be impossible to use through their normal web access interface.
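The article does not show the actual requests, so the following is only a rough illustration of the technique: a UNION-based injection that reads schema information out of the database’s own data dictionary. Against Oracle that dictionary is exposed through views such as ALL_TAB_COLUMNS; the runnable sketch below uses SQLite’s sqlite_master as a stand-in, and the table layout is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, author TEXT, pub_date TEXT, "
             "priority INTEGER, keywords TEXT, body TEXT)")
conn.execute("INSERT INTO articles VALUES (1, 'staff', '2005-01-01', 1, 'news', '...')")

def search(keyword):
    # Same vulnerable concatenation pattern as before, this time on a string field.
    query = "SELECT body FROM articles WHERE keywords = '" + keyword + "'"
    return conn.execute(query).fetchall()

# Instead of a keyword, the attacker supplies a UNION that reads the schema.
# Against Oracle the appended query would target ALL_TAB_COLUMNS instead.
payload = "' UNION SELECT sql FROM sqlite_master --"
print(search(payload))   # returns the CREATE TABLE statement: the full column list
```

Once the column names are known, crafting further injected queries that filter on author, date, priority or keywords is straightforward.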
The next logical step for a hacker discovering this would be to insert or modify an article in the database and plant some form of malicious content in it: an ad-ware-installing page or a fraudulent “donation” button, to name a few. Does this sound fictitious? No; it has been done, and there is nothing stopping anyone from doing it again.
As history has taught us, these kinds of vulnerabilities will go unnoticed until someone writes a worm that exploits them to hop from one server to another, a worm that, like CodeRed, creates enough havoc to make the security community understand the importance of addressing such vulnerabilities.
Future NOTE: Even if I say that such a worm will be written, it doesn’t mean I wrote it 🙂