BadBIOS

In recent days there has been much interest in the “BadBIOS” infection being reported by Dragos Ruiu.  (The best overview I’ve seen has been from Naked Security.)  But to someone who has lived through several viral myths and legends, parts of it sound strange.

  • It is said to infect the low-level system firmware of your computer, so it can’t be removed or disabled simply by rebooting.

These things, of course, have been around for a while, so that isn’t necessarily wrong.  However, BIOS infectors never became a major vector.

  • It is said to include components that work at the operating system level, so it affects the high-level operation of your computer, too.
  • It is said to be multi-platform, affecting at least Windows, OS X, and OpenBSD systems.

This sounds a bit odd, but we’ve had cross-platform stuff before.  It never became a major problem either.

  • It is said to prevent infected systems being booted from CD drives.

Possible: we’ve seen similar effects over the years, both intentional and unintentional.

  • It is said to spread itself to new victim computers using Software Defined Radio (SDR) program code, even with all wireless hardware removed.

OK, it’s dangerous to go out on a limb when you haven’t seen details and say something can’t happen, but I’m calling bullshit on this one.  Not that I think someone couldn’t create a communications channel without the hardware: anything the hardware guys can do the software guys can emulate, and vice versa.  However, I can’t see getting an infection channel this way, at least without some kind of minimal infection first.  (It is, of course, possible that the person doing the analysis may have made a mistake in what they observed, or in the reporting of it.)

  • It is said to spread itself to new victim computers using the speakers on an infected device to talk to the microphone on an uninfected one.

As above.

  • It is said to infect simply by plugging in a USB key, with no other action required.

We’ve seen that before.

  • It is said to infect the firmware on USB sticks.

Well, a friend has built a device to blow off dangerous firmware on USB sticks, so I don’t see that this would present any problem.

  • It is said to render USB sticks unusable if they aren’t ejected cleanly; these sticks work properly again if inserted into an infected computer.

Reminds me somewhat of the old “fast infectors” of the early 90s.  They had unintended effects that actually made the infections easy to remove.

  • It is said to use TTF (font) files, apparently in large numbers, as a vector when spreading.

I don’t know the details of the internals of TTF files, but they should certainly have enough space.

  • It is said to block access to Russian websites that deal with reflashing software.

Possible, and irrelevant unless we find out what is actually true.

  • It is said to render any hardware used in researching the threat useless for further testing.

Well, anything that gets reflashed is likely to become unreliable and untrustworthy …

  • It is said to have first been seen more than three years ago on a Macbook.

And it’s taken three years to get these details?  Or get a sample to competent researchers?  Or ask for help?  This I find most unbelievable.

In sum, then, I think this might be possible, but I strongly suspect that it is either a promotion for PacSec, or a promo for some presentation on social engineering.

 


Someone always checks up on you

I would like to start by thanking Smit Bharatkumar Shah from http://about.me/smitbshah for bringing to our attention that our site had a potential security vulnerability that could be used by malicious attackers to perform phishing and/or clickjacking attacks. With his help we were able to prevent this attack from occurring. No customers have been affected by this issue.

Our ScanMyServer.com service has been providing security scan reports and vulnerability information for sites from all over the world, but we neglected to do one small thing: scan our own web site with the same service. If we had, ScanMyServer.com would have shown us the potential issue. How embarrassing is that?!

We have checked our logs for any sign that the vulnerability has been exploited or that our customers have been targeted, and nothing turned up. Due to the nature of this issue, any attack would have been recorded in the logs.

The solution for the above-mentioned vulnerability is a simple two-step fix:
1) Run:
a2enmod headers

2) Add to /etc/apache2/conf.d/security the following line:
Header always append X-Frame-Options SAMEORIGIN
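
Once the module is enabled, the configuration line added, and Apache reloaded, the fix can be confirmed with a quick check (a sketch; it assumes curl is installed, and the URL should be replaced with your own site):
curl -sI https://www.example.com/ | grep -i x-frame-options
The output should contain the line “X-Frame-Options: SAMEORIGIN”.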

If any of you finds any other issues in our site, please contact us at support@beyondsecurity.com and we will be happy to credit you with the find. Thanks for making our service better!


It’s What’s on the Inside that Counts

The last time I checked, the majority of networking and security professionals were still human.

We all know that the problem with humans is that they sometimes exhibit certain behaviors that can lead to trouble – if that wasn’t the case we’d probably all be out of a job! One such behavior is obsession.

Obsession can be defined as an idea or thought that continually preoccupies or intrudes on a person’s mind. I’ve worked with a number of clients who have had an obsession that may, as bizarrely as it seems, have had a negative impact on their information security program.

The obsession I speak of is the thought of someone “breaking in” to their network from the outside.

You’re probably thinking to yourself, how on earth can being obsessed with protecting your network from external threats have a negative impact on your security? If anything it’s probably the only reason you’d want a penetration test in the first place! I’ll admit, you’re correct about that, but allow me to explain.

Every organization has a finite security budget. How they use that budget is up to them, and this is where the aforementioned obsession can play its part. If I’m a network administrator with a limited security budget and all I think about is keeping people out of my network, my shopping list will likely consist of edge firewalls, web-application firewalls, IDS/IPS and a sprinkling of penetration testing.

If I’m a pen tester working on behalf of that network administrator I’ll scan the network and see a limited number of open ports thanks to the firewall, trigger the IPS, have my SQL injection attempts dropped by the WAF and generally won’t be able to get very far. Then my time will be up, I’ll write a nice report about how secure the network is and move on. Six or twelve months later, I’ll do exactly the same test, find exactly the same things and move on again. This is the problem. It might not sound like a problem, but trust me, it is. Once we’ve gotten to this point, we’ve lost sight of the reason for doing the pen test in the first place.

The test is designed to be a simulation of an attack conducted by a malicious hacker with eyes only for the client. If a hacker is unable to break into the network from the outside, chances are they won’t wait around for a few months and try exactly the same approach all over again. Malicious hackers are some of the most creative people on the planet. If we really want to do as they do, we need to give our testing a creativity injection. It’s our responsibility as security professionals to do this, and encourage our clients to let us do it.

Here’s the thing, because both pen testers and clients have obsessed over how hackers breaking into stuff for so long, we’ve actually gotten a lot better at stopping them from doing so. That’s not to say that there will never be a stray firewall rule that gives away a little too much skin, or a hastily written piece of code that doesn’t validate input properly, but generally speaking “breaking in” is no longer the path of least resistance at many organizations – and malicious hackers know it. Instead “breaking out” of a network is the new route of choice.

While everyone has been busy fortifying defenses on the way in to the network, traffic on the way out is seldom subject to such scrutiny – making it a very attractive proposition to an attacker. Of course, the attacker still has to get themselves into position behind the firewall to exploit this – but how? And how can we simulate it in a penetration test?

[Figures omitted: “What the Pen Tester sees” vs. “The Whole Picture”]

On-Site Testing

There is no surer way of getting on the other side of the firewall than heading to your client’s office and plugging directly into their network. This isn’t a new idea by any means, but it’s something that’s regularly overlooked in favor of external or remote testing. The main reason for this, of course, is cost. Putting up a tester for a few nights in a hotel and paying travel expenses can put additional strain on the security budget. However, doing so is a hugely valuable exercise for the client. I’ve tested networks from the outside that have shown little room for enumeration, let alone exploitation. But once I headed on-site and came at those networks from a different angle, the angle no one ever thinks of, I had trouble believing they were the same entity.

To give an example, I recall doing an on-site test for a client who had just passed an external test with flying colors. Originally they had only wanted the external test, which was conducted against a handful of IPs. I managed to convince them that in their case, the internal test would provide additional value. I arrived at the office about an hour and a half early and sat out in the parking lot waiting to go in. I fired up my laptop and noticed a wireless network secured with WEP; the SSID was also the name of the client. You can probably guess what happened next. Four minutes later I had access to the network, and was able to compromise a domain controller via a flaw in some installed backup software. All of this without leaving the car. Eventually, my point of contact arrived and said, “So are you ready to begin, or do you need me to answer some questions first?” The look on his face when I told him that I’d actually already finished was one that I’ll never forget. Just think, had I only performed the external test, I would have been denied that pleasure. Oh, and of course I would never have picked up on the very insecure wireless network, which is kind of important too.

This is just one example of the kind of thing an internal test can uncover that wouldn’t have even been considered during an external test. Why would an attacker spend several hours scanning a network range when they could just park outside and connect straight to the network?

One of my favorite on-site activities is pretending I’m someone with employee level access gone rogue. Get on the client’s standard build machine with regular user privileges and see how far you can get on the network. Can you install software? Can you load a virtual machine? Can you get straight to the internet, rather than being routed through a proxy? If you can, there are a million and one attack opportunities at your fingertips.

The majority of clients I’ve performed this type of test for hugely overestimated their internal security. It’s well documented that the greatest threat comes from the inside, either on purpose or by accident. But of course, everyone is too busy concentrating on the outside to worry about what’s happening right in front of them.

Good – Networks should be just as hard to break out of as they are to break into.

Fortunately, some clients are required to have this type of testing, especially those in government circles. In addition, several IT security auditing standards require a review of internal networks. The depth of these reviews is sometimes questionable though. Auditors aren’t always technical people, and often the review will be conducted against diagrams and documents of how the system is supposed to work, rather than how it actually works. These are certainly useful exercises, but at the end of the day a certificate with a pretty logo hanging from your office wall won’t save you when bad things happen.

Remote Workers

Having a remote workforce can be a wonderful thing. You can save a bunch of money by not having to maintain a giant office and the associated IT infrastructure. The downside of this is that in many organizations, the priority is getting people connected and working, rather than properly enforcing security policy. The fact is that if you allow someone to connect remotely into the heart of your network with a machine that you do not have total control over, your network is about as secure as the internet. You are in effect extending your internal network out past the firewall to the unknown. I’ve seen both sides of the spectrum, from an organization that would only allow people to connect in using routers and machines that it configured and installed, to an organization that provided a link to a VPN client and said “get on with it”.

I worked with one such client who was starting to rely on remote workers more and more, and had recognized that this could introduce a security problem. They arranged for me to visit the homes of a handful of employees and see if I could somehow gain access to the network’s internal resources. The first employee I visited used his own desktop PC to connect to the network. He had been issued a company laptop, but preferred the big screen, keyboard and mouse that were afforded to him by his desktop. The machine had no antivirus software installed, no client firewall running and no disk encryption. This was apparently because all of these things slowed it down too much. Oh, but it did have a peer-to-peer file sharing application installed. No prizes for spotting the security risks here.

In the second home I visited, I was pleased to see the employee using her company issued XP laptop. Unfortunately she was using it on her unsecured wireless network. To demonstrate why this was a problem, I joined my testing laptop to the network, fired up a Metasploit session and hit the IP with my old favorite, the MS08-067 NetAPI32.dll exploit module. Sure enough, I got a shell, and was able to pivot my way into the remote corporate network. It was at this point that I discovered the VPN terminated in a subnet with unrestricted access to the internal server subnet. When I pointed out to the client that there really should be some sort of segregation between these two areas, I was told that there was. “We use VLANs for segregation,” came the response. I’m sure that everyone reading this will know that segregation using VLANs, at least from a security point of view, is about as useful as segregating a lion from a Chihuahua with a piece of rice paper. Ineffective, unreliable, and it will result in an unhappy ending.

Bad – The VPN appliance is located in the core of the network.

Social Engineering

We all know that this particular activity is increasing in popularity amongst our adversaries, so why don’t we do it more often as part of our testing? Well, simply put, a lot of the time this comes down to politics. Social engineering tests are a bit of a touchy subject at some organizations, who fear a legal backlash if they do anything to blatantly demonstrate how their own people are subject to the same flaws as the seven billion others on the planet. I’ve been in scoping meetings where, as soon as the subject of social engineering comes up, I’m stared at harshly and told in no uncertain terms, “Oh, no way, that’s not what we want, don’t do that.” But why not do it? Don’t you think a malicious hacker would? You’re having a pen test, right? Do you think a malicious hacker would hold off on social engineering because they haven’t gotten your permission to try it? Give me a break.

On the other hand, I’ve worked for clients who have recognized the threat of social engineering as one of the greatest to their security, and relished the opportunity to have their employees tested. Frequently, these tests result in a greater than 80% success rate. So how are they done?

Well, they usually start off with the tester registering a domain name which is extremely similar to the client’s. Maybe with one character different, or a different TLD (“.net” instead of “.com” for example).

The tester’s next step would be to set up a website that heavily borrows CSS code from the client’s site. All it needs is a basic form with username and password fields, as well as some server side coding to email the contents of the form to the tester upon submission.

With messages like this one in an online meeting product, it’s no wonder social engineering attacks are so successful.

Finally, the tester will send out an email with some half-baked story about a new system being installed, or special offers for the employee “if you click this link and login”. Sit back and wait for the responses to come in. Follow these basic steps and within a few minutes, you’ve got a username, password and employee level access. Now all you have to do is find a way to use that to break out of the network, which won’t be too difficult, because everyone will be looking the other way.

Conclusion

The best penetration testers out there are those who provide the best value to the client. This doesn’t necessarily mean the cheapest or quickest. Instead it’s those who make the most effective use of their relatively short window of time, and any other limitations they face to do the job right. Never forget what that job is, and why you are doing it. Sometimes we have to put our generic testing methodologies aside and deliver a truly bespoke product. After all, there is nothing more bespoke than a targeted hacking attack, which can come from any direction. Even from the inside.


Risk management and security theatre

Bruce Schneier is often outrageous, these days, but generally worth reading.  In a piece for Forbes in late August, he made the point that, due to fear and the extra trouble caused by TSA regulations, more people were driving rather than flying, and, thus, more people were dying.

“The inconvenience of extra passenger screening and added costs at airports after 9/11 cause many short-haul passengers to drive to their destination instead, and, since airline travel is far safer than car travel, this has led to an increase of 500 U.S. traffic fatalities per year.”

So, by six years after the event, the TSA had killed more US citizens than had the terrorists.  And continues to kill them.

Given the recent NSA revelations, I suppose this will sound like more US-bashing, but I don’t see it that way.  It’s another example of the importance of *real* risk management, taking all factors into account.


“Poor” decisions in management?

I started reading this article just for the social significance.  You’ve probably seen reports of it: it’s been much in the media.

However, I wasn’t very far in before I came across a statement that seems to have a direct implication for all business management, and, in particular, the CISSP:

“The authors gathered evidence … and found that just contemplating a projected financial decision impacted performance on … reasoning tests.”

As soon as I read that, I flashed on the huge stress we place on cost/benefit analysis in the CISSP exam.  And, of course, that extends to all business decisions: everything is based on “the bottom line.”  Which would seem to imply that hugely important corporate and public policy decisions are made on the worst possible basis and in the worst possible situation.

(That *would* explain a lot about modern business, policy, and economics.  And maybe the recent insanity in the US Congress.)

Other results seem to temper that statement, and, unfortunately, seem to support wage inequality and the practice of paying obscene wages to CEOs and directors: “… low-income people asked to ponder an expensive car repair did worse on cognitive-function tests than low-income people asked to consider cheaper repairs or than higher-income people faced with either scenario.”

But it does make you think …


Google’s “Shared Endorsements”

A lot of people are concerned about Google’s new “Shared Endorsements” scheme.

However, one should give credit where credit is due.  This is not like one of Facebook’s functions, where, regardless of what you’ve set or unset in the past, every time they add a new feature it defaults to “wide open.”  If you have been careful with your Google account in the past, you will probably find yourself still protected.  I’m pretty paranoid, but when I checked the Shared Endorsements settings page on my accounts, the “Based upon my activity, Google may show my name and profile photo in shared endorsements that appear in ads” box was unchecked on all of them.  I can only assume that it is because I’ve been circumspect in my settings in the past.


“Identity Theft” of time

I really should know better.

Last night, hoping that, in two hours, Hollywood might provide *some* information on an important topic, even if limited, I watched “Identity Thief,” a movie put out by Universal in 2013, starring Jason Bateman and Melissa McCarthy.

It is important to point out to people that, if someone phones you up and offers you a free service to protect you from identity theft, it is probably not a good idea to give them your name, date of birth, social security/insurance number, credit card and bank account numbers, and basically everything else about you.  This tip is provided in the first thirty seconds of the film.  After that (except for the point that the help law enforcement might be able to give you is limited) it’s all downhill.  The plot is ridiculous (even for a comedy), the characters somewhat uneven, the situations crude, the relationship unlikely, the language profane, and the legalities extremely questionable.

(The best line in the entire movie is: Sandy – “Do you know what a sociopath is?” Diane – “Do they like ribs?”  I know this may not seem funny, but trust me: it gives you a very good idea of how humorous this movie really is.)


The Common Vulnerability Scoring System

Introduction

This article presents the Common Vulnerability Scoring System (CVSS) Version 2.0, an open framework for scoring IT vulnerabilities. It introduces the metric groups and describes the base metrics, the base vector, and scoring. Finally, an example is provided to show how it works in practice. For a more in-depth look into scoring vulnerabilities, check out the ethical hacking course offered by the InfoSec Institute.

Metric groups

There are three metric groups:

I. Base (used to describe the fundamental information about the vulnerability—its exploitability and impact).
II. Temporal (time is taken into account when the severity of the vulnerability is assessed; for example, the severity decreases when an official patch is available).
III. Environmental (environmental issues are taken into account when the severity of the vulnerability is assessed; for example, the more systems affected by the vulnerability, the higher the severity).

This article is focused on base metrics. Please read A Complete Guide to the Common Vulnerability Scoring System Version 2.0 if you are interested in temporal and environmental metrics.

Base metrics

There are exploitability and impact metrics:

I. Exploitability

a) Access Vector (AV) describes how the vulnerability is exploited:
- Local (L)—exploited only locally
- Adjacent Network (A)—adjacent network access is required to exploit the vulnerability
- Network (N)—remotely exploitable

The more remote the attack, the more severe the vulnerability.

b) Access Complexity (AC) describes how complex the attack is:
- High (H)—a series of steps needed to exploit the vulnerability
- Medium (M)—neither complicated nor easily exploitable
- Low (L)—easily exploitable

The lower the access complexity, the more severe the vulnerability.

c) Authentication (Au) describes the authentication needed to exploit the vulnerability:
- Multiple (M)—the attacker needs to authenticate at least two times
- Single (S)—one-time authentication
- None (N)—no authentication

The lower the number of authentication instances, the more severe the vulnerability.

II. Impact

a) Confidentiality (C) describes the impact of the vulnerability on the confidentiality of the system:
- None (N)—no impact
- Partial (P)—data can be partially read
- Complete (C)—all data can be read

The more affected the confidentiality of the system is, the more severe the vulnerability.

b) Integrity (I) describes the impact of the vulnerability on the integrity of the system:
- None (N)—no impact
- Partial (P)—data can be partially modified
- Complete (C)—all data can be modified

The more affected the integrity of the system is, the more severe the vulnerability.

c) Availability (A) describes the impact of the vulnerability on the availability of the system:
- None (N)—no impact
- Partial (P)—interruptions in system’s availability or reduced performance
- Complete (C)—system is completely unavailable

The more affected the availability of the system is, the more severe the vulnerability.

Please note the abbreviated metric names and values in parentheses. They are used in base vector description of the vulnerability (explained in the next section).

Base vector

Let’s discuss the base vector. It is presented in the following form:

AV:[L,A,N]/AC:[H,M,L]/Au:[M,S,N]/C:[N,P,C]/I:[N,P,C]/A:[N,P,C]

This is an abbreviated description of the vulnerability that brings information about its base metrics together with metric values. The brackets include possible metric values for given base metrics. The evaluator chooses one metric value for every base metric.

Scoring

The formulas for the base score, exploitability, and impact subscores are given in A Complete Guide to the Common Vulnerability Scoring System Version 2.0 [1]. However, there is no need to do the calculations manually. There is a Common Vulnerability Scoring System Version 2 Calculator available. The only thing the evaluator has to do is assign metric values to metric names.
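
Although the calculator does the work for you, it can help to see the arithmetic spelled out. Below is a minimal sketch in Python of the base score calculation, using the metric weights published in the CVSS v2 specification (the function name and layout are mine, for illustration only; the official calculator remains the reference):

# CVSS v2 base metric weights, taken from the CVSS v2 specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}      # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}       # Access Complexity
Au = {"M": 0.45, "S": 0.56, "N": 0.704}      # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}     # Confidentiality/Integrity/Availability impact

def cvss2_base(vector):
    """Parse a base vector such as "AV:N/AC:M/Au:N/C:N/I:P/A:C" and
    return (exploitability subscore, impact subscore, base score)."""
    m = dict(part.split(":") for part in vector.strip("()").split("/"))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * Au[m["Au"]]
    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    f = 0.0 if impact == 0 else 1.176
    base = (0.6 * impact + 0.4 * exploitability - 1.5) * f
    return round(exploitability, 1), round(impact, 1), round(base, 1)

# The worked example later in this article:
print(cvss2_base("AV:N/AC:M/Au:N/C:N/I:P/A:C"))   # prints (8.6, 7.8, 7.8)

Note that CVSS v2 rounds the subscores and the base score to one decimal place, which is why the impact subscore and the base score happen to coincide at 7.8 in the example discussed below.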

Severity level

The base score is dependent on exploitability and impact subscores; it ranges from 0 to 10, where 10 means the highest severity. However, CVSS v2 doesn’t transform the score into a severity level. One can use, for example, the FortiGuard severity level to obtain this information:

FortiGuard severity level    CVSS v2 score
Critical                     9 – 10
High                         7 – 8.9
Medium                       4 – 6.9
Low                          0.1 – 3.9
Info                         0

Putting the pieces together

An example vulnerability in a web application is provided to better understand how the Common Vulnerability Scoring System Version 2.0 works in practice. Please keep in mind that this framework is not limited to web application vulnerabilities.

Cross-site request forgery in the admin panel allows adding a new user and deleting an existing user or all users.

Let’s analyze first the base metrics together with the resulting base vector:

Access Vector (AV): Network (N)
Access Complexity (AC): Medium (M)
Authentication (Au): None (N)

Confidentiality (C): None (N)
Integrity (I): Partial (P)
Availability (A): Complete (C)

Base vector: (AV:N/AC:M/Au:N/C:N/I:P/A:C)

Explanation: The admin has to visit the attacker’s website for the vulnerability to be exploited; that’s why the access complexity is medium. The website of the attacker is somewhere on the Internet, thus the access vector is network. No authentication is required to exploit this vulnerability (the admin only has to visit the attacker’s website). The attacker can delete all users, making the system unavailable for them; that’s why the impact of the vulnerability on the system’s availability is complete. Deleting all users doesn’t delete all data in the system, thus the impact on integrity is partial. Finally, there is no impact on the confidentiality of the system, provided that the added user doesn’t have read permissions by default.

Let’s use the Common Vulnerability Scoring System Version 2 Calculator to obtain the subscores (exploitability and impact) and base score:

Exploitability subscore: 8.6
Impact subscore: 7.8
Base score: 7.8

Let’s transform the score into a severity level according to FortiGuard severity levels:

FortiGuard severity level: High

Summary

This article described an open framework for scoring IT vulnerabilities—the Common Vulnerability Scoring System (CVSS) Version 2.0. Base metrics, the base vector, and scoring were presented. An example approach for transforming CVSS v2 scores into severity levels was described (FortiGuard severity levels). Finally, an example was discussed to see how all these pieces work in practice.

Dawid Czagan is a security researcher for the InfoSec Institute and the Head of Security Consulting at Future Processing.


Bank of Montreal online banking insecurity

I’ve had an account with the Bank of Montreal for almost 50 years.

I’m thinking that I may have to give it up.

BMO’s online banking is horrendously insecure.  The password is restricted to six characters.  It is tied to telephone banking, which means the password is effectively reduced to its telephone-pad numeric equivalent.  You can log in with that numeric equivalent, or with any password that maps to the same digits.  (Case is, of course, completely irrelevant.)
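
(To put that in perspective with a bit of back-of-the-envelope arithmetic: since every character collapses to one of the ten keypad digits, a six-character password has at most 10^6 = 1,000,000 effective values, compared with roughly 62^6, or about 57 billion, for a case-sensitive alphanumeric password of the same length.)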

My online access to the accounts has suddenly stopped working.  At various times, over the years, I have had problems with the access and had to go to the bank to find out why.  The reasons have always been weird, and the process of getting access again convoluted.  At present I am using, for access, the number of a bank debit card that I never use as a debit card.  (Or even an ATM card.)  The card remains in the file with the printed account statements.

Today when I called about the latest problem, I had to run through the usual series of inane questions.  Yes, I knew how long my password had to be.  Yes, I knew my password.  Yes, it was working until recently.  No, it didn’t work on online banking.  No, it didn’t work on telephone banking.

The agent (no, sorry, “service manager,” these days) was careful to point out that he was *not* going to ask me for my password.  Then he set up a conference call with the online banking system, and had me key in my password over the phone.

(OK, it’s unlikely that even a trained musician could catch all six digits from the DTMF tones on one try.  But a machine could do it easily.)

After all that, the apparent reason for the online banking not working is that the government has mandated that all bank cards now be chipped.  So, without informing me, and without sending me a new card, the bank has cancelled my access.  (I suppose that is secure.  If you are not counting on availability, or access to audit information.)

(I also wonder, if that was the reason, why the “service manager” couldn’t just look up the card number and determine that the access had been cancelled, rather than having me try to sign in.)

I’ll probably go and close my account this afternoon.


YASCCL (Yet Another Stupid Computer Crime Law)

Over the years I have seen numerous attempts at addressing the serious problems in computer crime with new laws.  Well-intentioned, I know, but all too many of these attempts are flawed.  The latest is from Nova Scotia:

Bill 61
Commentary

“The definition of cyberbullying, in this particular bill, includes “any electronic communication” that “ought reasonably be expected” to “humiliate” another person, or harm their “emotional well-being, self-esteem or reputation.””

Well, all I can say is that everyone in this forum better be really careful what they say about anybody else.

(Oh, $#!+.  Did I just impugn the reputation of the Nova Scotia legislature?)


Outsourcing, and rebranding, (national) security

I was thinking about the recent trend, in the US, for “outsourcing” and “privatization” of security functions, in order to reduce (government) costs.  For example, we know, from the Snowden debacle, that material he, ummm, “obtained,” was accessed while he was working for a contractor that was working for the NSA.  The debacle also figured in my thinking, particularly the PR fall-out and disaster.

Considering both of these trends, outsourcing and PR, I see an opportunity here.  The government needs to reduce costs (or increase revenue).  At the same time, there needs to be a rebranding effort, in order to restore tarnished images.

Sports teams looking for revenue (or cost offsets) have been allowing corporate sponsors to rename, or “rebrand,” arenas.  Why not allow corporations to sponsor national security programs, and rebrand them?

For example: PRISM has become a catch-phrase for all that is wrong with surveillance of the general public.  Why not allow someone like, say, DeBeers to step in.  For a price (which would offset the millions being paid to various tech companies for “compliance”) it could be rebranded as DIAMOND, possibly with a new slogan like “A database is forever!”

(DeBeers is an obvious sponsor, given the activities of NSA personnel in regard to love interests.)

I think the possibilities are endless, and should be explored.


Hardening guide for Postfix 2.x

  1. Make sure Postfix is running under a non-root account:
    ps aux | grep postfix | grep -v '^root'
  2. Change permissions and ownership on the destinations below:
    chmod 755 /etc/postfix
    chmod 644 /etc/postfix/*.cf
    chmod 755 /etc/postfix/postfix-script*
    chmod 755 /var/spool/postfix
    chown root:root /var/log/mail*
    chmod 600 /var/log/mail*
  3. Using vi, edit the file /etc/postfix/main.cf and make the following changes:
    • Modify the myhostname value to correspond to the external fully qualified domain name (FQDN) of the Postfix server, for example:
      myhostname = myserver.example.com
    • Configure network interface addresses that the Postfix service should listen on, for example:
      inet_interfaces = 192.168.1.1
    • Configure Trusted Networks, for example:
      mynetworks = 10.0.0.0/16, 192.168.1.0/24, 127.0.0.1
    • Configure the SMTP server to masquerade outgoing emails as coming from your DNS domain, for example:
      myorigin = example.com
    • Configure the SMTP domain destination, for example:
      mydomain = example.com
    • Configure which SMTP domains to relay messages to, for example:
      relay_domains = example.com
    • Configure SMTP Greeting Banner:
      smtpd_banner = $myhostname
    • Limit Denial of Service Attacks:
      default_process_limit = 100
      smtpd_client_connection_count_limit = 10
      smtpd_client_connection_rate_limit = 30
      queue_minfree = 20971520
      header_size_limit = 51200
      message_size_limit = 10485760
      smtpd_recipient_limit = 100
  4. Restart the Postfix daemon:
    service postfix restart
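  5. To verify the result, two quick checks (a minimal sketch; both commands ship with Postfix itself):
    postfix check
    postconf -n
    The first validates the configuration and directory permissions and reports any problems; the second prints every parameter that differs from the built-in defaults, so the values set above should all appear in its output.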

The article can also be found at: http://security-24-7.com/hardening-guide-for-postfix-2-x


Hardening guide for BIND9 (Debian platform)

  1. Make sure BIND is running under a non-root account:
    ps aux | grep bind | grep -v '^root'
  2. Change permissions and ownership on the destinations below:
    chown -R root:bind /etc/bind
    chown root:bind /etc/bind/named.conf*
    chmod 640 /etc/bind/named.conf*
  3. Using vi, edit the file /etc/bind/named.conf.options and add the following settings inside the “options” section:
    • Add the line below to replace the DNS version banner:
      version "Secured DNS server";
      Note: In order to test, run the command below:
      dig +short @localhost version.bind chaos txt
    • Add the line below to restrict recursive queries to trusted clients:
      allow-recursion { localhost; 192.168.0.0/24; };
      Note 1: Replace 192.168.0.0/24 with the trusted internal segments and subnet mask.
      Note 2: In order to test, run the command below:
      nslookup www.google.com <BIND_DNS_Server_IP>
    • Add the line below to restrict query origins to trusted clients:
      allow-query { localhost; 192.168.0.0/24; };
      Note: Replace 192.168.0.0/24 with the trusted internal segments and subnet mask.
    • Add the line below to hide the nameserver ID:
      server-id none;
    • Add the line below to restrict which hosts can perform zone transfers:
      allow-transfer { 192.168.1.1; };
      Note: Replace 192.168.1.1 with the trusted DNS server.
    • Add the line below to restrict the DNS server to listen to specific interfaces:
      listen-on port 53 { 127.0.0.1; 192.168.1.1; };
      Note: Replace 192.168.1.1 with the IP address of the DNS server.
  4. Restart the DNS daemon:
    service bind9 restart
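  5. Taken together, the additions from step 3 leave the “options” section looking roughly like this (a sketch reusing the example addresses above; substitute your own subnets and server IPs):
    options {
        // ... existing settings ...
        version "Secured DNS server";
        server-id none;
        allow-recursion { localhost; 192.168.0.0/24; };
        allow-query { localhost; 192.168.0.0/24; };
        allow-transfer { 192.168.1.1; };
        listen-on port 53 { 127.0.0.1; 192.168.1.1; };
    };
    After editing, running named-checkconf will confirm the file still parses before you restart the daemon.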

The article can also be found at: http://security-24-7.com/hardening-guide-for-bind9-debian-platform/


Has your email been “hacked?”

I got two suspicious messages today.  They were identical, supposedly “From” two members of my extended family, and sent to my most often used account, rather than the one I use as a spam trap.  I’ve had some others recently, and thought it a good opportunity to write up something on the general topic of email account phishing.

The headers are no particular help: the messages supposedly related to a Google Docs document, and do seem to come from or through Google.  (Somewhat ironically, at the time the two people listed in these messages might have been sharing information with the rest of us in the family in this manner.  Be suspicious of anything you receive over the Internet, even if you think it might relate to something you are expecting.)

The URLs/links in the message are from TinyURL (which Google wouldn’t use) and, when resolved, do not actually go to Google.  They seem to end up on a phishing site intended to steal email addresses.  It had a Google logo at the top, and asked the user to “sign in” with email addresses (and passwords) from Gmail, Yahoo, Hotmail, and a few other similar sites.  (The number of possible Webmail sites should be a giveaway in itself: Google would only be interested in your Google account.)
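
(Incidentally, one reasonably safe way to see where a shortened link points, without opening it in a browser, is to request only the redirect header.  A sketch, assuming curl is available, with a placeholder short URL standing in for the real one:

curl -sI "https://tinyurl.com/placeholder" | grep -i "^location:"

The Location header shows the destination without the page ever being rendered.)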

Beware of any messages you receive that look like this:

——- Forwarded message follows ——-
Subject:            Important Documents
Date sent:          Mon, 5 Aug 2013 08:54:26 -0700
From:               [a friend or relative]

*Hello,*
*
How are you doing today? Kindly view the documents i uploaded for you using
Google Docs CLICK HERE <hxxp://tinyurl.com/o2vlrxx>.
——- End of forwarded message ——-

That particular site was only up briefly: 48 hours later it was gone.  This tends to be the case: these sites change very quickly.  Incidentally, when I initially tested it with a few Web reputation systems, it was pronounced clean by all.

This is certainly not the only type of email phishing message: a few years ago there were rafts of messages warning you about virus, spam, or security problems with your email account.  Those are still around: I just got one today:

——- Forwarded message follows ——-
From:               “Microsoft HelpDesk” <microsoft@helpdesk.com>
Subject:            Helpdesk Mail Box Warning!!!
Date sent:          Wed, 7 Aug 2013 15:56:35 -0200

Helpdesk Mail Support require you to re-validate your Microsoft outlook mail immediately by clicking: hxxp://dktxxxkgek.webs.com/

This Message is From Helpdesk. Due to our latest IP Security upgrades we have reason to believe that your Microsoft outlook mail account was accessed by a third party. Protecting the security of your Microsoft outlook mail account is our primary concern, we have limited access to sensitive Microsoft outlook mail account features.

Failure to re-validate, your e-mail will be blocked in 24 hours.

Thank you for your cooperation.

Help Desk
Microsoft outlook Team
——- End of forwarded message ——-

Do you really think that Microsoft wouldn’t capitalize its own Outlook product?

(Another giveaway on that particular one is that it didn’t come to my Outlook account, mostly because I don’t have an Outlook account.)

(That site was down less than three hours after I received the email.)

OK, so far I have only been talking about things that should make you suspicious when you receive them.  But what happens if and when you actually follow through, and get hit by these tricks?  Well, to explain that, we have to ask why the bad guys would want to phish for your email account.  After all, we usually think of phishing in terms of bank accounts, and money.

The blackhats phishing for email accounts might be looking for a number of things.  First, they can use your account to send out spam, and possibly malicious spam, at that.  Second, they can harvest email addresses from your account (and, in particular, people who would not be suspicious of a message when it comes “From:” you).  Third, they might be looking for a way to infect or otherwise get into your computer, using your computer in a botnet or for some other purpose, or stealing additional information (like banking information) you might have saved.  A fourth possibility, depending upon the type of Webmail you have, is to use your account to modify or create malicious Web pages, to serve malware, or do various types of phishing.

What you have to do depends on what it was the bad guys were after in getting into your account.

If they were after email addresses, it’s probably too late.  They have already harvested the addresses.  But you should still change your password on that account, so they won’t be able to get back in.  And be less trusting in future.

The most probable thing is that they were after your account in order to use it to send spam.  Change your password so that they won’t be able to send any more.  (In a recent event, with another relative, the phishers had actually changed the password themselves.  This is unusual, but it happens.  In that case, you have to contact the Webmail provider, and get them to reset your password for you.)  The phishers have probably also sent email to all of your friends (and everyone in your contacts or address list), so you’d better send a message around, ‘fess up to the fact that you’ve been had, and tell your friends what they should do.  (You can point them at this posting.)  Possibly in an attempt to prevent you from finding out that your account has been hacked, the attackers often forward your email somewhere else.  As well as changing your password, check to see if there is any forwarding on your account, and also check to see if associated email addresses have been changed.

It’s becoming less likely that the blackhats want to infect your computer, but it’s still possible.  In that case, you need to get cleaned up.  If you are running Windows, Microsoft’s (free!) program Microsoft Security Essentials (or MSE) does a very good job.  If you aren’t, or want something different, then Avast, Avira, Eset, and Sophos have products available for free download, and for Windows, Mac, iPhone, and Android.  (If you already have some kind of antivirus program running on your machine, you might want to get these anyway, because yours isn’t working, now is it?)

(By the way, in the recent incident, both family members told me that they had clicked on the link “and by then it was too late.”  They were obviously thinking of infection, but, in fact, that particular site wasn’t set up to try and infect the computer.  When they saw that the page asked for their email addresses and passwords, it wasn’t too late yet: if they had stopped at that point, and not entered their email addresses and passwords, nothing would have happened!  Be aware, and a bit suspicious.  It’ll keep you safer.)

When changing your password, or checking to see if your Web page has been modified, be very careful, and maybe use a computer that is protected a bit better than yours is.  (Avast is very good at telling you if a Web page is trying to send you something malicious, and most of the others do as well.  MSE doesn’t work as well in this regard.)  Possibly use a computer that uses a different operating system: if your computer uses Windows, then use a Mac; if your computer is a Mac, use an Android tablet or something like that.  Usually (though not always) those who set up malware pages are only after one type of computer.


Click on everything?

You clicked on that link, didn’t you?  I’m writing a posting about malicious links in postings and email, and you click on a link in my posting.  How silly is that?

(No, it wouldn’t have been dangerous, in this case.  I disabled the URL by “x”ing out the “tt” in “http” (which is pretty standard practice in malware circles), and further “x”ed out a couple of the letters in the URL.)


The Biggest Gap in Information Security is…?

As a person who’s committed to helping raise awareness in the security community as a whole, I’ve often found myself asking this question. While there are several issues that I think contribute to the state of information security today, I’m going to outline a few of the major ones.

One major problem that spans every industry group, from government to finance all the way over to retail, is the massive amount of data stored, the large number of devices to manage and, frankly, not enough people to do it all. Or not enough people with the appropriate level of security skills to do it. I recently had a student in an Ethical Hacking class who asked me if I would be open to discussing some things in private with him concerning some issues he had at work. During dinner he confided in me that he sees his job becoming more and more impossible with all the security requirements. He let me know that he had recently completed a penetration test within his company and felt he didn’t really get anything out of it. My first question was how many nodes were in the scope of the test. His response was 20,000. So naturally my next question was how big his pen test team was. To that he looked at me blankly and said, “It was just me.” My next question was how long he had to complete the test. His reply was 3 days. This shocked me greatly, and I candidly let this individual know that with a scope that big it will usually take one person more than three days just to do proper discovery and recon, which wouldn’t even leave time to start vulnerability discovery, mapping, and exploitation testing/development.  I also informed him that for a job like that I usually deploy 3 people and usually contract a time of 2 to 4 weeks. Keep in mind this young man was a very intelligent and skilled person, but he lacked the experience to pull this off. After more conversation I realized that he himself was responsible for scoping the 3-day time to complete the test.

This brings me to my first main point: I see a trend of corporations and entities placing more security responsibility on individuals without giving them enough resources or training. This person admitted he really didn’t even have the skills to know how long it would take him, and he had based his time estimate on something he found on the web using Google, which was why he was in the class. After the class he emailed me and thanked me for finally giving him the understanding to realize what it would take to successfully complete his internal testing. He drafted a plan for a 4-week test and put in a request for temporary help for the 4-week duration. Two months later he sent me another email and a redacted copy of the penetration test report (after I signed an NDA, of course). I was impressed with his work and let him know that. This demonstrated that even the most intelligent people can become overwhelmed if put into an impossible situation with no tools.

Second is the increasingly swift change in threat models. What would be considered a very secure computer 10 years ago (a basic firewall and up-to-date anti-virus) would be considered a joke today. I can remember when OS patches were mostly just non-security-related bug fixes. If the bug didn’t affect you, you didn’t worry about the patch, since it often broke other things. This way of thinking became the norm, and still exists in some places today. Add to that web-based attack vectors and client-side attacks, and it gets even more detrimental. I watched as Dan Kaminsky wrote himself into the infosec history books with his DNS attack. At the same time I saw one pen test customer after the other totally ignore it. Once we were able to exploit this in their environment we usually got responses like “I thought this mostly affected public/root DNS servers.” The bottom line is DNS is DNS, internal or external. While Dan’s demonstration was impressive, thorough and concise, it left the average IT admin lost in the weeds. As humans, when we don’t truly understand things we typically either do nothing, or do the wrong things. A lot of the media coverage of this vulnerability focused mostly on the public-side threat. So from a surface look, it appeared to be something for “others” to worry about. Within weeks of that presentation, new mobile device threats, new Adobe Reader threats, and many other common application vulnerabilities were identified. With all these “critical” things identified and disclosed within weeks of each other, it is apparent why some security professionals feel overwhelmed and behind the curve! Throw in the fact that I’m learning from clients and students alike that they’re now expected to be able to perform forensics investigations, and the weeds get deeper.

The last thing I want to point out is a trend I’ve noticed in recent years. The gap between what I like to call the “elite” of the information security world and the average IT admin or average whitehat/security professional is bigger than it’s ever been. A comment I’ve often heard is, “I went to Black Hat and I was impressed with all of what I witnessed, but I don’t truly understand how it works or what to really do about it.” I think part of this is due to the fact that some in the information security community assume their audience should have a certain level of knowledge and refuse to back off that stance.

Overall, I think the true gap is in knowledge. Oftentimes individuals are not even sure what knowledge is required to perform their jobs.  Check back soon, as I’ll be sharing some ideas on how to address this problem.

Keatron Evans, one of the two lead authors of “Chained Exploits: Advanced Hacking Attacks From Start to Finish”, is a Senior Instructor and Training Services Director at the InfoSec Institute.
