CyberSec Tips: Email – Spam – Phishing – example 1

Phishing is pretty constant these days.  One tip for identifying phishing messages is to ask whether you even have an account at that particular bank.  Unfortunately, a lot of people online do have accounts with PayPal, so PayPal is becoming a favourite with phishers.  You’ll probably get a message something like this:

Subject: Your account access has been limited
From: service@paypal.co.uk <notice@paypal6.co.uk>

(You might think twice if you have an account with Paypal in the United States, but this domain is in the UK.)

> PayPal is constantly working to ensure security by regularly screening the
>accounts in our system. We recently reviewed your account, and we need more
>information to help us provide you with secure service. Until we can
> collect  this information, your access to sensitive account features will be
> limited. We would like to restore your access as soon as possible, and we
> apologize     for the inconvenience.

>    Why is my account access limited?

>    Your account access has been limited for the following reason(s):

> November 27, 2013: We would like to ensure that your account was not
> accessed by an unauthorized third party. Because protecting the security of
> your account is our primary concern, we have limited access to sensitive
> PayPal account features. We understand that this may be an inconvenience but
> please understand that this temporary limitation is for your protection.

>    Case ID Number: PP-197-849-152

>You must click the link below and enter your password for email on the following page to review your account. hxxp://dponsk.ru/wp-admins/.pay/

> Please visit the hxxp://dponsk.ru/wp-admins/.pay Resolution Center and
> complete the Steps to Remove Limitations.

Sounds official, right?  But notice that the URLs given have nothing to do with PayPal.  Also notice, given the .ru domain, that they are in Russia.  Don’t click on those links.  Neither PayPal nor anybody else is going to send you this type of message these days.


CyberSec Tips: Email – Spam – Fraud – example 2

Another advance fee/419 fraud is the lottery.

> Subject: Dear User
> To: Recipients <info@notizia348.onmicrosoft.com>
> From: Alexander brown <info@notizia348.onmicrosoft.com>

Again, your email address, which supposedly “won” this lottery, is missing: this message is being sent to many people.  (If you really had won millions, don’t you think they’d take a bit more care getting it to you?)

> Dear Internet User,
>  We are pleased to inform you again of the result of the Internet Promotional
>  Draws. All email addresses entered for this promotional draws were randomly
>  inputted from an internet resource database using the Synchronized
> Data Collective Balloting Program.

Sounds impressive.  But it really doesn’t mean anything.  In the first place, you never entered.  And why would anyone set up a lottery based simply on random email sent around the net?  There is no benefit to anyone in that, not even as a promotion.

>  This is our second letter to you. After this automated computer ballot,your
>  email address was selected in Category A with Ref Number: GTL03-2013 and
>  E-Ticket Number: EUB/8974IT,this qualifies you to be the recipient of t
> he grand prize award sum of (US$2,500,000.00) Two Million, Five Hundred Thousand
> United States Dollars.

This is interesting: it presents still more impressive stuff–that really has no meaning.  It starts by saying this is the second message to you, implying that you missed the first.  This is intended to make you anxious, and probably a bit less questioning about things.  Watch out for anything that tries to rush or push you.

The numbers, of course, are meant to sound official, but are meaningless.

>  The payout of this cash prize to you will be subject to the final validations
>  and satisfactory report that you are the bona fide owner of the winning email
>  address. In line with the governing rules of claim, you are requ
> ired to establish contact with your designated claims agent via email or
> telephone with the particulars below:
>  Enquiry Officer: Mr. Samuel Trotti
> Phone: +39 3888146161
> Email: trottioffice@aim.com

Again, note that the person you are to contact is not the one who sent the message (nor even in the same domain).

>  You may establish contact with the Enquiry Officer via the e-mail address above
>  with the information’s necessary: Name:, Address:, Phone:, Cell Phone:, Email:,
>  Alternative Email:, Occupation:, Ref Number and E-Ticket Number. All winnings
>  must be claimed within 14 days from today. After this date all unclaimed funds
>  would be included in the next stake. Remember to quote your reference
>  information in all correspondence with your claims agent.

This is interesting: the amount of information they ask from you means that this might not simply be advance fee fraud, but they might be doing phishing and identity theft, as well.


CyberSec Tips: Email – Spam – Fraud – example 1

A lot of the advance fee fraud (also called 419 or Nigerian scams) these days say you’ve been named in a will:

> Subject: WILL EXECUTION!!!
> To: Recipients <clifordchance08@cliffordchance854.onmicrosoft.com>
> From: Clifford Chance <clifordchance08@cliffordchance854.onmicrosoft.com>

Note in this case that the message is sent “to” the person who sent it.  This is often an indication that many people have been sent the same message by being “blind” copied on it.  In any case, it wasn’t sent specifically to you.

> Late Mr.Robert Adler bequeathed US$20,500,000.00 USD, to you in his will.More
> info,contact your attorney(Clifford Chance Esq) via email
> address:clf.chance@hotmail.com  Tell+44-871-974-9198

This message doesn’t tell you very much: sometimes they have a reference to a recent tragic event.

Note also that the email address you are supposed to contact is not the same address that sent the message.  This is always suspicious.  (So is giving a phone number.)

If you look into the headers, there are more oddities:

> From: Clifford Chance <clifordchance08@cliffordchance854.onmicrosoft.com>
> Reply-To: <clf.chance@hotmail.com>
> Message-ID: <XXXX@SINPR02MB153.apcprd02.prod.outlook.com>

There are not only three different email addresses, but three different domains.  Microsoft owns Hotmail, and Hotmail became Outlook, so it’s possible, but it’s still a bit odd.
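Checking for this kind of From/Reply-To domain mismatch is easy to automate.  A minimal sketch in Python, using the headers from the example above (this is an illustration of the one check, not a complete spam filter):

```python
# Sketch: flag messages whose From and Reply-To headers use different
# domains -- a common fraud tell, as in the example above.
from email import message_from_string
from email.utils import parseaddr

raw = """From: Clifford Chance <clifordchance08@cliffordchance854.onmicrosoft.com>
Reply-To: <clf.chance@hotmail.com>
Subject: WILL EXECUTION!!!

(message body)
"""

msg = message_from_string(raw)

def domain(header_value):
    """Extract the domain part of an address header."""
    _, addr = parseaddr(header_value or "")
    return addr.rpartition("@")[2].lower()

from_dom = domain(msg["From"])
reply_dom = domain(msg["Reply-To"])

if reply_dom and reply_dom != from_dom:
    print(f"Suspicious: From domain {from_dom!r} != Reply-To domain {reply_dom!r}")
```

A real filter would also compare the envelope sender and any contact addresses in the body, but even this one-header comparison catches the message shown here.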


REVIEW: “Debug It: Find, Repair, and Prevent Bugs in Your Code”, Paul Butcher

BKDEBGIT.RVW   20130122

“Debug It: Find, Repair, and Prevent Bugs in Your Code”, Paul Butcher, 2009, U$34.95/C$43.95, 978-1-93435-628-9
%A   Paul Butcher paul@paulbutcher.com
%C   2831 El Dorado Pkwy, #103-381 Frisco, TX 75033
%D   2009
%G   978-1-93435-628-9 1-93435-628-X
%I   Pragmatic Bookshelf
%O   U$34.95/C$43.95 sales@pragmaticprogrammer.com 800-699-7764
%O  http://www.amazon.com/exec/obidos/ASIN/193435628X/robsladesinterne
%O   http://www.amazon.co.uk/exec/obidos/ASIN/193435628X/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/193435628X/robsladesin03-20
%O   Audience n- Tech 2 Writing 1 (see revfaq.htm for explanation)
%P   214 p.
%T   “Debug It: Find, Repair, and Prevent Bugs in Your Code”

The preface states that there are lots of books in the market that teach development and few that teach debugging.  (In my experience, a great many development books include debugging advice, so I’m not sure where the author’s perception comes from.)  The work is structured around a core method for debugging: reproduce, diagnose, fix, and reflect.

Part one presents the basic technique.  Chapter one repeats the description of this core method.  Chapter two encourages the reproduction of the bug.  (This can be more complex than the author lets on.  I have a netbook with some bug in the hibernation function.  Despite constant observation over a period of three and a half years, I’ve yet to find a combination of conditions that reproduces the failure, nor one that prevents it.)  Some of the suggestions given are useful, if pedestrian, while others are pretty pointless.  (Butcher does not address the rather thorny issue of using “real” data for testing.)  In terms of diagnosis, in chapter three, there is limited description of process, but lots of good tips.  The same is true of fixing, in chapter four.  (I most definitely agree with the recommendation to fix underlying causes, rather than effects.)  Reflection, the topic of chapter five, is limited to advice that the problem be considered even after you’ve fixed it.

Part two explores the larger picture.  Chapter six examines bug tracking systems, and eliciting feedback from users and support staff.  Chapter seven advises on trying to address the bugs, but concentrates on “fix early,” with little discussion of priorities or ranking systems.

Part three, entitled “Debug Fu,” turns to related and side issues.  The “Special Cases” in chapter eight seem to be fairly common: software already released, compatibility issues, and “heisenbugs” that disappear when you try to track them.  Chapter nine, on the ideal debugging environment, is about as practical as most such exercises.  “Teach Your Software to Debug Itself” in chapter ten seems confined to a few specific cases.  Chapter eleven notes some common problems in development teams and structures.

The advice in the book is good, and solid, but not surprising to anyone with experience.  Novices who have not considered debugging much will find it useful.

copyright, Robert M. Slade   2013   BKDEBGIT.RVW   20130122


BadBIOS

In recent days there has been much interest in the “BadBIOS” infection being reported by Dragos Ruiu.  (The best overview I’ve seen has been from Naked Security.)  But to someone who has lived through several viral myths and legends, parts of it sound strange.

  • It is said to infect the low-level system firmware of your computer, so it can’t be removed or disabled simply by rebooting.

These things, of course, have been around for a while, so that isn’t necessarily wrong.  However, BIOS infectors never became a major vector.

  • It is said to include components that work at the operating system level, so it affects the high-level operation of your computer, too.
  • It is said to be multi-platform, affecting at least Windows, OS X, and OpenBSD systems.

This sounds a bit odd, but we’ve had cross-platform stuff before.  It never became a major problem either.

  • It is said to prevent infected systems being booted from CD drives.

Possible: we’ve seen similar effects over the years, both intentionally and un.

  • It is said to spread itself to new victim computers using Software Defined Radio (SDR) program code, even with all wireless hardware removed.

OK, it’s dangerous to go out on a limb and say something can’t happen when you haven’t seen the details, but I’m calling bullshit on this one.  Not that I think someone couldn’t create a communications channel without the hardware: anything the hardware guys can do the software guys can emulate, and vice versa.  However, I can’t see getting an infection channel this way, at least without some kind of minimal infection first.  (It is, of course, possible that the person doing the analysis made a mistake in what they observed, or in the reporting of it.)

  • It is said to spread itself to new victim computers using the speakers on an infected device to talk to the microphone on an uninfected one.

As above.

  • It is said to infect simply by plugging in a USB key, with no other action required.

We’ve seen that before.

  • It is said to infect the firmware on USB sticks.

Well, a friend has built a device to blow off dangerous firmware on USB sticks, so I don’t see that this would present any problem.

  • It is said to render USB sticks unusable if they aren’t ejected cleanly; these sticks work properly again if inserted into an infected computer.

Reminds me somewhat of the old “fast infectors” of the early 90s.  They had unintended effects that actually made the infections easy to remove.

  • It is said to use TTF (font) files, apparently in large numbers, as a vector when spreading.

Don’t know details of the internals of TTF files, but they should certainly have enough space.

  • It is said to block access to Russian websites that deal with reflashing software.

Possible, and irrelevant unless we find out what is actually true.

  • It is said to render any hardware used in researching the threat useless for further testing.

Well, anything that gets reflashed is likely to become unreliable and untrustworthy …

  • It is said to have first been seen more than three years ago on a Macbook.

And it’s taken three years to get these details?  Or get a sample to competent researchers?  Or ask for help?  This I find most unbelievable.

In sum, then, I think this might be possible, but I strongly suspect that it is either a promotion for PacSec, or a promo for some presentation on social engineering.

 


Someone always checks up on you

I would like to start by thanking Smit Bharatkumar Shah from http://about.me/smitbshah for bringing to our attention that our site had a potential security vulnerability that could be used by malicious attackers to perform phishing and/or clickjacking attacks. With his help we were able to prevent this attack from occurring. No customers have been affected by this issue.

Our ScanMyServer.com service has been providing security scan reports and vulnerability information for sites from all over the world; but we neglected to do one small thing: scan our own web site with the same service. If we had, ScanMyServer.com would have warned us of the potential issue. How embarrassing is that?!

We have checked our logs for any sign that the vulnerability had been exploited or our customers misused, but nothing turned up. Due to the nature of this issue, any attack would have been recorded in the logs.

The solution for the above-mentioned vulnerability is a simple two-step fix:
1) Run:
a2enmod headers

2) Add to /etc/apache2/conf.d/security the following line:
Header always append X-Frame-Options SAMEORIGIN
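Once the module is enabled and Apache reloaded, you can confirm the header is actually being sent. A minimal sketch of the check (the header dictionaries here stand in for a real HTTP response):

```python
# Sketch: decide from a response's headers whether framing is restricted,
# i.e. whether the X-Frame-Options fix above is in effect.
def frame_protected(headers):
    """True if X-Frame-Options is DENY or SAMEORIGIN (case-insensitive)."""
    value = next((v for k, v in headers.items()
                  if k.lower() == "x-frame-options"), "")
    return value.strip().upper() in ("DENY", "SAMEORIGIN")

# After the Apache fix, a response should carry the header:
fixed = {"Content-Type": "text/html", "X-Frame-Options": "SAMEORIGIN"}
unfixed = {"Content-Type": "text/html"}
print(frame_protected(fixed))    # header present: framing restricted
print(frame_protected(unfixed))  # header absent: clickjacking risk
```

In practice you would feed this the headers from an actual request (e.g. `urllib.request.urlopen(url).headers`) against your own site.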

If any of you finds any other issues in our site, please contact us at support@beyondsecurity.com and we will be happy to credit you with the find. Thanks for making our service better!


It’s What’s on the Inside that Counts

The last time I checked, the majority of networking and security professionals were still human.

We all know that the problem with humans is that they sometimes exhibit certain behaviors that can lead to trouble – if that wasn’t the case we’d probably all be out of a job! One such behavior is obsession.

Obsession can be defined as an idea or thought that continually preoccupies or intrudes on a person’s mind. I’ve worked with a number of clients who have had an obsession that may, as bizarrely as it seems, have had a negative impact on their information security program.

The obsession I speak of is the thought of someone “breaking in” to their network from the outside.

You’re probably thinking to yourself, how on earth can being obsessed with protecting your network from external threats have a negative impact on your security? If anything it’s probably the only reason you’d want a penetration test in the first place! I’ll admit, you’re correct about that, but allow me to explain.

Every organization has a finite security budget. How they use that budget is up to them, and this is where the aforementioned obsession can play its part. If I’m a network administrator with a limited security budget and all I think about is keeping people out of my network, my shopping list will likely consist of edge firewalls, web-application firewalls, IDS/IPS and a sprinkling of penetration testing.

If I’m a pen tester working on behalf of that network administrator I’ll scan the network and see a limited number of open ports thanks to the firewall, trigger the IPS, have my SQL injection attempts dropped by the WAF and generally won’t be able to get very far. Then my time will be up, I’ll write a nice report about how secure the network is and move on. Six or twelve months later, I’ll do exactly the same test, find exactly the same things and move on again. This is the problem. It might not sound like a problem, but trust me, it is. Once we’ve gotten to this point, we’ve lost sight of the reason for doing the pen test in the first place.

The test is designed to be a simulation of an attack conducted by a malicious hacker with eyes only for the client. If a hacker is unable to break into the network from the outside, chances are they won’t wait around for a few months and try exactly the same approach all over again. Malicious hackers are some of the most creative people on the planet. If we really want to do as they do, we need to give our testing a creativity injection. It’s our responsibility as security professionals to do this, and encourage our clients to let us do it.

Here’s the thing: because both pen testers and clients have obsessed over hackers breaking into stuff for so long, we’ve actually gotten a lot better at stopping them from doing so. That’s not to say that there will never be a stray firewall rule that gives away a little too much skin, or a hastily written piece of code that doesn’t validate input properly, but generally speaking “breaking in” is no longer the path of least resistance at many organizations – and malicious hackers know it. Instead, “breaking out” of a network is the new route of choice.

While everyone has been busy fortifying defenses on the way in to the network, traffic on the way out is seldom subject to such scrutiny – making it a very attractive proposition to an attacker. Of course, the attacker still has to get themselves into position behind the firewall to exploit this – but how? And how can we simulate it in a penetration test?

What the Pen Tester sees

The Whole Picture

On-Site Testing

There is no surer way of getting on the other side of the firewall than heading to your client’s office and plugging directly into their network. This isn’t a new idea by any means, but it’s something that’s regularly overlooked in favor of external or remote testing. The main reason for this, of course, is the cost. Putting up a tester for a few nights in a hotel and paying travel expenses can put additional strain on the security budget. However, doing so is a hugely valuable exercise for the client. I’ve tested networks from the outside that have shown little room for enumeration, let alone exploitation. But once I headed on-site and came at those networks from a different angle, the angle no one ever thinks of, I had trouble believing they were the same entity.

To give an example, I recall doing an on-site test for a client who had just passed an external test with flying colors. Originally they had only wanted the external test, which was conducted against a handful of IPs. I managed to convince them that, in their case, the internal test would provide additional value. I arrived at the office about an hour and a half early and sat out in the parking lot waiting to go in. I fired up my laptop and noticed a wireless network secured with WEP; the SSID was also the name of the client. You can probably guess what happened next. Four minutes later I had access to the network, and was able to compromise a domain controller via a flaw in some installed backup software. All of this without leaving the car. Eventually, my point of contact arrived and said, “So are you ready to begin, or do you need me to answer some questions first?” The look on his face when I told him that I’d actually already finished was one that I’ll never forget. Just think: had I only performed the external test, I would have been denied that pleasure. Oh, and of course I would never have picked up on the very insecure wireless network, which is kind of important too.

This is just one example of the kind of thing an internal test can uncover that wouldn’t have even been considered during an external test. Why would an attacker spend several hours scanning a network range when they could just park outside and connect straight to the network?

One of my favorite on-site activities is pretending I’m someone with employee level access gone rogue. Get on the client’s standard build machine with regular user privileges and see how far you can get on the network. Can you install software? Can you load a virtual machine? Can you get straight to the internet, rather than being routed through a proxy? If you can, there are a million and one attack opportunities at your fingertips.

The majority of clients I’ve performed this type of test for hugely overestimated their internal security. It’s well documented that the greatest threat comes from the inside, either on purpose or by accident. But of course, everyone is too busy concentrating on the outside to worry about what’s happening right in front of them.

Good – Networks should be just as hard to break out of as they are to break in to.

Fortunately, some clients are required to have this type of testing, especially those in government circles. In addition, several IT security auditing standards require a review of internal networks. The depth of these reviews is sometimes questionable though. Auditors aren’t always technical people, and often the review will be conducted against diagrams and documents of how the system is supposed to work, rather than how it actually works. These are certainly useful exercises, but at the end of the day a certificate with a pretty logo hanging from your office wall won’t save you when bad things happen.

Remote Workers

Having a remote workforce can be a wonderful thing. You can save a bunch of money by not having to maintain a giant office and the associated IT infrastructure. The downside is that in many organizations, the priority is getting people connected and working, rather than properly enforcing security policy. The fact is that if you allow someone to connect remotely into the heart of your network with a machine that you do not have total control over, your network is about as secure as the internet. You are in effect extending your internal network out past the firewall to the unknown. I’ve seen both ends of the spectrum, from an organization that would only allow people to connect in using routers and machines that it configured and installed, to an organization that provided a link to a VPN client and said “get on with it”.

I worked with one such client who was starting to rely on remote workers more and more, and had recognized that this could introduce a security problem. They arranged for me to visit the homes of a handful of employees and see if I could somehow gain access to the network’s internal resources. The first employee I visited used his own desktop PC to connect to the network. He had been issued a company laptop, but preferred the big screen, keyboard and mouse that were afforded to him by his desktop. The machine had no antivirus software installed, no client firewall running and no disk encryption. This was apparently because all of these things slowed it down too much. Oh, but it did have a peer-to-peer file sharing application installed. No prizes for spotting the security risks here.

In the second home I visited, I was pleased to see the employee using her company issued XP laptop. Unfortunately she was using it on her unsecured wireless network. To demonstrate why this was a problem, I joined my testing laptop to the network, fired up a Metasploit session and hit the IP with my old favorite, the MS08-067 NetAPI32.dll exploit module. Sure enough, I got a shell, and was able to pivot my way into the remote corporate network. It was at this point that I discovered the VPN terminated in a subnet with unrestricted access to the internal server subnet. When I pointed out to the client that there really should be some sort of segregation between these two areas, I was told that there was. “We use VLAN’s for segregation”, came the response. I’m sure that everyone reading this will know that segregation using VLAN’s, at least from a security point of view, is about as useful as segregating a lion from a Chihuahua with a piece of rice paper. Ineffective, unreliable and will result in an unhappy ending.

Bad – The VPN appliance is located in the core of the network.

Social Engineering

We all know that this particular activity is increasing in popularity amongst our adversaries, so why don’t we do it more often as part of our testing? Simply put, a lot of the time this comes down to politics. Social engineering tests are a bit of a touchy subject at some organizations, which fear a legal backlash if they do anything to blatantly demonstrate that their own people are subject to the same flaws as the seven billion others on the planet. I’ve been in scoping meetings where, as soon as the subject of social engineering comes up, I’m stared at harshly and told in no uncertain terms, “Oh, no way, that’s not what we want, don’t do that.” But why not do it? Don’t you think a malicious hacker would? You’re having a pen test, right? Do you think a malicious hacker would hold off on social engineering because they haven’t gotten your permission to try it? Give me a break.

On the other hand, I’ve worked for clients who have recognized the threat of social engineering as one of the greatest to their security, and relished the opportunity to have their employees tested.  Frequently, these tests result in a greater than 80% success rate.  So how are they done?

Well, they usually start off with the tester registering a domain name which is extremely similar to the client’s. Maybe with one character different, or a different TLD (“.net” instead of “.com” for example).

The tester’s next step would be to set up a website that heavily borrows CSS code from the client’s site. All it needs is a basic form with username and password fields, as well as some server side coding to email the contents of the form to the tester upon submission.

With messages like this one in an online meeting product, it’s no wonder social engineering attacks are so successful.

Finally, the tester will send out an email with some half-baked story about a new system being installed, or special offers for the employee “if you click this link and login”.  Then they sit back and wait for the responses to come in.  Follow these basic steps, and within a few minutes you’ve got a username, a password, and employee-level access.  Now all you have to do is find a way to use that to break out of the network, which won’t be too difficult, because everyone will be looking the other way.
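The lookalike domain in the first step is also something defenders can screen for mechanically. A minimal sketch using edit distance (here `example.com` is a stand-in for the real corporate domain, and the threshold of 3 is an arbitrary choice for illustration):

```python
# Sketch: flag sender domains within a small edit distance of your own
# domain -- the one-character-off / different-TLD trick described above.
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_alike(domain, real="example.com", threshold=3):
    """Different from the real domain, but only just: a phishing tell."""
    return domain != real and edit_distance(domain, real) <= threshold

print(looks_alike("examp1e.com"))  # one character swapped
print(looks_alike("example.net"))  # different TLD
print(looks_alike("example.com"))  # the genuine domain itself
```

A production check would work on the registrable part of the domain and watch for homoglyphs too; this only shows the core idea.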

Conclusion

The best penetration testers out there are those who provide the best value to the client. This doesn’t necessarily mean the cheapest or quickest. Instead it’s those who make the most effective use of their relatively short window of time, and any other limitations they face to do the job right. Never forget what that job is, and why you are doing it. Sometimes we have to put our generic testing methodologies aside and deliver a truly bespoke product. After all, there is nothing more bespoke than a targeted hacking attack, which can come from any direction. Even from the inside.


Risk management and security theatre

Bruce Schneier is often outrageous these days, but generally worth reading.  In a piece for Forbes in late August, he made the point that, due to fear and the extra trouble caused by TSA regulations, more people were driving rather than flying, and, thus, more people were dying.

“The inconvenience of extra passenger screening and added costs at airports after 9/11 cause many short-haul passengers to drive to their destination instead, and, since airline travel is far safer than car travel, this has led to an increase of 500 U.S. traffic fatalities per year.”

So, by six years after the event (500 extra deaths per year works out to roughly 3,000, more than the attacks themselves killed), the TSA had killed more US citizens than the terrorists had.  And it continues to kill them.

Given the recent NSA revelations, I suppose this will sound like more US-bashing, but I don’t see it that way.  It’s another example of the importance of *real* risk management, taking all factors into account.


“Poor” decisions in management?

I started reading this article just for the social significance.  You’ve probably seen reports of it: it’s been much in the media.

However, I wasn’t very far in before I came across a statement that seems to have a direct implication to all business management, and, in particular, the CISSP:

“The authors gathered evidence … and found that just contemplating a projected financial decision impacted performance on … reasoning tests.”

As soon as I read that, I flashed on the huge stress we place on cost/benefit analysis in the CISSP exam.  And, of course, that extends to all business decisions: everything is based on “the bottom line.”  Which would seem to imply that hugely important corporate and public policy decisions are made on the worst possible basis and in the worst possible situation.

(That *would* explain a lot about modern business, policy, and economics.  And maybe the recent insanity in the US Congress.)

Other results seem to temper that statement, and, unfortunately, seem to support wage inequality and the practice of paying obscene wages to CEOs and directors: “… low-income people asked to ponder an expensive car repair did worse on cognitive-function tests than low-income people asked to consider cheaper repairs or than higher-income people faced with either scenario.”

But it does make you think …


Google’s “Shared Endorsements”

A lot of people are concerned about Google’s new “Shared Endorsements” scheme.

However, one should give credit where credit is due.  This is not one of Facebook’s functions, where, regardless of what you’ve set or unset in the past, every time they add a new feature it defaults to “wide open.”  If you have been careful with your Google account in the past, you will probably find yourself still protected.  I’m pretty paranoid, but when I checked the Shared Endorsements settings page on my accounts, the “Based upon my activity, Google may show my name and profile photo in shared endorsements that appear in ads” box was unchecked on all of them.  I can only assume that it is because I’ve been circumspect in my settings in the past.


“Identity Theft” of time

I really should know better.

Last night, hoping that, in two hours, Hollywood might provide *some* information on an important topic, even if limited, I watched “Identity Thief,” a movie put out by Universal in 2013, starring Jason Bateman and Melissa McCarthy.

It is important to point out to people that, if someone phones you up and offers you a free service to protect you from identity theft, it is probably not a good idea to give them your name, date of birth, social security/insurance number, credit card and bank account numbers, and basically everything else about you.  This tip is provided in the first thirty seconds of the film.  After that (except for the point that the help law enforcement might be able to give you is limited) it’s all downhill.  The plot is ridiculous (even for a comedy), the characters somewhat uneven, the situations crude, the relationship unlikely, the language profane, and the legalities extremely questionable.

(The best line in the entire movie is: Sandy – “Do you know what a sociopath is?” Diane – “Do they like ribs?”  I know this may not seem funny, but trust me: it gives you a very good idea of how humorous this movie really is.)


The Common Vulnerability Scoring System

Introduction

This article presents the Common Vulnerability Scoring System (CVSS) Version 2.0, an open framework for scoring IT vulnerabilities. It introduces the metric groups, then describes the base metrics, the vector, and scoring. Finally, an example is provided to show how it all works in practice. For a more in-depth look at scoring vulnerabilities, check out the ethical hacking course offered by the InfoSec Institute.

Metric groups

There are three metric groups:

I. Base (used to describe the fundamental information about the vulnerability—its exploitability and impact).
II. Temporal (time is taken into account when the severity of the vulnerability is assessed; for example, the severity decreases when an official patch becomes available).
III. Environmental (environmental issues are taken into account when the severity of the vulnerability is assessed; for example, the more systems affected by the vulnerability, the higher the severity).

This article is focused on base metrics. Please read A Complete Guide to the Common Vulnerability Scoring System Version 2.0 if you are interested in temporal and environmental metrics.

Base metrics

There are exploitability and impact metrics:

I. Exploitability

a) Access Vector (AV) describes how the vulnerability is exploited:
- Local (L)—exploited only locally
- Adjacent Network (A)—adjacent network access is required to exploit the vulnerability
- Network (N)—remotely exploitable

The more remote the attack, the more severe the vulnerability.

b) Access Complexity (AC) describes how complex the attack is:
- High (H)—a series of steps needed to exploit the vulnerability
- Medium (M)—neither complicated nor easily exploitable
- Low (L)—easily exploitable

The lower the access complexity, the more severe the vulnerability.

c) Authentication (Au) describes the authentication needed to exploit the vulnerability:
- Multiple (M)—the attacker needs to authenticate at least two times
- Single (S)—one-time authentication
- None (N)—no authentication

The lower the number of authentication instances, the more severe the vulnerability.

II. Impact

a) Confidentiality (C) describes the impact of the vulnerability on the confidentiality of the system:
- None (N)—no impact
- Partial (P)—data can be partially read
- Complete (C)—all data can be read

The more affected the confidentiality of the system is, the more severe the vulnerability.

b) Integrity (I) describes the impact of the vulnerability on the integrity of the system:
- None (N)—no impact
- Partial (P)—data can be partially modified
- Complete (C)—all data can be modified

The more affected the integrity of the system is, the more severe the vulnerability.

c) Availability (A) describes the impact of the vulnerability on the availability of the system:
- None (N)—no impact
- Partial (P)—interruptions in the system’s availability or reduced performance
- Complete (C)—the system is completely unavailable

The more affected the availability of the system is, the more severe the vulnerability.

Please note the abbreviated metric names and values in parentheses. They are used in the base vector description of the vulnerability (explained in the next section).

Base vector

Let’s discuss the base vector. It is presented in the following form:

AV:[L,A,N]/AC:[H,M,L]/Au:[M,S,N]/C:[N,P,C]/I:[N,P,C]/A:[N,P,C]

This is an abbreviated description of the vulnerability that brings together information about its base metrics and their values. The brackets list the possible metric values for each base metric. The evaluator chooses one metric value for every base metric.

Scoring

The formulas for the base score and the exploitability and impact subscores are given in A Complete Guide to the Common Vulnerability Scoring System Version 2.0 [1]. However, there is no need to do the calculations manually. A Common Vulnerability Scoring System Version 2 Calculator is available. The only thing the evaluator has to do is assign metric values to metric names.
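If you do want to see what the calculator is doing, the base equations are simple enough to sketch in code. Below is a minimal Python sketch of the CVSS v2 base score calculation, using the metric weights and formulas from the Guide; the function name and structure are my own, not part of the standard:

```python
# Minimal sketch of the CVSS v2 base score calculation.
# Metric weights and formulas are taken from "A Complete Guide to the
# Common Vulnerability Scoring System Version 2.0".

WEIGHTS = {
    "AV": {"L": 0.395, "A": 0.646, "N": 1.0},
    "AC": {"H": 0.35, "M": 0.61, "L": 0.71},
    "Au": {"M": 0.45, "S": 0.56, "N": 0.704},
    "C":  {"N": 0.0, "P": 0.275, "C": 0.66},
    "I":  {"N": 0.0, "P": 0.275, "C": 0.66},
    "A":  {"N": 0.0, "P": 0.275, "C": 0.66},
}

def base_score(vector):
    """Compute (base, exploitability, impact) scores, each rounded to
    one decimal, from a base vector such as 'AV:N/AC:M/Au:N/C:N/I:P/A:C'."""
    # Parse the vector into {metric name: metric value}.
    m = dict(part.split(":") for part in vector.split("/"))
    # Look up the numeric weight for each chosen metric value.
    w = {name: WEIGHTS[name][m[name]] for name in WEIGHTS}
    impact = 10.41 * (1 - (1 - w["C"]) * (1 - w["I"]) * (1 - w["A"]))
    exploitability = 20 * w["AV"] * w["AC"] * w["Au"]
    f = 0.0 if impact == 0 else 1.176  # f(Impact) term from the Guide
    base = (0.6 * impact + 0.4 * exploitability - 1.5) * f
    return round(base, 1), round(exploitability, 1), round(impact, 1)
```

For the vector analyzed later in this article, `base_score("AV:N/AC:M/Au:N/C:N/I:P/A:C")` returns `(7.8, 8.6, 7.8)`, matching the calculator’s output.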

Severity level

The base score is dependent on exploitability and impact subscores; it ranges from 0 to 10, where 10 means the highest severity. However, CVSS v2 doesn’t transform the score into a severity level. One can use, for example, the FortiGuard severity level to obtain this information:

FortiGuard severity level   CVSS v2 score
Critical                    9 – 10
High                        7 – 8.9
Medium                      4 – 6.9
Low                         0.1 – 3.9
Info                        0
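The mapping is easy to apply mechanically. A small Python helper (my own naming, assuming the band boundaries listed above and one-decimal CVSS scores) might look like:

```python
def fortiguard_level(score):
    """Map a CVSS v2 base score (0-10, one decimal) to a FortiGuard
    severity level, using the bands from the table above."""
    if score >= 9:
        return "Critical"
    if score >= 7:
        return "High"
    if score >= 4:
        return "Medium"
    if score > 0:
        return "Low"
    return "Info"
```

For example, `fortiguard_level(7.8)` returns `"High"`, which agrees with the worked example later in this article.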

Putting the pieces together

An example vulnerability in a web application is provided to better understand how the Common Vulnerability Scoring System Version 2.0 works in practice. Please keep in mind that this framework is not limited to web application vulnerabilities.

Cross-site request forgery in admin panel allows adding a new user and deleting an existing user or all users.

Let’s first analyze the base metrics, together with the resulting base vector:

Access Vector (AV): Network (N)
Access Complexity (AC): Medium (M)
Authentication (Au): None (N)

Confidentiality (C): None (N)
Integrity (I): Partial (P)
Availability (A): Complete (C)

Base vector: (AV:N/AC:M/Au:N/C:N/I:P/A:C)

Explanation: The admin has to visit the attacker’s website for the vulnerability to be exploited. That’s why the access complexity is medium. The website of the attacker is somewhere on the Internet, so the access vector is network. No authentication is required to exploit this vulnerability (the admin only has to visit the attacker’s website). The attacker can delete all users, making the system unavailable for them. That’s why the impact of the vulnerability on the system’s availability is complete. Deleting all users doesn’t delete all data in the system, so the impact on integrity is partial. Finally, there is no impact on the confidentiality of the system, provided that the added user doesn’t have read permissions by default.

Let’s use the Common Vulnerability Scoring System Version 2 Calculator to obtain the subscores (exploitability and impact) and base score:

Exploitability subscore: 8.6
Impact subscore: 7.8
Base score: 7.8

Let’s transform the score into a severity level according to FortiGuard severity levels:

FortiGuard severity level: High

Summary

This article described an open framework for scoring IT vulnerabilities—the Common Vulnerability Scoring System (CVSS) Version 2.0. Base metrics, the base vector, and scoring were presented. An example way of transforming CVSS v2 scores into severity levels (FortiGuard severity levels) was described. Finally, an example was discussed to see how all these pieces work in practice.

Dawid Czagan is a security researcher for the InfoSec Institute and the Head of Security Consulting at Future Processing.


Bank of Montreal online banking insecurity

I’ve had an account with the Bank of Montreal for almost 50 years.

I’m thinking that I may have to give it up.

BMO’s online banking is horrendously insecure.  The password is restricted to six characters.  It is tied to telephone banking, which means that the password is actually the telephone pad numeric equivalent of your password.  You can use that numeric equivalent or any password you like that fits the same numeric equivalent.  (Case is, of course, completely irrelevant.)

My online access to the accounts has suddenly stopped working.  At various times, over the years, I have had problems with the access and had to go to the bank to find out why.  The reasons have always been weird, and the process of getting access again convoluted.  At present I am using, for access, the number of a bank debit card that I never use as a debit card.  (Or even an ATM card.)  The card remains in the file with the printed account statements.

Today when I called about the latest problem, I had to run through the usual series of inane questions.  Yes, I knew how long my password had to be.  Yes, I knew my password.  Yes, it was working until recently.  No, it didn’t work on online banking.  No, it didn’t work on telephone banking.

The agent (no, sorry, “service manager,” these days) was careful to point out that he was *not* going to ask me for my password.  Then he set up a conference call with the online banking system, and had me key in my password over the phone.

(OK, it’s unlikely that even a trained musician could catch all six digits from the DTMF tones on one try.  But a machine could do it easily.)

After all that, the apparent reason for the online banking not working is that the government has mandated that all bank cards now be chipped.  So, without informing me, and without sending me a new card, the bank has cancelled my access.  (I suppose that is secure, if you are not counting on availability, or access to audit information.)

(I also wonder, if that was the reason, why the “service manager” couldn’t just look up the card number and determine that the access had been cancelled, rather than having me try to sign in.)

I’ll probably go and close my account this afternoon.


YASCCL (Yet Another Stupid Computer Crime Law)

Over the years I have seen numerous attempts at addressing the serious problems in computer crime with new laws.  Well-intentioned, I know, but all too many of these attempts are flawed.  The latest is from Nova Scotia:

Bill 61
Commentary

“The definition of cyberbullying, in this particular bill, includes “any electronic communication” that “ought reasonably be expected” to “humiliate” another person, or harm their “emotional well-being, self-esteem or reputation.””

Well, all I can say is that everyone in this forum better be really careful what they say about anybody else.

(Oh, $#!+.  Did I just impugn the reputation of the Nova Scotia legislature?)


Outsourcing, and rebranding, (national) security

I was thinking about the recent trend, in the US, for “outsourcing” and “privatization” of security functions, in order to reduce (government) costs.  For example, we know, from the Snowden debacle, that material he, ummm, “obtained,” was accessed while he was working for a contractor that was working for the NSA.  The debacle also figured in my thinking, particularly the PR fall-out and disaster.

Considering both these trends, outsourcing and PR, I see an opportunity here.  The government needs to reduce costs (or increase revenue).  At the same time, there needs to be a rebranding effort, in order to restore tarnished images.

Sports teams looking for revenue (or cost offsets) have been allowing corporate sponsors to rename, or “rebrand,” arenas.  Why not allow corporations to sponsor national security programs, and rebrand them?

For example: PRISM has become a catch-phrase for all that is wrong with surveillance of the general public.  Why not allow someone like, say, DeBeers to step in.  For a price (which would offset the millions being paid to various tech companies for “compliance”) it could be rebranded as DIAMOND, possibly with a new slogan like “A database is forever!”

(DeBeers is an obvious sponsor, given the activities of NSA personnel in regard to love interests.)

I think the possibilities are endless, and should be explored.


Hardening guide for Postfix 2.x

  1. Make sure that Postfix is running under a non-root account:
    ps aux | grep postfix | grep -v '^root'
  2. Change permissions and ownership on the destinations below:
    chmod 755 /etc/postfix
    chmod 644 /etc/postfix/*.cf
    chmod 755 /etc/postfix/postfix-script*
    chmod 755 /var/spool/postfix
    chown root:root /var/log/mail*
    chmod 600 /var/log/mail*
  3. Edit the file /etc/postfix/main.cf (with vi or another editor) and make the following changes:
    • Modify the myhostname value to correspond to the external fully qualified domain name (FQDN) of the Postfix server, for example:
      myhostname = myserver.example.com
    • Configure network interface addresses that the Postfix service should listen on, for example:
      inet_interfaces = 192.168.1.1
    • Configure Trusted Networks, for example:
      mynetworks = 10.0.0.0/16, 192.168.1.0/24, 127.0.0.1
    • Configure the SMTP server to masquerade outgoing emails as coming from your DNS domain, for example:
      myorigin = example.com
    • Configure the SMTP domain destination, for example:
      mydomain = example.com
    • Configure which SMTP domains to relay messages to, for example:
      relay_domains = example.com
    • Configure SMTP Greeting Banner:
      smtpd_banner = $myhostname
    • Limit Denial of Service Attacks:
      default_process_limit = 100
      smtpd_client_connection_count_limit = 10
      smtpd_client_connection_rate_limit = 30
      queue_minfree = 20971520
      header_size_limit = 51200
      message_size_limit = 10485760
      smtpd_recipient_limit = 100
  4. Restart the Postfix daemon:
    service postfix restart
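For reference, the main.cf changes from step 3 can be collected into a single fragment.  The values below are just the examples used in the steps above; adjust the hostname, addresses, and domains for your own environment:

```
# /etc/postfix/main.cf -- example values from the steps above; adjust as needed
myhostname = myserver.example.com
inet_interfaces = 192.168.1.1
mynetworks = 10.0.0.0/16, 192.168.1.0/24, 127.0.0.1
myorigin = example.com
mydomain = example.com
relay_domains = example.com
smtpd_banner = $myhostname
# Denial-of-service limits
default_process_limit = 100
smtpd_client_connection_count_limit = 10
smtpd_client_connection_rate_limit = 30
queue_minfree = 20971520
header_size_limit = 51200
message_size_limit = 10485760
smtpd_recipient_limit = 100
```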

The article can also be found at: http://security-24-7.com/hardening-guide-for-postfix-2-x
