Best Email Retention Policy Practices

Email retention policies are no longer just about conserving space on your Exchange server. Today you must take into account how your email retention controls increase or decrease risk to your company.

Pros and Cons of Short and Long Email Retention Policies

Generally speaking, longer email retention policies increase the risk that a security vulnerability or unauthorized user could expose your company’s secrets or embarrassing material. Long policies also increase your company’s exposure to legal examination that focuses on conversations and decisions captured in emails (this is also known as the “paper trail” in an “eDiscovery” process).

Shorter email retention policies help avoid these problems and are cheaper to implement, but they have their own significant disadvantages as well. First, short policies tend to annoy long-term employees and often executives, who rely on old email chains to recollect past decisions and the context in which they were made. Second, short policies may violate federal, state, local and/or industry regulations that require certain types of information to be retained for a minimum period of time – often years!

Best Practices to Develop Your Email Retention Policy

Obviously, you must balance these factors and others when you develop your own email retention policy, but there are a number of best practices that can help you draft and get support for a solid email retention policy. Today, I’ll be covering five practices often used by effective professionals and managers.

Email Retention Policy Best Practice #1: Start With Regulatory Minimums

Your email retention policy should begin by listing the various regulations your company is subject to and the relevant document retention requirements involved with each regulation.

Every industry is regulated differently, and businesses are often subject to different tax, liability and privacy regulations depending on the locations in which they do business. However, some common recommended retention periods include:

If a retention period is not known for a particular type of data, seven years (the minimum IRS recommendation) is often used as a safe common denominator.

Email Retention Policy Best Practice #2: Segment As Necessary To Avoid Keeping Everything For the Legal Maximum

As you can see from the list above, recommended retention periods vary widely even within highly regulated industries. With that in mind, it often pays to segment different types or uses of email into different retention periods to avoid subjecting your entire online email store to the maximum email retention period.

Segmentation by type of content looks something like this:

  • Invoices – 7 years
  • Sales Records – 5 years
  • Petty Cash Vouchers – 3 years

Segmentation by type of use looks something like this:

  • Administrative correspondence (e.g., human resources) – 5 years
  • Fiscal correspondence (e.g., revenue and expenses) – 4 years
  • General correspondence (e.g., customer interactions, internal threads) – 3 years
  • Ephemeral correspondence (e.g., everything else business-related) – 1 year
  • Spam – not retained

Mixed segmentation is also common and looks something like this:

  • Human resources – 7 years
  • Transaction receipts – 3 years
  • Executive email – 2 years
  • Spam – not retained
  • Everything else (i.e., the "default retention policy") – 1 year

The rules and technologies you use to inspect, classify and segment can vary from simple sender- and subject-matching to sophisticated engines that intuit intent and history. (Unfortunately, space does not permit us to examine these technologies here, but trust me – they exist!)
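To make the idea concrete, here is a minimal sketch of rule-based segmentation in Python. The segment names and durations mirror the mixed-segmentation example above; the matching rules and the example.com addresses are purely illustrative, and real classification engines are far more sophisticated.

    # Illustrative rule-based retention tagging; rules are checked in order.
    RETENTION_RULES = [
        # (predicate, segment, retention in years; 0 = do not retain)
        (lambda m: m["from"].endswith("@hr.example.com"), "human resources", 7),
        (lambda m: "receipt" in m["subject"].lower(), "transaction receipts", 3),
        (lambda m: m["from"].endswith("@exec.example.com"), "executive email", 2),
        (lambda m: m.get("spam", False), "spam", 0),
    ]

    def classify(message: dict) -> tuple[str, int]:
        """Return (segment, retention_years) for a message dict with 'from' and 'subject' keys."""
        for predicate, segment, years in RETENTION_RULES:
            if predicate(message):
                return segment, years
        return "default retention policy", 1  # everything else: 1 year

    print(classify({"from": "payroll@hr.example.com", "subject": "W-2 forms"}))
    # -> ('human resources', 7)

Notice that the retention schedule itself is just data; the hard part is the predicates. Keeping the schedule as data makes it easy to update when legal revises the policy.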

Email Retention Policy Best Practice #3: Draft a Real Policy…But Don’t Include What You Won’t Enforce

A written policy, approved by legal counsel and senior management, will give you the requirements and authority to implement all the IT, security and process controls you need. If you haven’t seen a full retention policy yet, please take the time to search the web for a few, such as this template from the University of Wisconsin (Go Badgers! Sorry…proud alum.)

Note that many “email retention policy” documents (including the UW template) cover much more than email! In general, this is OK because a “document policy” gives you what you need to implement an “email policy”, but you’ll want to make a point of talking the “document vs. email” terminology through with your legal team before you finalize your policy.

A good written policy (again, including the UW template) always contains these sections:

  • Purpose: why does this policy exist? If specific regulations informed the creation of this policy, they should all be listed here.
  • Retention time, by segment: how long various types of content or content used in a particular manner must be retained (the UW template segments by type of content). Durations are often listed in years, may include triggers (e.g., “after X”) and may even be “Permanent”.
  • Differences between “paper” and “electronic” documents: ideally, none.
  • What constitutes “destruction”: usually shredding and deleting, often “secure deletion” (e.g., with overwriting) and degaussing of media where applicable.
  • Pause destruction if legal action imminent: your legal department will normally add this for you, but you can show off your legal bona fides by including a clause instructing IT to pause automatic email deletion if the company becomes the subject of a claim or lawsuit (this is also called a “litigation hold”).
  • Who is responsible: typically everyone who touches the documents, often with special roles for certain titles (e.g., “Chief Archivist”) or groups (e.g., “legal counsel”).

Good written policies omit areas that you won’t or can’t support, especially types of segmentation you will not be able to determine or support. Good policies also refer to capabilities and requirements (e.g., offsite archival) rather than specific technologies and processes (e.g., DAT with daily courier shipments).

Email Retention Policy Best Practice #4: Price Preferred Solution and Alternatives By Duration and Segment

Let’s pretend that you have a policy like the following:

  • All email: retain on fast storage for 18 months
  • Purchase transaction emails: also archive to offline storage until 5 years have passed
  • Legal emails: also archive to offline storage until 7 years have passed
  • “Fast storage” = accessible through end users’ email clients via “folders”; normally only individual users can access, but administrators and archival specialists (e.g., the legal team) can access too
  • “Offline storage” = accessible through internal utility and search; only administrators and archival specialists (e.g., the legal team) can access

To price an appropriate solution, you would restate your requirements based on number of users, expected volume of email and expected rate of growth. For example, in a 500-person company where each user averaged 1MB and 100 messages of email a day, there were 5000 additional transaction emails (total 50MB) a day and 100 additional legal emails (total 20MB) a day, and volumes were expected to increase 10% per year, here’s how we might estimate minimum requirements for the next seven years:

  • All email: 18 months x 1MB/day-person x 30 days/month x 500 people = 270GB x 1.8 (10% annual growth compounded over six years) = 486GB email server storage
  • Purchase transaction emails: 5 years x 12 months/year x 30 days/month x 50MB/day = 90GB x 1.8 = 162GB email archive storage
  • Legal emails: 7 years x 12 months/year x 30 days/month x 20MB/day = 50GB x 1.8 = 91GB email archive storage
  • TOTAL: 486GB server + 253GB archive

However, after you’ve priced out your preferred solution, you still need to be prepared to handle alternatives that may result from discussions with legal or your executive team. For example, if the executive team pushes your 18 month blanket retention to 3 years and the legal team “requires” that its emails are always in near-term email storage, how would that change your requirements and pricing?

  • All email: 36 months x 1MB/day-person x 30 days/month x 500 people = 540GB x 1.8 (10% annual growth compounded over six years) = 972GB email server storage
  • Purchase transaction emails: 5 years x 12 months/year x 30 days/month x 50MB/day = 90GB x 1.8 = 162GB email archive storage
  • Legal emails: 7 years x 12 months/year x 30 days/month x 20MB/day = 50GB x 1.8 = 91GB email server storage
  • TOTAL: 1063GB server + 162GB archive (i.e., DOUBLE your real-time storage!)

Long story short, if you can figure out your own rule-of-thumb per-GB price for the various types of storage necessary to support your archiving scheme (as well as licensing considerations, including any per-message or per-type-of-message rules) you’ll be better prepared for “horse trading” later in the approval process.
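For example, here is a back-of-the-envelope sizing sketch in Python that reproduces the estimates above; the per-GB prices are hypothetical placeholders you would replace with your own rule-of-thumb figures.

    # Sizing arithmetic from the scenario above. The 1.8 growth factor
    # approximates 10% annual growth compounded over six years (1.1**6 = 1.77).
    GROWTH = 1.8

    def gb(months: float, mb_per_day: float, days_per_month: int = 30) -> float:
        """Storage in GB for one retention window, including growth headroom."""
        return months * days_per_month * mb_per_day * GROWTH / 1000

    server = gb(18, 500 * 1)                   # all email: 18 months, 500 users x 1MB/day
    archive = gb(5 * 12, 50) + gb(7 * 12, 20)  # transactions (5 yrs) + legal (7 yrs)

    SERVER_PRICE, ARCHIVE_PRICE = 5.00, 0.50   # hypothetical $/GB rules of thumb
    print(f"server: {server:.0f}GB, archive: {archive:.0f}GB")  # 486GB, 253GB
    print(f"rough cost: ${server * SERVER_PRICE + archive * ARCHIVE_PRICE:,.2f}")

Rerunning the numbers for the 3-year alternative is then a matter of changing one argument, which is exactly the agility you want during negotiations.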

Email Retention Policy Best Practice #5: Once You Draft Your Policy, Include Legal Before the Executives

If you’re still reading this, chances are good that you (like me) are a senior IT or security professional, or are perhaps even a manager. If you’ve drafted other IT policies, such as an “acceptable use” policy, your first instinct might be to keep your legal team out of the process until your new policy has snowballed down from your IT-based executive sponsor. This is almost always a mistake.

The main reason legal should be included as soon as you have a draft is that two of the best practices listed above (regulatory minimums and viability of segmentation) are really legal’s call – not yours! You will have saved legal a lot of legwork by researching the main drivers of email retention policy and the technical controls you can use to enforce the policy, but at the end of the day legal will be called upon to defend the company’s decision to keep or toss critical information, so legal will need to assign the final values to your policy limits.

A second reason to include legal before your executives is that you want to present a unified front (as IT and legal) on your maximum retention limits. Once you get into negotiations with your executive team, legal will likely be pushing for even shorter limits (because shorter retention limits the threat of hostile eDiscovery) and the executives will be pushing for even longer limits (because old email serves as their de facto document archive). This puts you (as IT) in the rational middle and gives your policy a good chance of making it through the negotiations relatively unscathed.

The final reason you want to include legal early is that their calls may force you to reprice the options you laid out before you talked to them, and may cause you to take some options off the table. If you reversed the process and got executives to sign off on a solution that got vetoed by legal and sent back to the executive team for a second round of “ask,” I think you know that no one would be happy.

Conclusion: Your Email Retention Policy Will Be Your Own

Given all the different constraints your organization faces and all the different ways your interactions with your legal and executive team could go, it would be impossible for me to predict what any company’s email retention policy would be. However, if you follow these five best practices when you develop your own, you stand a better-than-average chance of drafting an email retention policy that’s sensible, enforceable, and loved by legal and top management alike.


Crafting a Pen Testing Report

You close the lid of your laptop; it’s been a productive couple of days. There are a few things that could be tightened up, but overall the place isn’t doing a bad job. You exchange pleasantries with the people who have begrudgingly given up time to escort you, hand in your visitor’s badge and head for the door. Just as you feel the chill of outside against your skin, you hear a muffled voice in the background.

“Hey, sorry, I forgot to ask, when can we expect the report?”

Sound familiar?

Ugh, the report. Penetration testing’s least favorite cousin, but ultimately, one of the most important.

There are thousands of books written about information security and pen testing. There are hundreds of hours of training courses that cover the penetration testing process. However, I would happily wager that less than ten percent of all the material out there is dedicated to reporting. This, when you consider that you probably spend 40-50% of the total duration of a pen test engagement actually writing the report, is quite alarming.

It’s not surprising, though: teaching someone how to write a report just isn’t as sexy as describing how to craft the perfect buffer overflow, or pivot round a network using Metasploit. I totally get that; even learning how the TCP packet structure works for the nineteenth time sounds like a more interesting topic.

[Cartoon caption: A common occurrence amongst many pen testers – not allowing enough time to produce a decent report.]

No matter how technically able we are as security testers, it is often a challenge to explain a deeply technical issue to someone who may not have the same level of technical skill. We are often guilty of assuming that everyone who works in IT has read the same books, or has the same interests as us. Learning to explain pen test findings in a clear and concise way is an art form, and one that every security professional should take the time to master. The benefits of doing so are great. You’ll develop a better relationship with your clients, who will want to make use of your services over and over again. You’ll also save time and money, trust me. I once drove a 350-mile round trip to go and explain the contents of a penetration test report to a client. I turned up, read some pages of the report aloud with added explanations and then left fifteen minutes later. Had I taken a tiny bit more time clarifying certain issues in my report, I would have saved an entire day of my time and a whole tank of gas. Consider the difference between a diluted finding and a clarified one:

Diluted: “SSH version one should be disabled as it contains high severity vulnerabilities that may allow an attacker already on the network to intercept and decrypt communications, although the risk of an attacker gaining access to the network is very low, so this reduces the severity.”

Clarified: “It is advisable to disable SSH version one on these devices, failure to do so could allow an attacker with local network access to decrypt and intercept communications.”

Why is a penetration test report so important?

Never forget, penetration testing is a scientific process, and like all scientific processes it should be repeatable by an independent party. If a client disagrees with the findings of a test, they have every right to ask for a second opinion from another tester. If your report doesn’t detail how you arrived at a conclusion, the second tester will have no idea how to repeat the steps you took to get there. This could lead to them offering a different conclusion, making you look a bit silly and worse still, leaving a potential vulnerability exposed to the world.

Bad: “Using a port scanner I detected an open TCP port.”

Better: “Using Nmap 5.50, a port scanner, I detected an open TCP port using the SYN scanning technique on a selected range of ports. The command line was: nmap -sS -p 7000-8000 <target>.”

The report is the tangible output of the testing process, and the only real evidence that a test actually took place. Chances are, senior management (who likely approved funding for the test) weren’t around when the testers came into the office, and even if they were, they probably didn’t pay a great deal of attention. So to them, the report is the only thing they have to go on when justifying the expense of the test. Having a penetration test performed isn’t like any other type of contract work. Once the contract is done there is no new system implemented, or no new pieces of code added to an application. Without the report, it’s very hard to explain to someone what exactly they’ve just paid for.

Who is the report for?

While the exact audience of the report will vary depending on the organization, it’s safe to assume that it will be viewed by at least three types of people.

Senior management, IT management and IT technical staff will all likely see the report, or at least part of it. All of these groups will want to get different snippets of information. Senior management simply doesn’t care about, or doesn’t understand, what it means if a payment server encrypts connections using SSL version two. All they want to know is the answer to one simple question: “are we secure – yea or nay?”

IT management will be interested in the overall security of the organization, but will also want to make sure that their particular departments are not the cause of any major issues discovered during testing. I recall giving one particularly damning report to three IT managers. Upon reading it, two of them turned very pale, while the third smiled and said “great, no database security issues then”.

IT staff will be the people responsible for fixing any issues found during testing. They will want to know three things. The name of the system affected, how serious the vulnerability is and how to fix it. They will also want this information presented to them in a way that is clear and organized. I find the best way is to group this information by asset and severity. So for example, “Server A” is vulnerable to “Vulnerability X, Y and Z. Vulnerability Y is the most critical”. This gives IT staff half a chance of working through the list of issues in a reasonable timeframe. There is nothing worse than having to work your way backwards and forwards through pages of report output to try and keep track of vulnerabilities and whether or not they’ve been looked at.

Of course, you could always ask your client how they would like vulnerabilities grouped. After all, the test is really for their benefit and they are the people paying! Some clients prefer to have a page detailing each vulnerability, with affected assets listed under the vulnerability title. This is useful in situations where separate teams may all have responsibilities for different areas of a single asset. For example, the systems team runs the webserver, but the development team writes the code for the application hosted on it.
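Either grouping is easy to produce if you keep findings as structured data rather than prose. A minimal sketch in Python, with made-up findings:

    # Group findings by asset (handy for the sysadmin fixing one box) and by
    # vulnerability (handy when teams own different layers of one asset).
    from collections import defaultdict

    findings = [  # (asset, vulnerability, severity) - illustrative data
        ("Server A", "Outdated OpenSSL", "High"),
        ("Server A", "Weak SSH ciphers", "Medium"),
        ("Server B", "Outdated OpenSSL", "High"),
    ]

    SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}
    by_asset, by_vuln = defaultdict(list), defaultdict(list)
    for asset, vuln, severity in findings:
        by_asset[asset].append((severity, vuln))
        by_vuln[vuln].append(asset)

    for asset, items in sorted(by_asset.items()):
        print(asset)
        for severity, vuln in sorted(items, key=lambda v: SEVERITY_ORDER[v[0]]):
            print(f"  [{severity}] {vuln}")  # most critical first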

Although I’ve mentioned the three most common audiences for pen test reports, this isn’t an exhaustive list. Once the report is handed over to the client, it’s up to them what they do with it. It may end up being presented to auditors, as evidence that certain controls are working. It could be presented to potential customers by the sales team. “Anyone can say their product is secure, but can they prove it? We can, look here is a pen test report”.

Reports might even end up getting shared with the whole organization. It sounds crazy, but it happens. I once performed a social engineering test, the results of which were less than ideal for the client. The enraged CEO shared the report with the whole organization as a way of raising awareness of social engineering attacks. This was made more interesting when I visited that same company a few weeks later to deliver some security awareness training. During my introduction, I explained that my company did security testing and was responsible for the social engineering test a few weeks back. This was greeted with angry stares and snide comments about how I’d gotten them all into trouble. My response was, as always, “better to give me your passwords than a genuine bad guy”.

What should the report contain?

Sometimes you’ll get lucky and the client will spell out exactly what they want to see in the report during the initial planning phase. This includes both content and layout. I’ve seen this happen to extreme levels of detail, such as what font size and line spacing settings should be used. However, more often than not, the client won’t know what they want and it’ll be your job to tell them.

So without further ado, here are some highly recommended sections to include in pen test reports.

  • A Cover Sheet. This may seem obvious, but the details that should be included on the cover sheet can be less obvious. The name and logo of the testing company, as well as the name of the client should feature prominently. Any title given to the test such as “internal network scan” or “DMZ test” should also be up there, to avoid confusion when performing several tests for the same client. The date the test was performed should appear. If you perform the same tests on a quarterly basis this is very important, so that the client or the client’s auditor can tell whether or not their security posture is improving or getting worse over time. The cover sheet should also contain the document’s classification. Agree this with the client prior to testing; ask them how they want the document protectively marked. A penetration test report is a commercially sensitive document and both you and the client will want to handle it as such.
  • The Executive Summary. This needs to be less than a page. I’ve seen some that have gone on for three or four pages and read more like a Jane Austen novel than an abbreviated version of the report’s juicy bits. Don’t mention any specific tools, technologies or techniques used; the readers simply don’t care. All they need to know is what you did, “we performed a penetration test of servers belonging to X application”, what happened, “we found some security problems in one of the payment servers”, and what needs to happen next and why, “you should tell someone to fix these problems and get us in to re-test the payment server; if you don’t, you won’t be PCI compliant and you may get a fine”. The last line of the executive summary should always be a conclusion that explicitly spells out whether the systems tested are secure or insecure, “overall we have found this system to be insecure”. It could even be just a single word.

A bad way to end an executive summary: “In conclusion, we have found some areas where security policy is working well, but other areas where it isn’t being followed at all. This leads to some risk, but not a critical amount of risk.”

A better way: “In conclusion, we have identified areas where security policy is not being adhered to; this introduces risk to the organization, and therefore we must declare the system insecure.”

  • Summary of Vulnerabilities. Group the vulnerabilities on a single page so that at a glance an IT manager can tell how much work needs to be done. You could use tables or charts to make it clearer – but don’t overdo it. Vulnerabilities can be grouped by category (e.g. software issue, network device configuration, password policy), severity or CVSS score – the possibilities are endless. Just find something that works well and is easy to understand.

  • Test Team Details. It is important to record the name of every tester involved in the testing process. This is not just so you and your colleagues can be hunted down should you break something. It’s a common courtesy to let a client know who has been on their network and provide a point of contact to discuss the report with. Some clients and testing companies also like to rotate the testers assigned to a particular set of tests. It’s always nice to cast a different set of eyes over a system. If you are performing a test for a UK government department under the CHECK scheme, including the name of the team leader and any team members is a mandatory requirement.
  • List of the Tools Used. Include versions and a brief description of the function. This goes back to repeatability. If anyone is going to accurately reproduce your test, they will need to know exactly which tools you used.

  • A copy of the original scope of work. This will have been agreed in advance, but reprinting it here for reference purposes is useful.
  • The main body of the report. This is what it’s all about. The main body of the report should include details of all detected vulnerabilities, how you detected each vulnerability, clear technical explanations of how it could be exploited, and the likelihood of exploitation. Whatever you do, make sure you write your own explanations; I’ve lost count of the number of reports that I’ve seen that are simply copy-and-paste jobs from vulnerability scanner output. It makes my skin crawl; it’s unprofessional, often unclear and irrelevant. Detailed remediation advice should also be included. Nothing is more annoying to the person charged with fixing a problem than receiving flaky remediation advice. For example, “Disable SSL version 2 support” does not constitute remediation advice. Explain the exact steps required to disable SSL version 2 support on the platform in question. As interesting as reading how to disable SSL version 2 on Apache is, it’s not very useful if all your servers are running Microsoft IIS. Back up findings with links to references such as vendor security bulletins and CVEs.

Getting the level of detail in a report right is a tricky business. I once wrote a report that was described as “overwhelming” because it was simply too detailed, so on my next test I wrote a less detailed report. This was subsequently rejected because it “lacked detail”. Talk about moving the goalposts. The best thing to do is spend time with the client, learn exactly who the audience will be and what they want to get out of the report.

Final delivery

When a pilot lands an airliner, their job isn’t over. They still have to navigate the myriad of taxiways and park at the gate safely. The same is true of you and your pen test reports; just because it’s finished doesn’t mean you can switch off entirely. You still have to get the report out to the client, and you have to do so securely. Electronic distribution using public key cryptography is probably the best option, but not always possible. If symmetric encryption is to be used, a strong key should be used and must be transmitted out of band. Under no circumstances should a report be transmitted unencrypted. It all sounds like common sense, but all too often people fall down at the final hurdle.
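As one possible approach to the symmetric case, here is a sketch using Python’s third-party cryptography package; the file name is illustrative, and the key must still be delivered out of band.

    # Symmetric encryption of a report file - a sketch assuming the
    # `cryptography` package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # random 32-byte key, base64-encoded
    fernet = Fernet(key)

    with open("pentest_report.pdf", "rb") as fh:       # illustrative filename
        ciphertext = fernet.encrypt(fh.read())
    with open("pentest_report.pdf.enc", "wb") as fh:
        fh.write(ciphertext)

    # Send the .enc file to the client; deliver the key out of band
    # (e.g., read it out over the phone) - never alongside the report.
    print(key.decode())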


It’s What’s on the Inside that Counts

The last time I checked, the majority of networking and security professionals were still human.

We all know that the problem with humans is that they sometimes exhibit certain behaviors that can lead to trouble – if that wasn’t the case we’d probably all be out of a job! One such behavior is obsession.

Obsession can be defined as an idea or thought that continually preoccupies or intrudes on a person’s mind. I’ve worked with a number of clients who have had an obsession that may, as bizarre as it seems, have had a negative impact on their information security program.

The obsession I speak of is the thought of someone “breaking in” to their network from the outside.

You’re probably thinking to yourself, how on earth can being obsessed with protecting your network from external threats have a negative impact on your security? If anything it’s probably the only reason you’d want a penetration test in the first place! I’ll admit, you’re correct about that, but allow me to explain.

Every organization has a finite security budget. How they use that budget is up to them, and this is where the aforementioned obsession can play its part. If I’m a network administrator with a limited security budget and all I think about is keeping people out of my network, my shopping list will likely consist of edge firewalls, web-application firewalls, IDS/IPS and a sprinkling of penetration testing.

If I’m a pen tester working on behalf of that network administrator I’ll scan the network and see a limited number of open ports thanks to the firewall, trigger the IPS, have my SQL injection attempts dropped by the WAF and generally won’t be able to get very far. Then my time will be up, I’ll write a nice report about how secure the network is and move on. Six or twelve months later, I’ll do exactly the same test, find exactly the same things and move on again. This is the problem. It might not sound like a problem, but trust me, it is. Once we’ve gotten to this point, we’ve lost sight of the reason for doing the pen test in the first place.

The test is designed to be a simulation of an attack conducted by a malicious hacker with eyes only for the client. If a hacker is unable to break into the network from the outside, chances are they won’t wait around for a few months and try exactly the same approach all over again. Malicious hackers are some of the most creative people on the planet. If we really want to do as they do, we need to give our testing a creativity injection. It’s our responsibility as security professionals to do this, and encourage our clients to let us do it.

Here’s the thing: because both pen testers and clients have obsessed over hackers breaking into stuff for so long, we’ve actually gotten a lot better at stopping them from doing so. That’s not to say that there will never be a stray firewall rule that gives away a little too much skin, or a hastily written piece of code that doesn’t validate input properly, but generally speaking “breaking in” is no longer the path of least resistance at many organizations – and malicious hackers know it. Instead, “breaking out” of a network is the new route of choice.

While everyone has been busy fortifying defenses on the way in to the network, traffic on the way out is seldom subject to such scrutiny – making it a very attractive proposition to an attacker. Of course, the attacker still has to get themselves into position behind the firewall to exploit this – but how? And how can we simulate it in a penetration test?

[Figure: “What the Pen Tester Sees” vs. “The Whole Picture”]

On-Site Testing

There is no surer way of getting on the other side of the firewall than heading to your client’s office and plugging directly into their network. This isn’t a new idea by any means, but it’s something that’s regularly overlooked in favor of external or remote testing. The main reason for this of course is the cost. Putting up a tester for a few nights in a hotel and paying travel expenses can put additional strain on the security budget. However, doing so is a hugely valuable exercise for the client. I’ve tested networks from the outside that have shown little room for enumeration, let alone exploitation. But once I headed on-site and came at those networks from a different angle, the angle no one ever thinks of, I had trouble believing they were the same entity.

To give an example, I recall doing an on-site test for a client who had just passed an external test with flying colors. Originally they had only wanted the external test, which was conducted against a handful of IPs. I managed to convince them that in their case, the internal test would provide additional value. I arrived at the office about an hour and a half early and sat out in the parking lot waiting to go in. I fired up my laptop and noticed a wireless network secured with WEP; the SSID was also the name of the client. You can probably guess what happened next. Four minutes later I had access to the network, and was able to compromise a domain controller via a flaw in some installed backup software. All of this without leaving the car. Eventually, my point of contact arrived and said, “So are you ready to begin, or do you need me to answer some questions first?” The look on his face when I told him that I’d actually already finished was one that I’ll never forget. Just think, had I only performed the external test, I would have been denied that pleasure. Oh, and of course I would have never picked up on the very insecure wireless network, which is kind of important too.

This is just one example of the kind of thing an internal test can uncover that wouldn’t have even been considered during an external test. Why would an attacker spend several hours scanning a network range when they could just park outside and connect straight to the network?

One of my favorite on-site activities is pretending I’m someone with employee level access gone rogue. Get on the client’s standard build machine with regular user privileges and see how far you can get on the network. Can you install software? Can you load a virtual machine? Can you get straight to the internet, rather than being routed through a proxy? If you can, there are a million and one attack opportunities at your fingertips.

The majority of clients I’ve performed this type of test for hugely overestimated their internal security. It’s well documented that the greatest threat comes from the inside, either on purpose or by accident. But of course, everyone is too busy concentrating on the outside to worry about what’s happening right in front of them.

[Figure caption: Good – networks should be just as hard to break out of as they are to break in to.]

Fortunately, some clients are required to have this type of testing, especially those in government circles. In addition, several IT security auditing standards require a review of internal networks. The depth of these reviews is sometimes questionable though. Auditors aren’t always technical people, and often the review will be conducted against diagrams and documents of how the system is supposed to work, rather than how it actually works. These are certainly useful exercises, but at the end of the day a certificate with a pretty logo hanging from your office wall won’t save you when bad things happen.

Remote Workers

Having a remote workforce can be a wonderful thing. You can save a bunch of money by not having to maintain a giant office and the associated IT infrastructure. The downside is that in many organizations, the priority is getting people connected and working, rather than properly enforcing security policy. The fact is that if you allow someone to connect remotely into the heart of your network with a machine that you do not have total control over, your network is about as secure as the internet. You are in effect extending your internal network out past the firewall to the unknown. I’ve seen both ends of the spectrum, from an organization that would only allow people to connect in using routers and machines that they configured and installed, to an organization that provided a link to a VPN client and said “get on with it”.

I worked with one such client who was starting to rely on remote workers more and more, and had recognized that this could introduce a security problem. They arranged for me to visit the homes of a handful of employees and see if I could somehow gain access to the network’s internal resources. The first employee I visited used his own desktop PC to connect to the network. He had been issued a company laptop, but preferred the big screen, keyboard and mouse that were afforded to him by his desktop. The machine had no antivirus software installed, no client firewall running and no disk encryption. This was apparently because all of these things slowed it down too much. Oh, but it did have a peer-to-peer file sharing application installed. No prizes for spotting the security risks here.

In the second home I visited, I was pleased to see the employee using her company-issued XP laptop. Unfortunately she was using it on her unsecured wireless network. To demonstrate why this was a problem, I joined my testing laptop to the network, fired up a Metasploit session and hit the IP with my old favorite, the MS08-067 NetAPI32.dll exploit module. Sure enough, I got a shell, and was able to pivot my way into the remote corporate network. It was at this point that I discovered the VPN terminated in a subnet with unrestricted access to the internal server subnet. When I pointed out to the client that there really should be some sort of segregation between these two areas, I was told that there was. “We use VLANs for segregation”, came the response. I’m sure that everyone reading this will know that segregation using VLANs, at least from a security point of view, is about as useful as segregating a lion from a Chihuahua with a piece of rice paper. Ineffective, unreliable and bound to result in an unhappy ending.

[Figure caption: Bad – the VPN appliance is located in the core of the network.]

Social Engineering

We all know that this particular activity is increasing in popularity amongst our adversaries, so why don’t we do it more often as part of our testing? Well, simply put, a lot of the time this comes down to politics. Social engineering tests are a bit of a touchy subject at some organizations, which fear a legal backlash if they do anything to blatantly demonstrate that their own people are subject to the same flaws as the seven billion others on the planet. I’ve been in scoping meetings where, as soon as the subject of social engineering comes up, I’m stared at harshly and told in no uncertain terms, “Oh, no way, that’s not what we want, don’t do that.” But why not do it? Don’t you think a malicious hacker would? You’re having a pen test, right? Do you think a malicious hacker would hold off on social engineering because they haven’t gotten your permission to try it? Give me a break.

On the other hand, I’ve worked for clients who have recognized the threat of social engineering as one of the greatest to their security, and relished the opportunity to have their employees tested. Frequently, these tests result in a greater than 80% success rate. So how are they done?

Well, they usually start off with the tester registering a domain name which is extremely similar to the client’s. Maybe with one character different, or a different TLD (“.net” instead of “.com” for example).

The tester’s next step would be to set up a website that heavily borrows CSS code from the client’s site. All it needs is a basic form with username and password fields, as well as some server side coding to email the contents of the form to the tester upon submission.

[Image caption: With messages like this one in an online meeting product, it’s no wonder social engineering attacks are so successful.]

Finally, the tester will send out an email with some half-baked story about a new system being installed, or special offers for the employee “if you click this link and login”. Sit back and wait for the responses to come in. Follow these basic steps and within a few minutes, you’ve got a username, password and employee level access. Now all you have to do is find a way to use that to break out of the network, which won’t be too difficult, because everyone will be looking the other way.

Conclusion

The best penetration testers out there are those who provide the best value to the client. This doesn’t necessarily mean the cheapest or quickest. Instead it’s those who make the most effective use of their relatively short window of time, and any other limitations they face to do the job right. Never forget what that job is, and why you are doing it. Sometimes we have to put our generic testing methodologies aside and deliver a truly bespoke product. After all, there is nothing more bespoke than a targeted hacking attack, which can come from any direction. Even from the inside.


The Common Vulnerability Scoring System

Introduction

This article presents the Common Vulnerability Scoring System (CVSS) Version 2.0, an open framework for scoring IT vulnerabilities. It introduces the metric groups, then describes the base metrics, vector, and scoring. Finally, an example is provided to show how it works in practice. For a more in-depth look into scoring vulnerabilities, check out the ethical hacking course offered by the InfoSec Institute.

Metric groups

There are three metric groups:

I. Base (used to describe the fundamental information about the vulnerability—its exploitability and impact).
II. Temporal (time is taken into account when severity of the vulnerability is assessed; for example, the severity decreases when the official patch is available).
III. Environmental (environmental issues are taken into account when severity of the vulnerability is assessed; for example, the more systems affected by the vulnerability, the higher the severity).

This article is focused on base metrics. Please read A Complete Guide to the Common Vulnerability Scoring System Version 2.0 if you are interested in temporal and environmental metrics.

Base metrics

There are exploitability and impact metrics:

I. Exploitability

a) Access Vector (AV) describes how the vulnerability is exploited:
- Local (L)—exploited only locally
- Adjacent Network (A)—adjacent network access is required to exploit the vulnerability
- Network (N)—remotely exploitable

The more remote the attack, the more severe the vulnerability.

b) Access Complexity (AC) describes how complex the attack is:
- High (H)—a series of steps needed to exploit the vulnerability
- Medium (M)—neither complicated nor easily exploitable
- Low (L)—easily exploitable

The lower the access complexity, the more severe the vulnerability.

c) Authentication (Au) describes the authentication needed to exploit the vulnerability:
- Multiple (M)—the attacker needs to authenticate at least two times
- Single (S)—one-time authentication
- None (N)—no authentication

The lower the number of authentication instances, the more severe the vulnerability.

II. Impact

a) Confidentiality (C) describes the impact of the vulnerability on the confidentiality of the system:
- None (N)—no impact
- Partial (P)—data can be partially read
- Complete (C)—all data can be read

The more affected the confidentiality of the system is, the more severe the vulnerability.

b) Integrity (I) describes the impact of the vulnerability on the integrity of the system:
- None (N)—no impact
- Partial (P)—data can be partially modified
- Complete (C)—all data can be modified

The more affected the integrity of the system is, the more severe the vulnerability.

c) Availability (A) describes the impact of the vulnerability on the availability of the system:
- None (N)—no impact
- Partial (P)—interruptions in the system’s availability or reduced performance
- Complete (C)—the system is completely unavailable

The more affected the availability of the system is, the more severe the vulnerability.

Please note the abbreviated metric names and values in parentheses. They are used in the base vector description of the vulnerability (explained in the next section).

Base vector

Let’s discuss the base vector. It is presented in the following form:

AV:[L,A,N]/AC:[H,M,L]/Au:[M,S,N]/C:[N,P,C]/I:[N,P,C]/A:[N,P,C]

This is an abbreviated description of the vulnerability that brings information about its base metrics together with metric values. The brackets include possible metric values for given base metrics. The evaluator chooses one metric value for every base metric.

Scoring

The formulas for the base score, exploitability, and impact subscores are given in A Complete Guide to the Common Vulnerability Scoring System Version 2.0 [1]. However, there is no need to do the calculations manually. There is a Common Vulnerability Scoring System Version 2 Calculator available. The only thing the evaluator has to do is assign metric values to metric names.
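If you would rather script it than use the web calculator, the base equations transcribe directly. A minimal sketch in Python, using the metric weights published in the Complete Guide:

    # CVSS v2 base score - weights and equations from "A Complete Guide to
    # the Common Vulnerability Scoring System Version 2.0" [1].
    AV = {"L": 0.395, "A": 0.646, "N": 1.0}   # Access Vector
    AC = {"H": 0.35, "M": 0.61, "L": 0.71}    # Access Complexity
    AU = {"M": 0.45, "S": 0.56, "N": 0.704}   # Authentication
    CIA = {"N": 0.0, "P": 0.275, "C": 0.66}   # C/I/A impact values

    def base_score(av, ac, au, c, i, a):
        impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
        exploitability = 20 * AV[av] * AC[ac] * AU[au]
        f = 1.176 if impact else 0.0
        score = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f
        return round(score, 1), round(exploitability, 1), round(impact, 1)

    # The vector analyzed later in this article: AV:N/AC:M/Au:N/C:N/I:P/A:C
    print(base_score("N", "M", "N", "N", "P", "C"))  # -> (7.8, 8.6, 7.8)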

Severity level

The base score is dependent on exploitability and impact subscores; it ranges from 0 to 10, where 10 means the highest severity. However, CVSS v2 doesn’t transform the score into a severity level. One can use, for example, the FortiGuard severity level to obtain this information:

FortiGuard severity level (CVSS v2 score):
- Critical (9 – 10)
- High (7 – 8.9)
- Medium (4 – 6.9)
- Low (0.1 – 3.9)
- Info (0)
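In code, one possible mapping of base scores to these levels looks like this (a sketch; thresholds as in the table above):

    def fortiguard_level(score: float) -> str:
        """Map a CVSS v2 base score to a FortiGuard-style severity level."""
        if score == 0:
            return "Info"
        if score <= 3.9:
            return "Low"
        if score <= 6.9:
            return "Medium"
        if score <= 8.9:
            return "High"
        return "Critical"

    print(fortiguard_level(7.8))  # -> High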

Putting the pieces together

An example vulnerability in a web application is provided to better understand how the Common Vulnerability Scoring System Version 2.0 works in practice. Please keep in mind that this framework is not limited to web application vulnerabilities.

Cross-site request forgery in the admin panel allows adding a new user and deleting an existing user or all users.

Let’s analyze first the base metrics together with the resulting base vector:

Access Vector (AV): Network (N)
Access Complexity (AC): Medium (M)
Authentication (Au): None (N)

Confidentiality (C): None (N)
Integrity (I): Partial (P)
Availability (A): Complete (C)

Base vector: (AV:N/AC:M/Au:N/C:N/I:P/A:C)

Explanation: The admin has to visit the attacker’s website for the vulnerability to be exploited. That’s why the access complexity is medium. The website of the attacker is somewhere on the Internet; thus the access vector is network. No authentication is required to exploit this vulnerability (the admin only has to visit the attacker’s website). The attacker can delete all users, making the system unavailable for them. That’s why the impact of the vulnerability on the system’s availability is complete. Deleting all users doesn’t delete all data in the system; thus the impact on integrity is partial. Finally, there is no impact on the confidentiality of the system, provided that the added user doesn’t have read permissions by default.

Let’s use the Common Vulnerability Scoring System Version 2 Calculator to obtain the subscores (exploitability and impact) and base score:

Exploitability subscore: 8.6
Impact subscore: 7.8
Base score: 7.8

Let’s transform the score into a severity level according to FortiGuard severity levels:

FortiGuard severity level: High

Summary

This article described an open framework for scoring IT vulnerabilities—the Common Vulnerability Scoring System (CVSS) Version 2.0. Base metrics, vector and scoring were presented. An example way of transforming CVSS v2 scores into severity levels was described (FortiGuard severity levels). Finally, an example was discussed to see how all these pieces work in practice.

Dawid Czagan is a security researcher for the InfoSec Institute and the Head of Security Consulting at Future Processing.


The Biggest Gap in Information Security is…?

As a person who’s committed to helping raise awareness in the security community as a whole, I’ve often found myself asking this question. While there are several issues that I think contribute to the state of information security today, I’m going to outline a few of the major ones.

One major problem that spans every industry group from government to finance, all the way over to retail, is the massive amount of data stored, the large number of devices to manage and, frankly, not enough people to do it all – or not enough people with the appropriate level of security skills to do it. I recently had a student in an Ethical Hacking class who asked me if I would be open to discussing some things in private with him concerning some issues he had at work. During dinner he confided in me that he sees his job as becoming more and more impossible with all the security requirements. He let me know that he had recently completed a penetration test within his company and felt he didn’t really get anything out of it. My first question was how many nodes were in the scope of the test. His response was 20,000. So naturally my next question was how big his pen test team was. To that he looked at me blankly and said “It was just me”. My next question was how long he had to complete the test. His reply was 3 days. This shocked me greatly, and I candidly let this individual know that with a scope that big, it will usually take one person more than three days to do proper discovery and recon alone, leaving no time to even start vulnerability discovery, mapping, and exploitation testing/development. I also informed him that for a job like that I usually deploy 3 people and usually contract a time of 2 to 4 weeks. Keep in mind this young man was a very intelligent and skilled person, but he lacked the experience and resources to pull this off. After more conversation I realized that he himself was responsible for scoping the 3-day time to complete the test.

This brings me to the first main point: I see a trend of corporations and entities placing more security responsibility on individuals without giving them enough resources or training. This person admitted he really didn’t even have the skills to know how long it would take him; he based his time estimate off something he found on the web using Google, which was why he was in the class. After the class he emailed me and thanked me for finally giving him the understanding to realize what it would take to successfully complete his internal testing. He drafted a plan for a 4-week test and put in a request to have temporary help for the duration. Two months later he sent me another email and a redacted copy of the penetration test report (after I signed an NDA, of course). I was impressed with his work and let him know that. This demonstrated that even the most intelligent people can become overwhelmed if put into an impossible situation with no tools.

Second is the rapidly changing threat model. What would be considered a very secure computer 10 years ago (basic firewall and up-to-date anti-virus) would be considered a joke today. I can remember when OS patches were mostly just non-security-related bug fixes. If the bug didn’t affect you, you didn’t worry about the patch, since it often broke other things. This way of thinking became the norm, and still exists in some places today. Add to that web-based attack vectors and client-side attacks, and it gets even worse. I watched as Dan Kaminsky wrote himself into the infosec history books with his DNS attack. At the same time I saw one pen test customer after the other totally ignore it. Once we were able to exploit this in their environment we usually got responses like “I thought this mostly affected public/root DNS servers”. The bottom line is DNS is DNS, internal or external. While Dan’s demonstration was impressive, thorough and concise, it left the average IT admin lost in the weeds. As humans, when we don’t truly understand things we typically either do nothing, or do the wrong things. A lot of the media coverage of this vulnerability focused on the public-side threat, so from a surface look, it appeared to be something for “others” to worry about. Within weeks of that presentation there were new mobile device threats identified, new Adobe Reader threats, and many other common application vulnerabilities were identified. With all these “critical” things identified and disclosed within weeks of each other, it is apparent why some security professionals feel overwhelmed and behind the curve! Throw in the fact that I’m learning from clients and students alike that they’re now expected to be able to perform forensics investigations, and the weeds get deeper.

The last thing I want to point out is a trend I’ve noticed in recent years. The gap between what I like to call the “elite” of the information security world and the average IT admin or average whitehat/security professional is bigger than it’s ever been. Comments I’ve heard include: “I went to Black Hat and I was impressed with all of what I witnessed, but I don’t truly understand how it works and what to really do about it”. I think part of this is due to the fact that some in the information security community assume their audience should have a certain level of knowledge and refuse to back off that stance.

Overall, I think the true gap is in knowledge. Oftentimes individuals are not even sure what knowledge is required to perform their job. Check back soon, as I’ll be sharing some ideas on how to address this problem.

Keatron Evans, one of the two lead authors of “Chained Exploits: Advanced Hacking Attacks From Start to Finish”, is a Senior Instructor and Training Services Director at the InfoSec Institute.


10 Skills Needed to be a Successful Pentester

  1. Mastery of an operating system. I can’t stress how important this is. So many people want to become hackers or systems security experts without actually knowing the systems they’re supposed to be hacking or securing. It’s common knowledge that once you’re on a target/victim, you need to put on the hat of a sysadmin, at least somewhat. After all, having root means nothing if you don’t know what to do with root. How can you cover your tracks if you don’t even know where you’ve left tracks? If you don’t know the OS in detail, how can you possibly know everywhere things are logged?
  2. Good knowledge of networking and network protocols. Being able to list the OSI model DOES NOT qualify as knowing networking and network protocols. You must know TCP in and out. Not just that it stands for Transmission Control Protocol, but actually know the structure of the packet, know what’s in it, know how it works in detail. A good place to start is TCP/IP Illustrated by W. Richard Stevens (either edition works). Know the difference between TCP and UDP. Understand routing; be able to describe in detail how a packet gets from one place to another. Know how DNS works, and know it in detail. Understand ARP, how it’s used, why it’s used. Understand DHCP. What’s the process for getting an automatic IP address? What happens when you plug in? What type of traffic does your NIC generate when it’s plugged in and tries to get an automatically assigned address? Is it layer 2 traffic? Layer 3 traffic?
  3. If you don’t understand the things in item 2, then you can’t possibly understand how an ARP spoof or a MitM attack actually works. In short, how can you violate or manipulate a process if you don’t even know how the process works, or worse, you don’t even know the process exists! Which brings me to the next point: in general, you should be curious as to how things work. I’ve evaluated some awesome products in the last 10 years, and honestly, after I see one work, the first thing that comes to my mind is “how does it work?”
  4. Learn some basic scripting. Start with something simple like VBScript or Bash. As a matter of fact, I’ll be posting a “Using Bash Scripts to Automate Recon” video tonight. So if you don’t have anywhere else to start, you can start there! Eventually you’ll want to graduate from scripting and start learning to actually code/program, or in short, write basic software (hello world DOES NOT count).
  5. Get yourself a basic firewall, and learn how to configure it to block/allow only what you want. Then practice defeating it. You can find cheap used routers and firewalls on eBay, or maybe ask your company for old ones. Start with simple ACLs on a router. Learn how to scan past them using basic IP spoofing and other simple techniques. There’s no better way to understand these concepts than to apply them. Once you’ve mastered this, you can move to a PIX or ASA and start the process over again. Start experimenting with trying to push Unicode through it, and other attacks. Spend time on this site and other places to find info on doing these things. Really, the point is to learn to do them.
  6. Know some forensics! This will only make you better at covering your tracks. The implications should be obvious.
  7. Eventually learn a programming language, then learn a few more. Don’t go and buy a “How to program in C” book or anything like that. Figure out something you want to automate, or think of something simple you’d like to create. For example, a small port scanner (a minimal sketch appears after this list). Grab a few other port scanners (like nmap), look at the source code, see if you can figure any of it out. Then ask questions on forums and other places. Trust me, it’ll start off REALLY shaky, but just keep chugging away!
  8. Have a desire and drive to learn new stuff. This is a must; it’s probably more important than everything else listed here. You need to be willing to put in some of your own time (time you’re not getting paid for) to really get a handle on things and stay up to date.
  9. Learn a little about databases, and how they work. Go download MySQL and read some of the tutorials on how to create simple sample databases. I’m not saying you need to be a DB expert, but knowing the basic constructs helps.
  10. Always be willing to interact and share your knowledge with like-minded professionals and other smart people. Some of the most amazing hackers I know have jobs like pizza delivery and janitorial work; one is a marketing exec, another is actually an MD. They do this strictly because they love to. And one thing I see in them all is their excitement and willingness to share what they’ve learned with people who actually care to listen and are interested in the same.
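As promised in item 7, here is a minimal TCP connect() port scanner in Python – a learning sketch, not a replacement for nmap; the default target is localhost.

    #!/usr/bin/env python3
    """Minimal TCP connect() port scanner - for learning, not for production."""
    import socket
    import sys

    def scan(host: str, ports: range, timeout: float = 0.5) -> list[int]:
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1"
        for p in scan(target, range(1, 1025)):
            print(f"{p}/tcp open")

Only point it at hosts you own or are explicitly authorized to test.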

Keatron Evans is a Senior Instructor for InfoSec Institute. InfoSec Institute is a security certification company that has trained over 15,000 people through popular courses, including CEH and CCNA certification courses.

 


Securing the Software Development Environment

In the February 2012 edition of Computer, a sidebar to an article on “Web Application Vulnerabilities” asks the question: “Why don’t developers use secure coding practices?” [1] The sidebar offers the typical clichés: programmers feel constrained by security practices, and additional education will correct the situation. Another supposedly magical solution to security concerns is to introduce a secure development process. However, neither improved security education nor a new secure development process helps without a plan that connects the current development process to a more secure one. Instead of looking for a single solution, another approach is to identify the threat agents, threats, vulnerabilities, and exposures. After identifying these, the next step is to establish a cost-effective security policy that will provide safeguards.

Many view programmers as the primary threat agent in a development environment; however, Microsoft reports that more than 50% of the security defects it recorded were introduced in the design of a component [2]. Microsoft’s finding suggests that designers are threat agents as well as programmers. Because both designers and programmers introduce vulnerabilities into an application, it is appropriate to identify all of the software development roles (analysts, designers, programmers, testers) as potential threat agents. Viewing software developers as threat agents does not imply that the individuals filling these roles are careless or criminal; rather, it recognizes that they have the greatest opportunity to introduce source code compromising the confidentiality, integrity, or availability of a computer system.

Software developers can expose assets accidentally by introducing a defect. Defects have many causes, such as oversight or lack of experience with a programming language, and are a normal part of the development process. Quality Assurance (QA) practices, such as inspections and unit testing, focus on eliminating defects from the delivered software. Developers can also expose assets intentionally by introducing malicious functionality [3], which can take a variety of forms, such as worms, Trojans, and salami fraud [4]. A salami fraud is an attack in which the perpetrators take a small amount of an asset at a time, as in the classic “collect the round-off” scam [5]. An individual interested in introducing illicit functionality will exploit any available vulnerability. Identifying all of the potential exposures and creating safeguards is a significant challenge for security analysts, but by analyzing the development process, it is possible to identify a number of cost-effective safeguards.

To address these exposures, many researchers recommend enhancing an organization’s QA program. One frequent recommendation is to expand the inspection practice by introducing a checklist covering the exposures created by each programming language the developers use [2]. Items added to a security inspection checklist typically include functions such as Basic’s Peek() and Poke(), C’s string copy functions, exception handling routines, and programs executing at a privileged level [2]. Functions like Peek() and Poke() make it easier for a programmer to access memory outside of the program, but a character array or table without bounds checking produces similar results. A limitation of language-specific inspection checklists is that each language used to develop the application needs its own checklist; for some web applications, this could mean three or more checklists, and even then the checklists may not provide safeguards for all of the vulnerabilities. Static analyzers, such as the tools evaluated in the SAMATE research sponsored by the National Institute of Standards and Technology (NIST), are an approach to automating some of the objectives of an inspection checklist, but static analyzers have a reputation for flagging source statements that are not actually problems [6].
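
To illustrate both the idea and its limitation, here is a naive sketch of checklist automation: a small Python scanner that flags risky calls like those named above. The pattern list is illustrative rather than a vetted checklist, and plain pattern matching of this kind is exactly why such tools flag source statements that are not actually problems:

    import re
    import sys

    # An illustrative (not vetted) checklist of risky constructs.
    RISKY = {
        r"\bstrcpy\s*\(": "unbounded string copy; check the destination size",
        r"\bgets\s*\(": "reads with no length limit; use fgets instead",
        r"\bsystem\s*\(": "shells out; review for command injection",
    }

    for path in sys.argv[1:]:
        with open(path, errors="replace") as src:
            for lineno, line in enumerate(src, start=1):
                for pattern, why in RISKY.items():
                    if re.search(pattern, line):
                        # Note: this flags comments and strings too (false positives).
                        print(f"{path}:{lineno}: {why}")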

Using a rigorous inspection process as a safeguard will identify many defects, but it will not adequately protect against exposures due to malicious functionality. An inspection occurring before the source code is placed under configuration control leaves substantial exposure: the developer can simply add the malicious functionality after the source code passes inspection, or provide the inspection team a listing that does not contain the malicious functionality. Figure 1 illustrates a traditional unit-level development process containing this vulnerability.

As illustrated in Figure 1, a developer receives a change authorization to begin modifying or implementing a software unit. Generally, the “authorization” is verbal, and the only record of it appears on a developer’s progress report or the supervisor’s project plan. To assure that another developer does not update the same source component, the developer “reserves” the necessary source modules. Next, the developer modifies the source code to provide the necessary features. When all of the changes are complete, the developer informs the supervisor, who assembles a review panel consisting of three to five senior developers and/or designers. The panel examines the source code to evaluate its logic and documentation. The review committee can recommend that the developer make major changes requiring another review, minor changes that do not require a full review, or no changes and no further review. It is at this point in the development process that the source code is most vulnerable to the introduction of malicious functionality, because there are no reviews or checks between the review and the moment the software is “checked in”.

Another limitation of inspections is that the emerging Agile methodologies do not recommend formal inspections. Development methodologies such as eXtreme Programming utilize pair programming and Test Before Design concepts in lieu of inspections, and Scrum focuses on unit testing for defect identification [7, 8]. Using inspections as the primary safeguard against development exposures limits the cost savings promised by these new development methodologies and does not provide complete protection from a developer wishing to introduce malicious software.

Programming languages and the development process offer a number of opportunities to expose assets, and many of the tools involved, such as debuggers and integrated development environments, can also expose an asset to unauthorized access. Many development tools operate at the same protection level as the operating system kernel and can function quite nicely as a vehicle to deposit a rootkit or other malicious software. Another potential exposure, not related to programming languages, is the use of “production” data for testing, which may give developers access to information they do not have a need to know. Only a comprehensive security policy focusing on personnel, operations, and configuration management can provide the safeguards necessary to secure an organization’s assets.

Many organizations conduct background checks, credit checks, and drug tests when hiring new employees as part of their security policy. Security clearances issued by governmental agencies have specific terms; non-governmental organizations should likewise re-screen development personnel periodically. Some would argue that things like random drug tests and periodic security screenings are intrusive, and they are. However, developers need to understand that just as organizations use locks on doors to protect their physical property, they need to conduct periodic security screenings to protect intellectual property and financial assets from those who have the greatest access.

Another element of a robust development security policy is to maintain separate development and production systems. Developing software in the production environment exposes organizational assets to a number of threats, such as debugging tools, or simply a program written to gain unauthorized access to information stored on the system. Recent publicity on the Stuxnet worm suggests that a robust development security policy will also prohibit the use of external media such as CDs, DVDs, and USB devices [9]. Another important point about Stuxnet is that it targeted a development tool, and that tool introduced the malicious functionality.

Configuration management is the traditional technique for controlling the content of deliverable components and is an essential element of a robust security policy [10]. Of the six areas of configuration management, the two with the greatest effect on security are configuration control and configuration audits. Version control tools, such as ClearCase and CVS, provide many of the features required for configuration control. A configuration audit is an inspection occurring after all work on a configuration item is complete; it assures that all of the physical elements and process artifacts of the configuration item are in order.

Version control tools prevent two or more programmers from overwriting each other’s changes. However, most version control systems permit anyone with authorized access to check source code “in” and “out” without an authorized change request, and some do not even track the last access to a source module. In a secure environment, the version control system must integrate with the defect tracking system and record the identity of every developer who accesses a specific source module. Integrating the version control system with the defect tracking system permits only the developer assigned to make a specified change to access the related source code. Tracking read access matters as well: developers frequently copy source code from a tested component, or investigate the approach another developer used to address a specific issue, and so need to read source modules that they are not maintaining, but the same access is a good research tool for introducing malicious functionality into another source module. By logging source module access, security personnel can monitor access to the source code.
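
As a sketch of what this integration could look like (hypothetical: the tools named above are ClearCase and CVS, while this example uses a Git commit-msg hook, and the change-request IDs and lookup are invented for illustration), a hook can refuse any check-in that does not cite an authorized change request:

    #!/usr/bin/env python3
    # Save as .git/hooks/commit-msg and make it executable.
    import re
    import sys

    # Stand-in for a live query against the defect-tracking system.
    AUTHORIZED_CHANGES = {"CR-1001", "CR-1002"}

    with open(sys.argv[1]) as f:  # Git passes the path to the commit-message file
        message = f.read()

    match = re.search(r"\bCR-\d+\b", message)
    if not match or match.group(0) not in AUTHORIZED_CHANGES:
        sys.stderr.write("Rejected: cite an open, authorized change request (CR-NNNN).\n")
        sys.exit(1)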

Configuration audits are the second management technique for making a development organization more secure. Audits range in formality from a clerk using a checklist to verify that all of the artifacts required for a configuration item were submitted, to a multi-person team assuring that the delivered software produces the submitted artifacts and that the tests adequately address the risks posed by the configuration item [11]. Some regulatory agencies require audits of safety-critical and high-reliability applications to provide an independent review of the delivered product. An audit in a high-security environment addresses the need to assure that delivered software does not expose organizational assets to risk from either defects or malicious functionality. Artifacts submitted with a configuration item can include, but are not limited to, the requirements or change requests implemented, the design specification, test-script source code, test data, test results, and the source code for the configuration item itself. To increase confidence that the delivered software contains neither defects nor malicious functionality, auditors should assure that the test cases provide 100% coverage of the delivered source code. This is particularly important with interpreted programming languages, such as Python and other scripting languages, because a defect can permit a remote user of the software to inject malicious code. Another approach auditors can use is to re-test the configuration item with the same test data and assure that the results match those produced in the verification and validation procedure.
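
One way an auditor might mechanize that re-test step, sketched with invented paths (the artifact layout and test command are hypothetical stand-ins for whatever the configuration item actually checks in):

    import hashlib
    import subprocess
    import sys

    # Re-run the configuration item's checked-in test script...
    result = subprocess.run([sys.executable, "tests/run_tests.py"],
                            capture_output=True, text=True)

    # ...and compare its output against the test results recorded at check-in.
    with open("artifacts/test_results.txt", "rb") as f:
        recorded = hashlib.sha256(f.read()).hexdigest()
    fresh = hashlib.sha256(result.stdout.encode()).hexdigest()

    if fresh == recorded:
        print("PASS: the re-test reproduces the recorded results")
    else:
        print("FAIL: results differ; flag the configuration item for review")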

Adopting these recommendations for a stronger configuration management process transforms the typical unit-level development process illustrated in Figure 1 into the more secure process illustrated in Figure 2. In the more secure process, a formal change authorization is generated by a defect tracking system or by the version control system’s secure change authorization function. Next, the specified developer makes the changes required by the change authorization. After implementing and testing the changes, the developer checks all of the artifacts (source code, test drivers, and results) into the version control system. Checking in the artifacts automatically triggers a configuration audit of the development artifacts. Auditors may accept the developer’s changes or create a new work order for additional changes. Unlike the review panel, the auditors may re-test the software to assure adequate coverage and verify that the test results match those checked in with the source code. Making this change to the development process significantly reduces the exposure to accidental defects or malicious functionality, because it verifies the source code deployed in the final product together with all of its supporting documentation.

Following all of these recommendations will not guarantee the security of the software development environment, because there are always new vulnerabilities, such as social engineering. However, using recurring security checks, separating developers from production systems and data, controlling media, and using rigorous configuration management practices should make penetration of your information security perimeter more difficult. It is also necessary to conduct periodic reviews of development tools and configuration management practices, because threat agents will adapt to any safeguard that does not adapt to new technology.

Article by Dr. Carl Mueller, a contributor at InfoSec Institute. InfoSec Institute has been training Information Security Professionals since 1998 with a diverse lineup of relevant training courses.

References:

  • [1] N. Antunes and M. Vieira, “Defending against Web Application Vulnerabilities,” Computer, vol. 45, pp. 66-72, 2012.
  • [2] N. Davis, W. Humphrey, S. T. Redwine Jr., G. Zibulski, and G. McGraw, “Processes for Producing Secure Software: Summary of US National Cybersecurity Summit Subgroup Report,” IEEE Security and Privacy, vol. 2, pp. 18-25, 2004.
  • [3] G. McGraw and G. Morrisett, “Attacking Malicious Code: A Report to the Infosec Research Council,” IEEE Software, vol. 17, pp. 33-41, Sept.-Oct. 2000.
  • [4] M. E. Kabay, “A Brief History of Computer Crime: An Introduction for Students,” 2008.
  • [5] M. E. Kabay, “Salami fraud,” Network World Security Newsletter, 2002.
  • [6] NIST, “SAMATE – Software Assurance Metrics And Tool Evaluation,” retrieved Apr. 14, 2012.
  • [7] K. Schwaber and M. Beedle, Agile Software Development with Scrum. Upper Saddle River, NJ: Prentice-Hall, 2002.
  • [8] K. Beck and C. Andres, Extreme Programming Explained: Embrace Change, 2nd ed. Addison-Wesley Professional, 2004.
  • [9] R. Langner, “Stuxnet: Dissecting a Cyberwarfare Weapon,” IEEE Security & Privacy, pp. 49-51, 2011.
  • [10] A. Leon, A Guide to Software Configuration Management. Artech House, 2000.
  • [11] N. R. Nielsen, “Computers, security, and the audit function,” in Proceedings of the May 19-22, 1975, National Computer Conference and Exposition, Anaheim, CA, 1975.

The Evolution of a Technical Information Professional

During my years of work as a consultant and trainer in the information security world, I’ve noticed a few patterns that tend to separate those who do very well in the industry from those who just get by. I decided to draft this article to share some of the key elements and, more importantly, to offer a rough metric for gauging where you, the reader, currently sit.

Basically, there seem to be five key levels that I consider to be different milestones or “levels of understanding” in this field. I originally heard a concept like this many years ago as it relates to music. Now I’m going to relate it more directly to penetration testing and exploit writing, but you can apply it to any area of specialization in information security.

Let’s start with Level 1.

Level 1 – Interested Newbie – Unknowing and Unconscious

You don’t really know how to learn these arts yet, and you’re unconscious of the fact that you don’t know.

This level is where you’ve probably got your Security+, or you’ve gained the equivalent knowledge base by reading and “tinkering”. You haven’t learned how to exploit anything yet. You know what port scanning is, but you’ve never really done it. You’re familiar with the terms Trojan, malware, rootkit, exploitation, etc., but you haven’t actually had hands-on experience with any of this, at least not knowingly **smile**.

Eventually you start playing with some tools. If you’re a person who’s said, “I downloaded BackTrack but I haven’t figured out how to do anything with it yet,” then you most likely fall into this category. Linux is still a big, dark, scary cloud for you (if you come from a Windows background), and Windows is the same if you come from a Linux background. You might have even taken a CEH class, and you feel like you saw a lot of cool stuff, but you can’t really sit down and reproduce much of it.

Level 2 – Practicing Youngster (a year or two in) – Knowing and Unconscious

You now know a little bit about how to learn these techniques but you’re still unconscious of what you don’t know.

You’re still new to the field. You might have a job that requires you to work with firewalls a little, or maybe a support role of some type. Your inner hacker curiosity has you spending lots of time tinkering with security tools and techniques, even though it might not be part of your job. You can run some security tools. You are not “too” afraid of Linux anymore. You’ve been able to get BackTrack to communicate with your network. You’ve learned how to set IP addresses in Linux, and you’re comfortable doing basic things from the shell. You have learned how to use Nmap somewhat. You’ve also found one or two forums that you like to visit and learn new things from.

The information in these forums ends up being pretty basic, but at the level you’re at right now, you’ve found the more advanced forums, like the official Metasploit one, to be too technical for you. Maybe you visit your first Black Hat/DEF CON conference. You leave there realizing for the first time how much you really don’t know, and you reach a point of information overload. You enjoy the conference and see lots of eye-popping demonstrations, but you don’t really understand how they work or what the implications really are. You leave Black Hat with the gut-punching realization that you lack the technical ability to demonstrate or recreate anything you’ve witnessed. And this is where you actually start to learn.

Level 3 – Serious Practitioner – Knowing and Conscious

At this point you know how to learn the skills, and you’re conscious of the fact that there is a limitless amount of stuff you don’t know; additionally, you have an idea of the many different aspects and fields within information security. You truly grasp that reverse engineering, exploit writing, and penetration testing are not one big blob of variations on the same thing. You realize that they can all complement each other, but they’re not the same. You have gained enough skill to be great one day, but you might not ever truly have the time, or invest the time required, to get to the next level.

It’s been a couple of years or so since you went to Black Hat/DEF CON the first time. Now you go back, and you understand exponentially more than you did the first time. You’re able to come back and duplicate most of what you’ve seen in the presentations. You also understand what you’ve seen well enough to demo and present it to others. If you’ve never learned to code, you have at this point realized that it’s going to hinder you at some point in your career. You’ve started to learn some scripting languages, and you’re pretty good with them (Perl, Python, etc.). You’re aggressively trying to learn C: not C++, C#, or any of those, just C. Why? Because a respected security professional told you that you really needed to learn it.

You’re also trying to learn Assembly, because you’ve been told that you really need to know it to write exploits. You view exploit writing and reversing as the next thing you want to accomplish from a learning perspective, but you’ve realized you need to know programming concepts and constructs well to truly reverse and write exploits. You are able to follow exploit-writing examples without a problem, but your shaky understanding of memory, calls, packing, etc. keeps you from doing it “for real”. If someone gave you a BackTrack CD and a couple of Windows computers and asked you to demonstrate a client-side exploit, a server-side exploit, and how scanning works, you could do it with no problem. You know TCP and IP like you know your name. You can look at a packet capture and instantly pick out three-way handshakes and other session establishments. You still don’t really know web applications that well, because you still don’t know programming and applications that well. You can fluently demonstrate all of the OWASP Top 10, but you still feel there’s a lot missing.

Congratulations, you’ve reached the point where most security professionals stop or plateau.

Level 4 – Expert – Knowing and Unconscious

You are above most in both skill and knowledge. You know that there are things you don’t know, but you learn them frequently. It’s almost as if it’s a drug to you. You sit with your laptop daily and nightly, plugged into forums, YouTube videos, presentations, coding, etc. Every night it seems as if you’ve plugged your brain into the Matrix and had information dumped into it.

While you know there are things you want to learn, you don’t even know, or bother to figure out, “how” you’re learning them. Your skills are mature enough that you just “do”. You learn without knowing how. When you present or demonstrate things to others, you’re often told that you go way too fast; really, you’re assuming that your audience understands more than it actually does.

There is no looking back now. The only thing that really drives you is learning more. You’re also very much into finding new exploits, and finding new ways to use old exploits. While information security may or may not be your job, it is now your passion.

Level 5 – Leader – Unknowing and Unconscious

You are now at the very top of the field; whether the rest of the world knows it or not is irrelevant. You are not conscious of what you don’t know because you simply don’t care. You have obtained a body of knowledge that puts you in a position where, if you want to learn something, you simply learn it. Nothing about exploitation or information security seems out of your reach; the only reason you don’t learn something is that you don’t want to. You are now a creator, a driver, and an industry shifter. You are one of the people who put out what others must learn. The industry doesn’t control what you need to know; you control what the industry needs to know. A few names come to mind for me: HD Moore, Dan Kaminsky, and others. Metasploit, the brainchild of HD Moore, literally changed exploitation and exploit development forever, and Dan Kaminsky’s DNS research a few years ago caused visible shifts in the attention paid to infrastructure security.

Most people will never make it to this level, not because they aren’t smart enough, but because they won’t be able to put the time in, or won’t have access to the resources needed (some countries filter all Internet traffic). To say HD Moore accomplished what he has simply because he is smart would completely ignore the huge amount of time and hard work he’s put in over the years. I think one has to have certain proclivities to reach this level, but the time investment is more important than anything else.

This post was written by Mike Sheward, a contributor to InfoSec Resources. InfoSec Institute is the best source for high quality information security training.


Which Security Certification Should I Get?

When it comes to deciding what security certifications to pursue, IT professionals should understand that they will be better off career-wise if they ask—and then answer—the right questions before choosing.

So says Chuck Davis, who as an adjunct professor at Harrisburg University of Science and Technology in Pennsylvania teaches ethical hacking and computer forensics classes. Currently a senior security architect at a Fortune 500 company, Prof. Davis has earned a Master of Science in Information Assurance from Norwich University, the Certified Information Systems Security Professional (CISSP) credential, and the Information Systems Security Architecture Professional (ISSAP) credential. He insists that there is no one-size-fits-all game plan for IT professionals looking for the right security certifications to earn.

“I would suggest that if you’re someone who is new to security, maybe just out of college or you’ve been working in IT and want to move into security, studying and working towards the CISSP is a good [move],” says Prof. Davis, who earned his CISSP and ISSAP from (ISC)². “I believe the CISSP is considered kind of the gold standard for a lot of professionals. What the CISSP does is it gives a very wide breadth of curriculum.”

According to Prof. Davis, IT professionals need to reflect on things such as where they are in their careers and what their objectives are before they can knowledgeably select the right security certifications. Josh Lochner, a senior risk management consultant at SecureState in Ohio, is also a proponent of this view. He insists that there are a handful of questions that IT professionals need to ask themselves before choosing. Meanwhile, Carmen Buruiana, human resource manager for Bitdefender in Romania, argues that possessing the right skill set and attitude is more important than having specific certifications.

While money certainly isn’t everything, many IT professionals who are weighing the pros and cons of different security certifications would no doubt factor salaries into the decision-making equation. And, fortunately, there are resources available that provide some indication of which security certifications can be the most rewarding from a financial perspective.

For instance, Foote Partners’ “IT Skills and Certification Pay Index – Q3 2011 edition” indicates that the following security certifications translate into the highest pay premiums:

  • Certified Information Systems Security Professional (CISSP)
  • Information Systems Security Engineering Professional (CISSP/ISSEP)
  • IACRB Certified Penetration Tester (CPT)
  • CyberSecurity Forensic Analyst
  • Certified Information Security Manager (CISM)
  • Certified Information Systems Auditor (CISA)
  • Cisco Security Solutions and Design Specialist
  • IACRB Certified Reverse Engineering Analyst (CREA)
  • GIAC Secure Software Programmer – Java
  • GIAC Systems and Network Auditor (GSNA)
  • Information Systems Security Architecture Professional (CISSP/ISSAP)
  • Security Certified Network Architect
  • Check Point Certified Master Architect (CCMA)

But salary, of course, is just one of the things IT professionals should contemplate. Lochner explains that there are certain questions he would ask IT professionals who come to him for advice on what security certifications to go after.

“Some of the questions that I might ask would be, ‘Are you looking for a broad basis of knowledge? What foundation are you building on right now?’” he says. “For example, if you wanted a broad basis you might start off by looking at the CISSP. But there’s also, ‘Are you doing this so that you can apply to a new job, or are you doing this so that you can move laterally, or perhaps vertically, up within your own organization?’”

After answering these types of questions, IT professionals would do well to find mentors who are already in roles that they themselves would eventually like to end up in, says Lochner, who has been providing consulting services in security domains for over a decade.

If after careful consideration IT professionals decide to start off with the CISSP, which is designed to provide a broad overview of the “security landscape,” they will end up with skills that are attractive in the increasingly competitive job market, notes Prof. Davis.

“It gives employers or potential employers a level set to say, ‘Well this person at least has a really decent understanding across the entire security landscape,’” he says. The (ISC)² website, which details certification requirements, lists the following 10 security domains covered in the CISSP curriculum:

  • Access Control
  • Telecommunications and Network Security
  • Information Security Governance and Risk Management
  • Software Development Security
  • Cryptography
  • Security Architecture and Design
  • Operations Security
  • Business Continuity and Disaster Recovery Planning
  • Legal, Regulations, Investigations and Compliance
  • Physical (Environmental) Security

While the CISSP is a “good foundation certification,” Lochner stresses that those who really want to invest in advancing their careers won’t want to stop there.

“If you’re going to be working in a particular area, it might behoove you to study a little bit more,” he explains. “CISSP is a good basis, and you can look at GIAC for some of the more specialized certifications. They have something they call…GSEC – GIAC Security Essentials.”

According to Prof. Davis, SANS certifications are good bets for those who really want to get technical in the security space; ISACA’s CISM and CISA certifications are solid options for IT professionals interested in getting into auditing; and EC-Council’s Certified Ethical Hacker program is popular among those involved in pen testing.

Security certifications can definitely help IT professionals at any stage of their careers. But Buruiana from Bitdefender says that lacking security certifications isn’t necessarily a deal-breaker at the Internet security company.

“Bitdefender is an unconventional company seeking talented people with inquisitive minds, capable of taking a creative approach and finding solutions to the most common dilemmas of our industry,” says Buruiana. “Every year, we run human resources projects aimed at discovering these brilliant minds.

“As for the recruiting process, we value innovation and a passion for technology more than we do specific certifications. Certifications are, undoubtedly, an added value and an asset as far as professional credibility is concerned. They are key to the ‘rounded know-how’ concept, but they do not count as an exclusive criterion with us.”

That said, the company’s employees periodically take part in certification sessions adapted to the company’s ongoing business process, says Buruiana. The sessions focus on domains like project management, software development, testing and support services.

 

This post was originally written by Ian Palmer, a contributor to InfoSec Resources. InfoSec Institute is the best source for high quality information security training.
