Developing an IR Process and Team

In our world today, we have an abundance of many things, among them unexpected events: falling meteorites, terrorist attacks, hacktivist demonstrations, blackouts, tsunamis… well, you get the point. Now, although most of the events I just mentioned probably fall into the Disaster Recovery category, they nonetheless greatly impact our personal lives and disrupt the normal ebb and flow of the daily routine. On the professional side of life, there are also incidents that, although classified on a lower scale than a disaster, still create considerable disruption and, depending on how they are handled, can have a long-lasting impact on the flow of business. The purpose of this article is to discuss some suggested methods for building an incident response team, along with the related procedures that will enable this group to respond to such events expeditiously.

TERMINOLOGY DEFINED

Before we start to discuss the mechanics behind building this elite group of technical emergency responders, let’s understand what we’re up against. First of all, let’s get our terminology straight. What exactly am I referring to when I use the terms “event” and “incident”? To give this article some context, consider the following definitions, courtesy of Merriam-Webster:

  • An incident is defined as “an occurrence […] that is a separate unit of experience”.
  • An event can be defined as “something that happens: an occurrence” or “a noteworthy happening”.

Let’s break this down: if we use the example of a small electrical fire in the basement of a building, this can be categorized as an individual “incident”, or a “separate unit of experience”. Now, if this incident is not handled properly, it can escalate and possibly grow into a fire so large that it consumes the entire building. The incineration of the building can be categorized as an “event”, which acts as an umbrella term that groups the causes and effects of the entire disaster, or “noteworthy happening”, into one category.

Applying this understanding to the enterprise, items such as data breaches, hacking attempts, critical server crashes, website defacement or social engineering attempts can be classified as individual “incidents”. This is because they may affect business or the corporate reputation without completely halting the company’s business flow. If not addressed properly, however, these incidents, although small, could escalate and succeed in completely halting the business, resulting in a disaster or large-scale “event”. Hopefully this clarifies the difference between events and incidents, as this understanding will determine how each occurrence is handled. This brings us to our next section.

PLANNING AN INCIDENT RESPONSE PROCESS

This step can seem daunting if you’ve never been involved with incident response or you’re trying to decide where a process like this might fit into your particular environment. How can we go about organizing all the related business groups and technical areas, and how can we find out if we’re missing anything? The good news is that in the majority of cases there is already some type of set process that is followed whenever incidents occur. The problem, however, is that this process may not be documented, and because it’s informal, there is a great chance that core response components are missing or have been overlooked. The benefit of identifying any existing process your organization may have is that it is much easier to train employees using a foundation with which they are already familiar. It may also be much easier to gain upper management’s support and buy-in for a process that is already being followed, albeit informally. This support is necessary because management’s backing will be needed for any required funding and for the allocation of time for the individuals who will form part of the official team. Without this support, it’s possible that your project will never get off the ground, or that after all the hard work the process will be scrapped or drastically changed, and then it’s back to the proverbial drawing board. This can be extremely frustrating, so be sure to do your homework, identify anything that may already be built and, if appropriate, incorporate it into your draft IR process. This way you’ll have a deep understanding of how the process should flow when having discussions with upper management, and you will be able to defend any modifications, enhancements or complete overhauls.

Keep in mind that when speaking with management, your initial draft is just that: a draft. Be prepared to have a detailed conversation so you can understand what their expectations are and properly define what your incident response process will provide. It’s possible that in these initial conversations you will identify areas that need to be modified or added. If this step is not accomplished correctly, the functions of your future IR team may not be understood or properly recognized. This could result in your process not being properly advertised to the enterprise, in which case it simply becomes just another informal process. Be sure to gain management’s approval, then communicate and advertise your new structure so that when an incident does occur, your new framework will be used. This will eliminate any overlap and ensure that the authority of your future IR team members remains fully recognized.

Some other questions that you may ponder along the way:

  • How far will IR processes be able to reach?
  • Who will make up the IR Team’s client base?

The first question, relating to the reach of the IR process, speaks to cases where critical services and applications are provided by external third parties. In these cases, you will have to decide how far the IR process will flow and whether a “hand-off” needs to occur. This needs to be explored at length, since it will make your resolution process dependent on the efforts of an outside entity.

Questions like these are highly important because many enterprise environments contain multiple areas that are critical to business operations. This brings us to the second question, regarding the IR client base. This refers to subsidiaries or operating companies that, although separate, may fall under the auspices of the parent organization. You need to understand the relationship with these companies and whether they provide critical applications, services or other related business functions. More than likely, these entities will also have to fall under the scope of your IR process, and it will be necessary to identify key stakeholders at those locations to support it. This raises the question: who should form part of the incident response team?

INCIDENT RESPONSE ROLES AND RESPONSIBILITIES

Depending on what you read, you may find different titles and roles for Incident Response. The following listing is an outline of some roles and responsibilities that I used when building an IR plan at a past employer. Each environment is unique, so you will need to research your own requirements and then tailor a plan that meets your needs. Generally, the types of roles that should exist within an IR function are:

Incident Response Officer – This individual is the incident response champion, with ultimate accountability for the actions of the IR team and the IR function. This person should be an executive-level employee, such as a CISO or another senior corporate representative. It is very beneficial if this individual has direct reporting access to the CEO and is a peer of the other C-level executives.

Incident Response Manager – This is the individual who leads the efforts of the IR team and coordinates activities between all of its respective groups. Normally, this person receives the initial IR alerts and is responsible for activating the IR team and managing all parts of the IR process, from discovery and assessment through remediation and, finally, resolution. This individual reports to the Incident Response Officer.

Incident Response Assessment Team – This group is composed of representatives from the different areas serviced by the IR team, which allows expertise from every critical discipline to weigh in on classification and severity decisions once an incident has been identified. It is very beneficial to have representatives from IT, Security, Application Support and other business areas. In the event of an incident, the IR Manager gathers details of the incident from the affected site, begins tracking and documentation (possibly through an internal ticket management system) and then activates the Assessment Team. This group then discusses the details of the incident and, based on its expertise and knowledge of the business, assigns an initial severity. This team reports to the IR Manager.

Remote Incident Response Coordinator – This role should be assigned to qualified and capable individuals located in other geographic regions. These individuals ultimately report to the Incident Response Manager, but within their own regions they are recognized as IR leaders, which allows them to manage the efforts of local custodians during an incident. This configuration is very useful, especially for organizations with offices in multiple time zones. If the IR Manager is located in the United States but an incident occurs at a Malaysian branch, it helps to have a local security leader who can direct efforts and provide status updates to the Incident Response Manager. This way, regardless of the time zone, the correct actions will be invoked promptly.

Incident Response Custodians – These individuals are the technical experts and application support representatives that would be called upon to assist in the remediation and resolution of a given incident. They report to the Incident Response Manager or to the Remote IR Coordinator(s) depending on their location(s).

Once you’ve been able to identify the proper stakeholders that will form your team, you will have to provide an action framework they’ll be able to use when carrying out their responsibilities. Think of this “action framework” as a set of training wheels that will guide your IR team. What does this mean? Let’s move on to the next section to discuss this…

INCIDENT RESPONSE PROCESS FLOW

A part of outlining this framework involves the identification of IR Severity Levels. These levels will help your team understand the severity of an event and will govern the team’s response. Some suggestions for these levels are the following:

SEVERITY LEVEL   LEVEL OF BUSINESS IMPACT   RESOLUTION EFFORT REQUIRED
Severity 1       Low                        Low effort
Severity 2       Moderate                   Moderate effort
Severity 3       High                       Extensive, ongoing effort
Severity 4       Severe                     Disaster Recovery invoked
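
If you track incidents in an internal tool, it can also help to encode these levels as data rather than leaving them only as prose in a document. Below is a minimal sketch in Python; the level names and descriptions simply mirror the table above, and the enum and dictionary names are my own invention, not part of any particular product.

```python
from enum import IntEnum

class Severity(IntEnum):
    """Incident severity levels mirroring the table above."""
    SEV1 = 1  # low business impact, low resolution effort
    SEV2 = 2  # moderate business impact, moderate effort
    SEV3 = 3  # high business impact, extensive ongoing effort
    SEV4 = 4  # severe business impact, Disaster Recovery invoked

# Human-readable descriptions, handy for tickets and status reports
SEVERITY_DESCRIPTIONS = {
    Severity.SEV1: ("Low", "Low effort"),
    Severity.SEV2: ("Moderate", "Moderate effort"),
    Severity.SEV3: ("High", "Extensive, ongoing effort"),
    Severity.SEV4: ("Severe", "Disaster Recovery invoked"),
}

impact, effort = SEVERITY_DESCRIPTIONS[Severity.SEV3]
print(f"Severity 3: business impact {impact}, resolution effort {effort}")
```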

Earlier in this article, I mentioned the benefit of identifying any existing informal process that your company may already be following. If so, it will now be necessary for you to step through that process mentally, keeping in mind your identified severity levels so that you can start to document each step of the process. You will undoubtedly start to remove irrelevant portions of the informal process but may opt to keep certain items in place. (For example, certain notification procedures may still be useful and you may continue to use these in your new IR process to alert members of your team). If you don’t have a starting point like this and you’re starting from scratch, then perhaps the following suggestions can provide some direction.

Start to create a documented action script that will outline your response steps so your IR Manager can follow them consistently. Your script should show steps similar to the following:

STEP #  ACTION
1       Incident announced
2       IR Manager alerted
3       IR Manager begins information gathering from affected site
4       IR Manager begins tracking and documentation of incident
5       IR Manager invokes Assessment Team
        (details of call bridge or other communication mechanism)
6       Assessment Team reviews details and decides on the Severity Level of the incident
7       If Severity 1, proceed to Step 11.0
8       If Severity 2, proceed to Step 12.0
9       If Severity 3, proceed to Step 13.0
10      If Severity 4, proceed to Step 14.0

FOR SEVERITY LEVEL 1, proceed with the following sequence:
11.0    Determine attack vectors being used by the threat
11.1    Determine network locations that are impacted
11.2    Identify areas that fall under the “Parent Organization”
11.3    Identify systems or applications that are impacted

FOR SEVERITY LEVEL 2, proceed with the following sequence:
12.0    Determine attack vectors being used by the threat
12.1    Alert the Incident Response Officer to the Severity 2 threat
This is of course an extremely high-level example, but as you can see, it is possible to flesh out the majority of the process with specific action items for each severity level. Be sure to thoroughly research your unique environment to develop a process that fits your needs; you may have to add custom steps to cover incidents that span multiple countries and subsidiaries. Once you’ve created your process, you may want to consider developing small wallet-sized scripts for the members of your Assessment Team and the other key players on whom you will depend to make this run efficiently. In this way, each member will have the necessary information on hand to respond as expected.
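
If any part of this script ends up in tooling, the severity-to-sequence routing can be expressed as plain data that the IR Manager’s scripts or ticketing integration can read. The sketch below is purely illustrative: the step wording follows the example table above, only the first two severity sequences are filled in, and the function and variable names are invented for the example rather than taken from any real system.

```python
# Illustrative sketch: the action script above expressed as data,
# so tooling can print or track the correct sequence for a given severity.
COMMON_STEPS = [
    "Incident announced",
    "IR Manager alerted",
    "IR Manager begins information gathering from affected site",
    "IR Manager begins tracking and documentation of incident",
    "IR Manager invokes Assessment Team (call bridge / comms details)",
    "Assessment Team reviews details and assigns a Severity Level",
]

SEVERITY_SEQUENCES = {
    1: [
        "Determine attack vectors being used by the threat",
        "Determine network locations that are impacted",
        "Identify areas that fall under the parent organization",
        "Identify systems or applications that are impacted",
    ],
    2: [
        "Determine attack vectors being used by the threat",
        "Alert the Incident Response Officer to the Severity 2 threat",
        # ...remaining Severity 2 steps, tailored to your environment
    ],
    # Severity 3 and 4 sequences would follow the same pattern
}

def runbook(severity):
    """Return the full, ordered list of actions for a given severity level."""
    return COMMON_STEPS + SEVERITY_SEQUENCES.get(severity, [])

if __name__ == "__main__":
    for number, action in enumerate(runbook(2), start=1):
        print(f"{number}. {action}")
```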

This article just scratches the surface of the work that is required to build a full IR process but hopefully this has given you some direction and additional areas to explore when planning your next IR project!


Best Email Retention Policy Practices

Email retention policies are no longer just about conserving space on your Exchange server. Today you must take into account how your email retention controls increase or decrease risk to your company.

Pros and Cons of Short and Long Email Retention Policies

Generally speaking, longer email retention policies increase the risk that a security vulnerability or unauthorized user could expose your company’s secrets or embarrassing material. Long policies also increase your company’s exposure to legal examination that focuses on conversations and decisions captured in emails (this is also known as the “paper trail” in an “eDiscovery” process).

Shorter email retention policies help avoid these problems and are cheaper to implement, but they have their own significant disadvantages as well. First, short policies tend to annoy long-term employees and often executives, who rely on old email chains to recollect past decisions and the context in which they were made. Second, short policies may violate federal, state, local and/or industry regulations that require certain types of information to be retained for a minimum period of time – often years!

Best Practices to Develop Your Email Retention Policy

Obviously, you must balance these factors and others when you develop your own email retention policy, but there are a number of best practices that can help you draft and get support for a solid email retention policy. Today, I’ll be covering five practices often used by effective professionals and managers.

Email Retention Policy Best Practice #1: Start With Regulatory Minimums

Your email retention policy should begin by listing the various regulations your company is subject to and the relevant document retention requirements involved with each regulation.

Every industry is regulated differently, and businesses are often subject to different tax, liability and privacy regulations depending on the locations in which they do business. However, some common recommended retention periods include:

If a retention period is not known for a particular type of data, seven years (the minimum IRS recommendation) is often used as a safe common denominator.

Email Retention Policy Best Practice #2: Segment As Necessary To Avoid Keeping Everything For the Legal Maximum

As you can see from the list above, recommended retention periods vary widely even within highly regulated industries. With that in mind, it often pays to segment different types or uses of email into different retention periods to avoid subjecting your entire online email store to the maximum email retention period.

Segmentation by type of content looks something like this:

  • Invoices – 7 years
  • Sales Records – 5 years
  • Petty Cash Vouchers – 3 years

Segmentation by type of use looks something like this:

  • Administrative correspondence (e.g., human resources) – 5 years
  • Fiscal correspondence (e.g., revenue and expenses) – 4 years
  • General correspondence (e.g., customer interactions, internal threads) – 3 years
  • Ephemeral correspondence (e.g., everything else business-related) – 1 year
  • Spam – not retained

Mixed segmentation is also often common and looks something like this:

  • Human resources – 7 years
  • Transaction receipts – 3 years
  • Executive email – 2 years
  • Spam – not retained
  • Everything else (e.g., “default retention policy”) – 1 year

The rules and technologies you use to inspect, classify and segment can vary from simple sender- and subject-matching to sophisticated engines that intuit intent and history. (Unfortunately, space does not permit us to examine these technologies here, but trust me – they exist!)
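
To make that concrete, here is a minimal sketch of the simple end of the spectrum: matching on sender and subject to assign a retention period. The categories and durations mirror the mixed segmentation example above, but the matching patterns, function name and sample message are invented purely for illustration; in practice this logic would live in your mail platform’s retention or journaling engine rather than in a standalone script.

```python
import re

# Retention periods in years, mirroring the "mixed segmentation" example above.
# The sender/subject patterns are hypothetical and purely illustrative.
RULES = [
    # (category, retention_years, sender_pattern, subject_pattern)
    ("Human resources",      7, r"@hr\.",             r"\b(offer|benefits|review)\b"),
    ("Transaction receipts", 3, r"(billing|orders)@",  r"\b(invoice|receipt|order)\b"),
    ("Executive email",      2, r"\b(ceo|cfo|ciso)@",  r""),
]
DEFAULT = ("Default retention policy", 1)  # everything else; spam is simply not retained

def classify(sender, subject):
    """Return (category, retention_years) for a message using simple pattern matching."""
    for category, years, sender_pat, subject_pat in RULES:
        if re.search(sender_pat, sender, re.I) or (
            subject_pat and re.search(subject_pat, subject, re.I)
        ):
            return category, years
    return DEFAULT

print(classify("billing@example.com", "Your invoice for March"))
# -> ('Transaction receipts', 3)
```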

Email Retention Policy Best Practice #3: Draft a Real Policy… But Don’t Include What You Won’t Enforce

A written policy, approved by legal counsel and senior management, will give you the requirements and authority to implement all the IT, security and process controls you need. If you haven’t seen a full retention policy yet, please take the time to search the web for a few, such as the template from the University of Wisconsin (Go Badgers! Sorry… proud alum).

Note that many “email retention policy” documents (including the UW template) cover much more than email! In general, this is OK because a “document policy” gives you what you need to implement an “email policy”, but you’ll want to make a point of talking the “document vs. email” terminology through with your legal team before you finalize your policy.

A good written policy (again, including the UW template) always contains these sections:

  • Purpose: why does this policy exist? If specific regulations informed the creation of this policy, they should all be listed here.
  • Retention time, by segment: how long various types of content or content used in a particular manner must be retained (the UW template segments by type of content). Durations are often listed in years, may include triggers (e.g., “after X”) and may even be “Permanent”.
  • Differences between “paper” and “electronic” documents: ideally, none.
  • What constitutes “destruction”: usually shredding and deleting, often “secure deletion” (e.g., with overwriting) and degaussing of media where applicable.
  • Pause destruction if legal action imminent: your legal department will normally add this for you, but you can show off your legal bona fides by including a clause instructing IT to pause automatic email deletion if the company becomes the subject of a claim or lawsuit (this is also called a “litigation hold”).
  • Who is responsible: typically everyone who touches the documents, often with special roles for certain titles (e.g., “Chief Archivist”) or groups (e.g., “legal counsel”).

Good written policies omit areas that you won’t or can’t support, especially types of segmentation you will not be able to determine or support. Good policies also refer to capabilities and requirements (e.g., offsite archival) rather than specific technologies and processes (e.g., DAT with daily courier shipments).

Email Retention Policy Best Practice #4: Price Preferred Solution and Alternatives By Duration and Segment

Let’s pretend that you have a policy like the following:

  • All email: retain on fast storage for 18 months
  • Purchase transaction emails: also archive to offline storage until 5 years have passed
  • Legal emails: also archive to offline storage until 7 years have passed
  • “Fast storage” = accessible through end user’s email clients through “folders”; normally only individual users can access, but administrators and archival specialists (e.g., the legal team) can access too
  • “Offline storage” = accessible through internal utility and search; only administrators and archival specialists (e.g., the legal team) can access

To price an appropriate solution, you would restate your requirements based on number of users, expected volume of email and expected rate of growth. For example, in a 500-person company where each user averaged 1MB and 100 messages of email a day, there were 5000 additional transaction emails (total 50MB) a day and 100 additional legal emails (total 20MB) a day, and volumes were expected to increase 10% per year, here’s how we might estimate minimum requirements for the next seven years:

  • All email: 18 months x 1MB/day per person x 30 days/month x 500 people = 270GB; x 1.8 (roughly 10% annual growth compounded over six years) = 486GB of email server storage
  • Purchase transaction emails: 5 years x 12 months/year x 30 days/month x 50MB/day = 90GB; x 1.8 = 162GB of email archive storage
  • Legal emails: 7 years x 12 months/year x 30 days/month x 20MB/day = 50GB; x 1.8 = 91GB of email archive storage
  • TOTAL: 486GB server + 253GB archive

However, after you’ve priced out your preferred solution, you still need to be prepared to handle alternatives that may result from discussions with legal or your executive team. For example, if the executive team pushes your 18 month blanket retention to 3 years and the legal team “requires” that its emails are always in near-term email storage, how would that change your requirements and pricing?

  • All email: 36 months x 1MB/day per person x 30 days/month x 500 people = 540GB; x 1.8 (same growth allowance as above) = 972GB of email server storage
  • Purchase transaction emails: 5 years x 12 months/year x 30 days/month x 50MB/day = 90GB; x 1.8 = 162GB of email archive storage
  • Legal emails: 7 years x 12 months/year x 30 days/month x 20MB/day = 50GB; x 1.8 = 91GB of email server storage
  • TOTAL: 1063GB server + 162GB archive (i.e., roughly double your real-time server storage!)

Long story short, if you can figure out your own rule-of-thumb per-GB price for the various types of storage necessary to support your archiving scheme (as well as licensing considerations, including any per-message or per-type-of-message rules) you’ll be better prepared for “horse trading” later in the approval process.
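
If you want to reproduce or adjust these figures yourself, the arithmetic is simple enough to script. The sketch below reproduces the two scenarios above; the volumes and the flat 1.8 growth multiplier come straight from the example, decimal gigabytes (1GB = 1000MB) are used to match the rounding above, and the helper name is my own.

```python
GROWTH = 1.8  # flat allowance for roughly 10% annual growth, as in the example above

def storage_gb(days, mb_per_day, users=1):
    """Storage in decimal GB (1 GB = 1000 MB), with the growth allowance applied."""
    return days * mb_per_day * users * GROWTH / 1000

# Preferred solution: 18-month blanket retention on fast storage
server = storage_gb(18 * 30, 1.0, users=500)                               # ~486 GB
archive = storage_gb(5 * 12 * 30, 50.0) + storage_gb(7 * 12 * 30, 20.0)    # ~162 GB + ~91 GB
print(f"preferred:   server ~{server:.0f} GB, archive ~{archive:.0f} GB")

# Alternative: 36-month blanket retention, legal mail kept on the email server
server_alt = storage_gb(36 * 30, 1.0, users=500) + storage_gb(7 * 12 * 30, 20.0)  # ~1063 GB
archive_alt = storage_gb(5 * 12 * 30, 50.0)                                       # ~162 GB
print(f"alternative: server ~{server_alt:.0f} GB, archive ~{archive_alt:.0f} GB")
```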

Email Retention Policy Best Practice #5: Once You Draft Your Policy, Include Legal Before the Executives

If you’re still reading this, chances are good that you (like me) are a senior IT or security professional, or are perhaps even a manager. If you’ve drafted other IT policies, such as an “acceptable use” policy, your first instinct might be to keep your legal team out of the process until your new policy has snowballed down from your IT-based executive sponsor. This is almost always a mistake.

The main reason legal should be included as soon as you have a draft is that two of the best practices listed above (regulatory minimums and viability of segmentation) are really legal’s call – not yours! You will have saved legal a lot of legwork by researching the main drivers of email retention policy and the technical controls you can use to enforce the policy, but at the end of the day legal will be called upon to defend the company’s decision to keep or toss critical information, so legal will need to assign the final values to your policy limits.

A second reason to include legal before your executives is that you want to present a unified front (as IT and legal) on your maximum retention limits. Once you get into negotiations with your executive team, legal will likely be pushing for even shorter limits (because shorter retention limits the threat of hostile eDiscovery) and the executives will be pushing for even longer limits (because email doubles as their archive of old decisions and documents). This puts you (as IT) in the rational middle and gives your policy a good chance of making it through the negotiations relatively unscathed.

The final reason you want to include legal early is that their calls may force you to reprice the options you laid out before you talked to them, and may cause you to take some options off the table. If you reversed the process and got executives to sign off on a solution that got vetoed by legal and sent back to the executive team for a second round of “ask,” I think you know that no one would be happy.

Conclusion: Your Email Retention Policy Will Be Your Own

Given all the different constraints your organization faces and all the different ways your interactions with your legal and executive team could go, it would be impossible for me to predict what any company’s email retention policy would be. However, if you follow these five best practices when you develop your own, you stand a better-than-average chance of drafting an email retention policy that’s sensible, enforceable, and loved by legal and top management alike.

Crafting a Pen Testing Report

You close the lid of your laptop; it’s been a productive couple of days. There are a few things that could be tightened up, but overall the place isn’t doing a bad job. You exchange pleasantries with the people who have begrudgingly given up their time to escort you, hand in your visitor’s badge and head for the door. Just as you feel the chill of the outside air against your skin, you hear a muffled voice in the background.

“Hey, sorry, I forgot to ask, when can we expect the report?”

Sound familiar?

Ugh, the report. Penetration testing’s least favorite cousin, but ultimately, one of the most important.

There are thousands of books written about information security and pen testing. There are hundreds of hours of training courses that cover the penetration testing process. However, I would happily wager that less than ten percent of all the material out there is dedicated to reporting. This, when you consider that you probably spend 40-50% of the total duration of a pen test engagement actually writing the report, is quite alarming.

It’s not surprising though; teaching someone how to write a report just isn’t as sexy as describing how to craft the perfect buffer overflow, or how to pivot around a network using Metasploit. I totally get that; even learning how the TCP packet structure works for the nineteenth time sounds like a more interesting topic.

A common occurrence amongst many pen testers: not allowing enough time to produce a decent report.

No matter how technically able we are as security testers, it is often a challenge to explain a deeply technical issue to someone who may not have the same level of technical skill. We are often guilty of making assumptions that everyone who works in IT has read the same books, or has the same interests as us. Learning to explain pen test findings in a clear and concise way is an art form, and one that every security professional should take the time to master. The benefits of doing so are great. You’ll develop a better relationship with your clients, who will want to make use of your services over and over again. You’ll also save time and money, trust me. I once drove a 350 mile round trip to go and explain the contents of a penetration test report to a client. I turned up, read some pages of the report aloud with added explanations and then left fifteen minutes later. Had I taken a tiny bit more time clarifying certain issues in my report, I would have saved an entire day of my time and a whole tank of gas.

Diluted: “SSH version one should be disabled as it contains high severity vulnerabilities that may allow an attacker already on the network to intercept and decrypt communications, although the risk of an attacker gaining access to the network is very low, so this reduces the severity.”

Clarified: “It is advisable to disable SSH version one on these devices, failure to do so could allow an attacker with local network access to decrypt and intercept communications.”

Why is a penetration test report so important?

Never forget, penetration testing is a scientific process, and like all scientific processes it should be repeatable by an independent party. If a client disagrees with the findings of a test, they have every right to ask for a second opinion from another tester. If your report doesn’t detail how you arrived at a conclusion, the second tester will have no idea how to repeat the steps you took to get there. This could lead to them offering a different conclusion, making you look a bit silly and worse still, leaving a potential vulnerability exposed to the world.

Bad: “Using a port scanner I detected an open TCP port”.

Better: “Using Nmap 5.50, a port scanner, I detected an open TCP port using the SYN scanning technique on a selected range of ports. The command line was: nmap -sS -p 7000-8000.”

The report is the tangible output of the testing process, and the only real evidence that a test actually took place. Chances are, senior management (who likely approved funding for the test) weren’t around when the testers came into the office, and even if they were, they probably didn’t pay a great deal of attention. So to them, the report is the only thing they have to go on when justifying the expense of the test. Having a penetration test performed isn’t like other types of contract work: once the contract is done, there is no new system implemented and no new code added to an application. Without the report, it’s very hard to explain to someone what exactly they’ve just paid for.

Who is the report for?

While the exact audience of the report will vary depending on the organization, it’s safe to assume that it will be viewed by at least three types of people.

Senior management, IT management and IT technical staff will all likely see the report, or at least part of it. Each of these groups will want different snippets of information. Senior management simply doesn’t care about, or doesn’t understand, what it means if a payment server encrypts connections using SSL version two. All they want is the answer to one simple question: “are we secure – yea or nay?”

IT management will be interested in the overall security of the organization, but will also want to make sure that their particular departments are not the cause of any major issues discovered during testing. I recall giving one particularly damning report to three IT managers. Upon reading it, two of them turned very pale, while the third smiled and said “great, no database security issues then”.

IT staff will be the people responsible for fixing any issues found during testing. They will want to know three things. The name of the system affected, how serious the vulnerability is and how to fix it. They will also want this information presented to them in a way that is clear and organized. I find the best way is to group this information by asset and severity. So for example, “Server A” is vulnerable to “Vulnerability X, Y and Z. Vulnerability Y is the most critical”. This gives IT staff half a chance of working through the list of issues in a reasonable timeframe. There is nothing worse than having to work your way backwards and forwards through pages of report output to try and keep track of vulnerabilities and whether or not they’ve been looked at.

Of course, you could always ask your client how they would like vulnerabilities grouped. After all, the test is really for their benefit and they are the people paying! Some clients prefer to have a page detailing each vulnerability, with affected assets listed under the vulnerability title. This is useful in situations where separate teams may all have responsibilities for different areas of a single asset. For example, the systems team runs the webserver, but the development team writes the code for the application hosted on it.
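
Whichever grouping you agree on, it is far easier to produce if findings are kept in some structured form before they are written up. The snippet below is a minimal sketch of the asset-and-severity grouping described above; the finding entries and field layout are invented for the example.

```python
from collections import defaultdict

# Hypothetical findings: (asset, vulnerability, severity), lower number = more critical
findings = [
    ("Server A", "Vulnerability X", 3),
    ("Server A", "Vulnerability Y", 1),
    ("Server A", "Vulnerability Z", 2),
    ("Server B", "Vulnerability Y", 1),
]

by_asset = defaultdict(list)
for asset, vuln, severity in findings:
    by_asset[asset].append((severity, vuln))

for asset, vulns in sorted(by_asset.items()):
    print(asset)
    for severity, vuln in sorted(vulns):  # most critical first
        print(f"  [severity {severity}] {vuln}")
```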

Although I’ve mentioned the three most common audiences for pen test reports, this isn’t an exhaustive list. Once the report is handed over to the client, it’s up to them what they do with it. It may end up being presented to auditors, as evidence that certain controls are working. It could be presented to potential customers by the sales team. “Anyone can say their product is secure, but can they prove it? We can, look here is a pen test report”.

Reports might even end up getting shared with the whole organization. It sounds crazy, but it happens. I once performed a social engineering test, the results of which were less than ideal for the client. The enraged CEO shared the report with the whole organization, as a way of raising awareness of social engineering attacks. This was made more interesting, when I visited that same company a few weeks later to deliver some security awareness training. During my introduction, I explained that my company did security testing and was responsible for the social engineering test a few weeks back. This was greeted with angry stares and snide comments about how I’d gotten them all into trouble. My response was, as always, “better to give me your passwords than a genuine bad guy”.

What should the report contain?

Sometimes you’ll get lucky and the client will spell out exactly what they want to see in the report during the initial planning phase. This includes both content and layout. I’ve seen this happen to extreme levels of detail, such as what font size and line spacing settings should be used. However, more often than not, the client won’t know what they want and it’ll be your job to tell them.

So without further ado, here are some highly recommended sections to include in pen test reports.

  • A Cover Sheet. This may seem obvious, but the details that should be included on the cover sheet can be less obvious. The name and logo of the testing company, as well as the name of the client should feature prominently. Any title given to the test such as “internal network scan” or “DMZ test” should also be up there, to avoid confusion when performing several tests for the same client. The date the test was performed should appear. If you perform the same tests on a quarterly basis this is very important, so that the client or the client’s auditor can tell whether or not their security posture is improving or getting worse over time. The cover sheet should also contain the document’s classification. Agree this with the client prior to testing; ask them how they want the document protectively marked. A penetration test report is a commercially sensitive document and both you and the client will want to handle it as such.
  • The Executive Summary. I’ve seen some that have gone on for three or four pages and read more like a Jane Austen novel than an abbreviated version of the report’s juicy bits. This needs to be less than a page. Don’t mention any specific tools, technologies or techniques used; this audience simply doesn’t care. All they need to know is what you did (“we performed a penetration test of servers belonging to X application”), what happened (“we found some security problems in one of the payment servers”), and what needs to happen next and why (“you should tell someone to fix these problems and get us in to re-test the payment server; if you don’t, you won’t be PCI compliant and you may get a fine”). The last line of the executive summary should always be a conclusion that explicitly spells out whether or not the systems tested are secure, for example “overall we have found this system to be insecure”. It could even be just a single word.

A bad way to end an executive summary: “In conclusion, we have found some areas where security policy is working well, but other areas where it isn’t being followed at all. This leads to some risk, but not a critical amount of risk.”

A better way: “In conclusion, we have identified areas where security policy is not being adhered to; this introduces risk to the organization and we must therefore declare the system insecure.”

  • Summary of Vulnerabilities. Group the vulnerabilities on a single page so that, at a glance, an IT manager can tell how much work needs to be done. You could use graphics such as tables or charts to make things clearer, but don’t overdo it. Vulnerabilities can be grouped by category (e.g. software issue, network device configuration, password policy), by severity or by CVSS score; the possibilities are endless. Just find something that works well and is easy to understand.

  • Test Team Details. It is important to record the name of every tester involved in the testing process. This is not just so you and your colleagues can be hunted down should you break something. It’s a common courtesy to let a client know who has been on their network and provide a point of contact to discuss the report with. Some clients and testing companies also like to rotate the testers assigned to a particular set of tests. It’s always nice to cast a different set of eyes over a system. If you are performing a test for a UK government department under the CHECK scheme, including the name of the team leader and any team members is a mandatory requirement.
  • List of the Tools Used. Include versions and a brief description of the function. This goes back to repeatability. If anyone is going to accurately reproduce your test, they will need to know exactly which tools you used.

  • A copy of the original scope of work. This will have been agreed in advance, but reprinting it here for reference purposes is useful.
  • The main body of the report. This is what it’s all about. The main body should include details of all detected vulnerabilities, how you detected each vulnerability, clear technical explanations of how it could be exploited, and the likelihood of exploitation. Whatever you do, make sure you write your own explanations; I’ve lost count of the number of reports I’ve seen that are simply copy-and-paste jobs from vulnerability scanner output. It makes my skin crawl; it’s unprofessional, often unclear and frequently irrelevant. Detailed remediation advice should also be included. Nothing is more annoying to the person charged with fixing a problem than receiving flaky remediation advice. For example, “disable SSL version 2 support” does not constitute remediation advice; explain the exact steps required to disable SSL version 2 support on the platform in question. As interesting as reading how to disable SSL version 2 on Apache may be, it’s not very useful if all your servers are running Microsoft IIS. Back up findings with links to references such as vendor security bulletins and CVEs.

Getting the level of detail in a report right is a tricky business. I once wrote a report that was described as “overwhelming” because it was simply too detailed, so on my next test I wrote a less detailed report. This was subsequently rejected because it “lacked detail”. Talk about moving the goalposts. The best thing to do is spend time with the client, learn exactly who the audience will be and what they want to get out of the report.

Final delivery.

When a pilot lands an airliner, their job isn’t over; they still have to navigate the myriad taxiways and park at the gate safely. The same is true of you and your pen test reports: just because the report is finished doesn’t mean you can switch off entirely. You still have to get it out to the client, and you have to do so securely. Electronic distribution using public key cryptography is probably the best option, but it is not always possible. If symmetric encryption is to be used, a strong key should be chosen and transmitted out of band. Under no circumstances should a report be transmitted unencrypted. It all sounds like common sense, but all too often people fall down at the final hurdle.

It’s What’s on the Inside that Counts

The last time I checked, the majority of networking and security professionals were still human.

We all know that the problem with humans is that they sometimes exhibit certain behaviors that can lead to trouble – if that wasn’t the case we’d probably all be out of a job! One such behavior is obsession.

Obsession can be defined as an idea or thought that continually preoccupies or intrudes on a person’s mind. I’ve worked with a number of clients who have had an obsession that may, as bizarre as it seems, have had a negative impact on their information security program.

The obsession I speak of is the thought of someone “breaking in” to their network from the outside.

You’re probably thinking to yourself, how on earth can being obsessed with protecting your network from external threats have a negative impact on your security? If anything it’s probably the only reason you’d want a penetration test in the first place! I’ll admit, you’re correct about that, but allow me to explain.

Every organization has a finite security budget. How they use that budget is up to them, and this is where the aforementioned obsession can play its part. If I’m a network administrator with a limited security budget and all I think about is keeping people out of my network, my shopping list will likely consist of edge firewalls, web-application firewalls, IDS/IPS and a sprinkling of penetration testing.

If I’m a pen tester working on behalf of that network administrator I’ll scan the network and see a limited number of open ports thanks to the firewall, trigger the IPS, have my SQL injection attempts dropped by the WAF and generally won’t be able to get very far. Then my time will be up, I’ll write a nice report about how secure the network is and move on. Six or twelve months later, I’ll do exactly the same test, find exactly the same things and move on again. This is the problem. It might not sound like a problem, but trust me, it is. Once we’ve gotten to this point, we’ve lost sight of the reason for doing the pen test in the first place.

The test is designed to be a simulation of an attack conducted by a malicious hacker with eyes only for the client. If a hacker is unable to break into the network from the outside, chances are they won’t wait around for a few months and try exactly the same approach all over again. Malicious hackers are some of the most creative people on the planet. If we really want to do as they do, we need to give our testing a creativity injection. It’s our responsibility as security professionals to do this, and encourage our clients to let us do it.

Here’s the thing: because both pen testers and clients have obsessed over hackers breaking into stuff for so long, we’ve actually gotten a lot better at stopping them from doing so. That’s not to say that there will never be a stray firewall rule that gives away a little too much skin, or a hastily written piece of code that doesn’t validate input properly, but generally speaking “breaking in” is no longer the path of least resistance at many organizations – and malicious hackers know it. Instead, “breaking out” of a network is the new route of choice.

While everyone has been busy fortifying defenses on the way in to the network, traffic on the way out is seldom subject to such scrutiny – making it a very attractive proposition to an attacker. Of course, the attacker still has to get themselves into position behind the firewall to exploit this – but how? And how can we simulate it in a penetration test?

[Figure: “What the Pen Tester sees” versus “The Whole Picture”]

On-Site Testing

There is no surer way of getting onto the other side of the firewall than heading to your client’s office and plugging directly into their network. This isn’t a new idea by any means, but it’s something that’s regularly overlooked in favor of external or remote testing. The main reason for this, of course, is cost: putting a tester up in a hotel for a few nights and paying travel expenses can put additional strain on the security budget. However, doing so is a hugely valuable exercise for the client. I’ve tested networks from the outside that showed little room for enumeration, let alone exploitation. But once I headed on-site and came at those networks from a different angle, the angle no one ever thinks of, I had trouble believing they were the same entity.

To give an example, I recall doing an on-site test for a client who had just passed an external test with flying colors. Originally they had only wanted the external test, which was conducted against a handful of IPs, but I managed to convince them that in their case an internal test would provide additional value. I arrived at the office about an hour and a half early and sat out in the parking lot waiting to go in. I fired up my laptop and noticed a wireless network secured with WEP; the SSID was also the name of the client. You can probably guess what happened next. Four minutes later I had access to the network and was able to compromise a domain controller via a flaw in some installed backup software. All of this without leaving the car. Eventually, my point of contact arrived and said, “So, are you ready to begin, or do you need me to answer some questions first?” The look on his face when I told him I’d actually already finished is one I’ll never forget. Just think: had I only performed the external test, I would have been denied that pleasure. Oh, and of course I would never have picked up on the very insecure wireless network, which is kind of important too.

This is just one example of the kind of thing an internal test can uncover that wouldn’t have even been considered during an external test. Why would an attacker spend several hours scanning a network range when they could just park outside and connect straight to the network?

One of my favorite on-site activities is pretending I’m someone with employee level access gone rogue. Get on the client’s standard build machine with regular user privileges and see how far you can get on the network. Can you install software? Can you load a virtual machine? Can you get straight to the internet, rather than being routed through a proxy? If you can, there are a million and one attack opportunities at your fingertips.

The majority of clients I’ve performed this type of test for hugely overestimated their internal security. It’s well documented that the greatest threat comes from the inside, either on purpose or by accident. But of course, everyone is too busy concentrating on the outside to worry about what’s happening right in front of them.

[Figure: Good – networks should be just as hard to break out of as they are to break into.]

Fortunately, some clients are required to have this type of testing, especially those in government circles. In addition, several IT security auditing standards require a review of internal networks. The depth of these reviews is sometimes questionable though. Auditors aren’t always technical people, and often the review will be conducted against diagrams and documents of how the system is supposed to work, rather than how it actually works. These are certainly useful exercises, but at the end of the day a certificate with a pretty logo hanging from your office wall won’t save you when bad things happen.

Remote Workers

Having a remote workforce can be a wonderful thing. You can save a bunch of money by not having to maintain a giant office and the associated IT infrastructure. The downside is that in many organizations the priority is getting people connected and working, rather than properly enforcing security policy. The fact is that if you allow someone to connect remotely into the heart of your network with a machine you do not have total control over, your network is about as secure as the internet. You are, in effect, extending your internal network out past the firewall to the unknown. I’ve seen both ends of the spectrum, from an organization that would only allow people to connect in using routers and machines that it had configured and installed, to an organization that provided a link to a VPN client and said “get on with it”.

I worked with one such client who was starting to rely on remote workers more and more, and had recognized that this could introduce a security problem. They arranged for me to visit the homes of a handful of employees and see if I could somehow gain access to the network’s internal resources. The first employee I visited used his own desktop PC to connect to the network. He had been issued a company laptop, but preferred the big screen, keyboard and mouse that were afforded to him by his desktop. The machine had no antivirus software installed, no client firewall running and no disk encryption. This was apparently because all of these things slowed it down too much. Oh, but it did have a peer-to-peer file sharing application installed. No prizes for spotting the security risks here.

In the second home I visited, I was pleased to see the employee using her company-issued XP laptop. Unfortunately, she was using it on her unsecured wireless network. To demonstrate why this was a problem, I joined my testing laptop to the network, fired up a Metasploit session and hit the IP with my old favorite, the MS08-067 NetAPI32.dll exploit module. Sure enough, I got a shell and was able to pivot my way into the remote corporate network. It was at this point that I discovered the VPN terminated in a subnet with unrestricted access to the internal server subnet. When I pointed out to the client that there really should be some sort of segregation between these two areas, I was told that there was. “We use VLANs for segregation”, came the response. I’m sure everyone reading this knows that segregation using VLANs, at least from a security point of view, is about as useful as segregating a lion from a Chihuahua with a piece of rice paper. Ineffective, unreliable, and it will result in an unhappy ending.

[Figure: Bad – the VPN appliance is located in the core of the network.]

Social Engineering

We all know that this particular activity is increasing in popularity amongst our adversaries, so why don’t we do it more often as part of our testing? Well, simply put, a lot of the time this comes down to politics. Social engineering tests are a touchy subject at some organizations, which fear a legal backlash if they do anything to blatantly demonstrate how their own people are subject to the same flaws as the seven billion others on the planet. I’ve been in scoping meetings where, as soon as the subject of social engineering came up, I was stared at harshly and told in no uncertain terms, “Oh, no way, that’s not what we want, don’t do that.” But why not do it? Don’t you think a malicious hacker would? You’re having a pen test, right? Do you think a malicious hacker would hold off on social engineering because they haven’t gotten your permission to try it? Give me a break.

On the other hand, I’ve worked for clients who have recognized the threat of social engineering as one of the greatest to their security, and relished the opportunity to have their employees tested. Frequently, these tests result in a success rate of greater than 80%. So how are they done?

Well, they usually start off with the tester registering a domain name which is extremely similar to the client’s. Maybe with one character different, or a different TLD (“.net” instead of “.com” for example).

The tester’s next step would be to set up a website that heavily borrows CSS code from the client’s site. All it needs is a basic form with username and password fields, as well as some server side coding to email the contents of the form to the tester upon submission.

[Figure: With messages like this one in an online meeting product, it’s no wonder social engineering attacks are so successful.]

Finally, the tester will send out an email with some half-baked story about a new system being installed, or special offers for employees “if you click this link and log in”, then sit back and wait for the responses to come in. Follow these basic steps and within a few minutes you’ve got a username, a password and employee-level access. Now all you have to do is find a way to use that to break out of the network, which won’t be too difficult, because everyone will be looking the other way.

Conclusion

The best penetration testers out there are those who provide the best value to the client. This doesn’t necessarily mean the cheapest or the quickest. Instead, it’s those who make the most effective use of their relatively short window of time, and work within whatever other limitations they face, to do the job right. Never forget what that job is, and why you are doing it. Sometimes we have to put our generic testing methodologies aside and deliver a truly bespoke product. After all, there is nothing more bespoke than a targeted hacking attack, which can come from any direction. Even from the inside.