Tips & Tricks

REVIEW – “The Florentine Deception”, Carey Nachenberg

BKFLODEC.RVW   20150609

“The Florentine Deception”, Carey Nachenberg, 2015, 978-1-5040-0924-9,
U$13.49/C$18.91
%A   Carey Nachenberg http://florentinedeception.com
%C   345 Hudson Street, New York, NY   10014
%D   2015
%G   978-1-5040-0924-9 150400924X
%I   Open Road Distribution
%O   U$13.49/C$18.91 www.openroadmedia.com
%O  http://www.amazon.com/exec/obidos/ASIN/150400924X/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/150400924X/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/150400924X/robsladesin03-20
%O   Audience n+ Tech 3 Writing 2 (see revfaq.htm for explanation)
%P   321 p.
%T   “The Florentine Deception”

It gets depressing, after a while.  When you review a bunch of books on the basis of the quality of the technical information, books of fiction are disappointing.  No author seems interested in making sure that the technology is in any way realistic.  For every John Camp, who pays attention to the facts, there are a dozen Dan Browns who just make it up as they go along.  For every Toni Dwiggins, who knows what she is talking about, there are a hundred who don’t.

So, when someone like Carey Nachenberg, who actually works in malware research, decides to write a story using malicious software as a major plot device, you have to be interested.  (And besides, both Mikko Hypponen and Eugene Spafford, who know what they are talking about, say it is technically accurate.)

I will definitely grant that the overall “attack” is technically sound.  The forensics and anti-forensics make sense.  I can even see young geeks with more dollars than sense continuing to play “Nancy Drew” in the face of mounting odds and attackers.  That a vulnerability can continue to go undetected for more than a decade would ordinarily raise a red flag, but Nachenberg’s premise is realistic (especially since I know of a vulnerability at that very company that went unfixed for seven years after they had been warned about it).  That a geek goes rock-climbing with a supermodel we can put down to poetic licence (although it may increase the licence rates).  I can’t find any flaws in the denouement.

But.  I *cannot* believe that, in this day and age, *anyone* with a background in malware research would knowingly stick a thumb/jump/flash/USB drive labelled “Florentine Controller” into his, her, or its computer.  (This really isn’t an objection: it would only take a couple of pages to have someone run up a test to make sure the thing was safe, but …)

Other than that, it’s a joy to read.  It’s a decent thriller, with some breaks to make it relaxing rather than exhausting (too much “one damn thing after another” gets tiring), good dialogue, and sympathetic characters.  The fact that you can trust the technology aids in the “willing suspension of disbelief.”

While it doesn’t make any difference to the quality of the book, I should mention that Carey is donating all author profits from sales of the book to charity:
http://florentinedeception.weebly.com/charities.html

copyright, Robert M. Slade   2015   BKFLODEC.RVW   20150609

Developing an IR Process and Team

In our world today, we have an abundance of many things, among them unexpected events. Falling meteorites, terrorist attacks, hacktivist demonstrations, blackouts, tsunamis… well, you get the point. Now, although the majority of the events I just mentioned probably fall into a Disaster Recovery category, they are nonetheless events that greatly impact our personal lives and disrupt the normal ebb and flow of the daily routine. On the professional side of life, there are also incidents that, although classified on a lower scale than a disaster, still create much disruption and, depending on how they are handled, can have a long-lasting impact on the flow of business. The purpose of this article is to discuss some suggested methods for building an incident response team and related procedures that will enable this group to respond to these events expeditiously.

TERMINOLOGY DEFINED

Before we start to discuss the mechanics behind building this elite group of technical emergency responders, let’s understand what we’re up against. First of all, let’s get our terminology straight. What exactly am I referring to when I use the terms “event” and “incident”? To give this article some context, consider the following definitions, courtesy of Merriam-Webster…

  • An incident is defined as “an occurrence […] that is a separate unit of experience”.
  • An event can be defined as “something that happens: an occurrence” or “a noteworthy happening”.

Let’s break this down; if we use the example of a small electrical fire in the basement of a building, this can be categorized as an individual “incident” or as a “separate unit of experience”. Now, if this incident is not handled properly, it can escalate and possibly grow to become a fire so large that it consumes the entire building. The incineration of the building can be categorized as an “event”, which is sort of an umbrella term that groups the causes and effects of the entire disaster or “noteworthy happening” into one category.

Applying this understanding to the enterprise, items such as data breaches, hacking attempts, critical server crashes, website defacements or social engineering attempts can be classified as individual “incidents”. This is because they may affect business or the corporate reputation but may not completely halt the business flow of the company. If not addressed properly, these incidents, although small, could escalate and succeed in completely halting the business, resulting in a disaster or large-scale “event”. Hopefully, this explanation clarifies the difference between events and incidents, as this understanding will determine how each occurrence is handled. This now brings us to our next section…

PLANNING AN INCIDENT RESPONSE PROCESS

This step can seem daunting if you’ve never been involved with Incident Response or you’re trying to decide where a process like this might fit into your particular environment. How can we go about organizing all the related business groups and technical areas, and how can we find out if we’re missing anything? The good news is that in the majority of cases, there is already some type of set process that is followed whenever incidents occur. The problem, however, is that the process may not be documented, and since it’s an informal process, there is a great chance that core response components are missing or have been overlooked. The benefit of identifying any existing process your organization may have is that it is much easier to train employees using a foundation to which they are already accustomed. It may also be much easier to gain upper management’s support and buy-in for a process that is already being followed, albeit informally.

This support is necessary because management will need to approve any funding that is required, as well as the allocation of time for the individuals who will form part of the official team. Without this support, it’s possible that your project will never get off the ground, or that after all the hard work the process will be scrapped or drastically changed, and then it’s back to the proverbial drawing board. This can be extremely frustrating, so be sure to do your homework: identify anything that may already be in place and, if appropriate, incorporate it into your draft IR process. This way you’ll have a deep understanding of how the process should flow when having discussions with upper management, and you will be able to defend any modifications, enhancements or complete overhauls.

Keep in mind that when speaking with management, your initial draft is just that – a draft. Be prepared to have a detailed conversation so you can understand what their expectations are and so that you properly define what your incident process is providing. It’s possible that in these initial conversations you will identify areas that need to be modified or added. If this step is not accomplished correctly, it’s possible that the functions of your future IR team will not be understood or properly recognized. This could result in your process not being properly advertised to the enterprise, in which case it simply becomes just another “informal process”. Be sure to gain management’s approval, and communicate and advertise your new structure so that when an incident does occur, your new framework will be used. This will eliminate any overlap and ensure that the authority of the members of your future IR team remains fully recognized.

Some other questions that you may ponder along the way:

  • How far will IR processes be able to reach?
  • Who will make up the IR Team’s client base?

The first question relating to the reach of the IR process speaks to cases where critical services and applications are provided by external third parties. In these cases, you will have to decide on how far the IR process will flow and if a “hand-off” needs to occur. This needs to be explored at length since this will make your resolution process dependent on the efforts of an outside entity.

Questions like these are highly important because in many enterprise environments there are multiple areas that are critical to business operations. This brings us to the second question regarding the IR client base. This refers to subsidiaries or operating companies that, although separate, may fall under the auspices of the parent organization. You need to understand the relationship to these companies and whether they provide critical applications, services or other related business functions. More than likely, these entities will also have to fall under the scope of your IR process, and it will be necessary to identify key stakeholders at those locations to support your IR efforts. This raises the question… who should form part of the Incident Response team?

INCIDENT RESPONSE ROLES AND RESPONSIBILITIES

Depending on what you read, you may find different titles and roles for Incident Response. The following listing is an outline of some roles and responsibilities that I used when building an IR plan at a past employer. Each environment is unique, so you will need to research your own requirements and then tailor a plan that meets your needs. Generally, the types of roles that should exist within an IR function are:

Incident Response Officer – This individual is the Incident Response champion who has ultimate accountability for the actions of the IR team and the IR function. This person should be an executive-level employee, such as a CISO or similar corporate representative. It would be very beneficial if this individual has direct reporting access to the CEO and is a peer of other C-level executives.

Incident Response Manager – This individual leads the efforts of the IR team and coordinates activities between all of its respective groups. Normally, this person would receive initial IR alerts and be responsible for activating the IR team and managing all parts of the IR process, from discovery and assessment through remediation and, finally, resolution. This individual reports to the Incident Response Officer.

Incident Response Assessment Team – This group is composed of representatives from the different areas serviced by the IR team. This allows expertise from every critical discipline to weigh in on classification and severity decisions once an incident has been identified. It is very beneficial to have representatives from IT, Security, Application Support and other business areas. In the event of an incident, the IR Manager would gather details of the incident from the affected site, begin tracking and documentation (possibly through an internal ticket management system) and then activate the Assessment Team. This group would then discuss the details of the incident and, based on their expertise and knowledge of the business, assign an initial severity. This team reports to the IR Manager.

Remote Incident Response Coordinator – This role should be assigned to qualified and capable individuals located in other geographic areas. These individuals ultimately report to the Incident Response Manager, but in their geographic region they are recognized as IR leaders. This allows these coordinators to manage the efforts of local custodians during an incident. This configuration is very useful, especially for organizations that have offices in multiple time zones. If an IR Manager is located in the United States but an incident occurs in a Malaysian branch, it will be helpful to have a local security leader who is able to direct efforts and provide status updates to the Incident Manager. This way, regardless of the time zone, the correct actions will be invoked promptly.

Incident Response Custodians – These individuals are the technical experts and application support representatives that would be called upon to assist in the remediation and resolution of a given incident. They report to the Incident Response Manager or to the Remote IR Coordinator(s) depending on their location(s).

Once you’ve been able to identify the proper stakeholders that will form your team, you will have to provide an action framework they’ll be able to use when carrying out their responsibilities. Think of this “action framework” as a set of training wheels that will guide your IR team. What does this mean? Let’s move on to the next section to discuss this…

INCIDENT RESPONSE PROCESS FLOW

A part of outlining this framework involves the identification of IR Severity Levels. These levels will help your team understand the severity of an event and will govern the team’s response. Some suggestions for these levels are the following:

SEVERITY LEVEL   LEVEL OF BUSINESS IMPACT   RESOLUTION EFFORT REQUIRED
SEVERITY 1       LOW                        LOW EFFORT
SEVERITY 2       MODERATE                   MODERATE EFFORT
SEVERITY 3       HIGH                       EXTENSIVE, ONGOING EFFORT
SEVERITY 4       SEVERE                     DISASTER RECOVERY INVOKED

Earlier in this article, I mentioned the benefit of identifying any existing informal process that your company may already be following. If so, it will now be necessary for you to step through that process mentally, keeping in mind your identified severity levels so that you can start to document each step of the process. You will undoubtedly start to remove irrelevant portions of the informal process but may opt to keep certain items in place. (For example, certain notification procedures may still be useful and you may continue to use these in your new IR process to alert members of your team). If you don’t have a starting point like this and you’re starting from scratch, then perhaps the following suggestions can provide some direction.

Start to create a documented action script that will outline your response steps so your IR Manager can follow them consistently. Your script should show steps similar to the following:

STEP #   ACTION
1        Incident announced
2        IR Manager alerted
3        IR Manager begins information gathering from affected site
4        IR Manager begins tracking and documentation of incident
5        IR Manager invokes Assessment Team
         (details of call bridge or other communication mechanism)
6        Assessment Team reviews details and decides on Severity Level of incident
7        IF SEV 1 = PROCEED TO STEP #11.0
8        IF SEV 2 = PROCEED TO STEP #12.0
9        IF SEV 3 = PROCEED TO STEP #13.0
10       IF SEV 4 = PROCEED TO STEP #14.0

FOR SEVERITY LEVEL 1 – Proceed with the following sequence
11.0     Determine attack vectors being used by threat
11.1     Determine network locations that are impacted
11.2     Identify areas that fall under “Parent Organization”
11.3     Identify systems or applications that are impacted

FOR SEVERITY LEVEL 2 – Proceed with the following sequence
12.0     Determine attack vectors being used by threat
12.1     Alert Incident Officer to Severity 2 threat
This of course is an extremely high-level example, but as you can see, it is possible to flesh out the majority of the process with specific action items for each severity level. Be sure to thoroughly research your unique environment to develop a process that fits your needs. You may have to add custom steps to cover incidents that span multiple countries and subsidiaries. Once you’ve created your process, you may want to consider developing small wallet-sized scripts for the members of your Assessment Team and the other key players on whom you will need to depend to make this run efficiently. In this way, each member will have the necessary information on hand to respond as expected.
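Some teams also find it handy to keep the action script in a machine-readable form and generate the wallet cards, wiki pages or ticket templates from it. The following is a minimal Python sketch of that idea, not a prescribed tool: the common steps and the Severity 1 and 2 sequences are taken from the example table above, while the function and variable names are purely illustrative.

# Minimal sketch: the IR action script above, keyed by severity level.
# Step text mirrors the example table; names and structure are illustrative.

COMMON_STEPS = [
    "Incident announced",
    "IR Manager alerted",
    "IR Manager begins information gathering from affected site",
    "IR Manager begins tracking and documentation of incident",
    "IR Manager invokes Assessment Team (call bridge or other communication mechanism)",
    "Assessment Team reviews details and decides on Severity Level of incident",
]

SEVERITY_SEQUENCES = {
    1: [
        "Determine attack vectors being used by threat",
        "Determine network locations that are impacted",
        "Identify areas that fall under 'Parent Organization'",
        "Identify systems or applications that are impacted",
    ],
    2: [
        "Determine attack vectors being used by threat",
        "Alert Incident Officer to Severity 2 threat",
    ],
    # Severity 3 and 4 sequences would follow the same pattern; the
    # Severity 4 sequence ends by invoking Disaster Recovery.
}

def print_action_script(severity: int) -> None:
    """Print the common steps, then the sequence for the given severity."""
    for number, step in enumerate(COMMON_STEPS, start=1):
        print(f"{number}  {step}")
    for offset, step in enumerate(SEVERITY_SEQUENCES.get(severity, [])):
        print(f"{severity + 10}.{offset}  {step}")

if __name__ == "__main__":
    print_action_script(2)   # prints steps 1-6, then 12.0 and 12.1

Calling print_action_script(2) prints the six common steps followed by the 12.0 and 12.1 items, matching the numbering convention used in the table above.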

This article just scratches the surface of the work that is required to build a full IR process but hopefully this has given you some direction and additional areas to explore when planning your next IR project!


Best Email Retention Policy Practices

Email retention policies are no longer just about conserving space on your Exchange server. Today you must take into account how your email retention controls increase or decrease risk to your company.

Pros and Cons of Short and Long Email Retention Policies

Generally speaking, longer email retention policies increase the risk that a security vulnerability or unauthorized user could expose your company’s secrets or embarrassing material. Long policies also increase your company’s exposure to legal examination that focuses on conversations and decisions captured in emails (this is also known as the “paper trail” in an “eDiscovery” process).

Shorter email retention policies help avoid these problems and are cheaper to implement, but they have their own significant disadvantages as well. First, short policies tend to annoy long-term employees and often executives, who rely on old email chains to recollect past decisions and the context in which they were made. Second, short policies may violate federal, state, local and/or industry regulations that require certain types of information to be retained for a minimum period of time – often years!

Best Practices to Develop Your Email Retention Policy

Obviously, you must balance these factors and others when you develop your own email retention policy, but there are a number of best practices that can help you draft and get support for a solid email retention policy. Today, I’ll be covering five practices often used by effective professionals and managers.

Email Retention Policy Best Practice #1: Start With Regulatory Minimums

Your email retention policy should begin by listing the various regulations your company is subject to and the relevant document retention requirements involved with each regulation.

Every industry is regulated differently, and businesses are often subject to different tax, liability and privacy regulations depending on the locations in which they do business. However, some common recommended retention periods include:

If a retention period is not known for a particular type of data, seven years (the minimum IRS recommendation) is often used as a safe common denominator.

Email Retention Policy Best Practice #2: Segment As Necessary To Avoid Keeping Everything For the Legal Maximum

As you can see from the list above, recommended retention periods vary widely even within highly regulated industries. With that in mind, it often pays to segment different types or uses of email into different retention periods to avoid subjecting your entire online email store to the maximum email retention period.

Segmentation by type of content looks something like this:

  • Invoices – 7 years
  • Sales Records – 5 years
  • Petty Cash Vouchers – 3 years

Segmentation by type of use looks something like this:

  • Administrative correspondence (e.g., human resources) – 5 years
  • Fiscal correspondence (e.g., revenue and expenses) – 4 years
  • General correspondence (e.g., customer interactions, internal threads) – 3 years
  • Ephemeral correspondence (e.g., everything else business-related) – 1 year
  • Spam – not retained

Mixed segmentation is also often common and looks something like this:

  • Human resources – 7 years
  • Transaction receipts – 3 years
  • Executive email – 2 years
  • Spam – not retained
  • Everything else (e.g., “default retention policy”) – 1 year

The rules and technologies you use to inspect, classify and segment can vary from simple sender- and subject-matching to sophisticated engines that intuit intent and history. (Unfortunately, space does not permit us to examine these technologies here, but trust me – they exist!)
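To give a flavour of the simple end of that spectrum, here is a minimal Python sketch of sender- and subject-matching rules that drop each message into a retention segment. The segment names and durations mirror the mixed example above; the regular expressions, domain names and function name are assumptions made purely for illustration.

import re

# Minimal sketch: classify a message into a retention segment using simple
# sender- and subject-matching. Segments/durations mirror the mixed example
# above; the patterns themselves are illustrative assumptions.
RETENTION_RULES = [
    # (segment name, retention in years, sender regex, subject regex)
    ("Human resources",      7, r"@hr\.example\.com$",      r""),
    ("Transaction receipts", 3, r"@billing\.example\.com$", r"receipt|invoice"),
    ("Executive email",      2, r"@exec\.example\.com$",    r""),
    ("Spam",                 0, r"",                        r"\[SPAM\]"),
]
DEFAULT_SEGMENT = ("Default retention policy", 1)

def classify(sender: str, subject: str) -> tuple[str, int]:
    """Return (segment, retention_years) for a message; first match wins."""
    for segment, years, sender_pat, subject_pat in RETENTION_RULES:
        if sender_pat and not re.search(sender_pat, sender, re.IGNORECASE):
            continue
        if subject_pat and not re.search(subject_pat, subject, re.IGNORECASE):
            continue
        return segment, years
    return DEFAULT_SEGMENT

print(classify("noreply@billing.example.com", "Your receipt for order 1234"))
# -> ('Transaction receipts', 3)

Rules are evaluated top to bottom and the first match wins, so more specific segments should be listed before more general ones.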

Email Retention Policy Best Practice #3: Draft a Real Policy… But Don’t Include What You Won’t Enforce

A written policy, approved by legal counsel and senior management, will give you the requirements and authority to implement all the IT, security and process controls you need. If you haven’t seen a full retention policy yet, please take the time to search the web for a few, such as this template from the University of Wisconsin (Go Badgers! Sorry… proud alum.)

Note that many “email retention policy” documents (including the UW template) cover much more than email! In general, this is OK because a “document policy” gives you what you need to implement an “email policy”, but you’ll want to make a point of talking the “document vs. email” terminology through with your legal team before you finalize your policy.

A good written policy (again, including the UW template) always contains these sections:

  • Purpose: why does this policy exist? If specific regulations informed the creation of this policy, they should all be listed here.
  • Retention time, by segment: how long various types of content or content used in a particular manner must be retained (the UW template segments by type of content). Durations are often listed in years, may include triggers (e.g., “after X”) and may even be “Permanent”.
  • Differences between “paper” and “electronic” documents: ideally, none.
  • What constitutes “destruction”: usually shredding and deleting, often “secure deletion” (e.g., with overwriting) and degaussing of media where applicable.
  • Pause destruction if legal action imminent: your legal department will normally add this for you, but you can show off your legal bona fides by including a clause instructing IT to pause automatic email deletion if the company becomes the subject of a claim or lawsuit (this is also called a “litigation hold”).
  • Who is responsible: typically everyone who touches the documents, often with special roles for certain titles (e.g., “Chief Archivist”) or groups (e.g., “legal counsel”).

Good written policies omit areas that you won’t or can’t support, especially types of segmentation you will not be able to determine or support. Good policies also refer to capabilities and requirements (e.g., offsite archival) rather than specific technologies and processes (e.g., DAT with daily courier shipments).

Email Retention Policy Best Practice #4: Price Preferred Solution and Alternatives By Duration and Segment

Let’s pretend that you have a policy like the following:

  • All email: retain on fast storage for 18 months
  • Purchase transaction emails: also archive to offline storage until 5 years have passed
  • Legal emails: also archive to offline storage until 7 years have passed
  • “Fast storage” = accessible through end user’s email clients through “folders”; normally only individual users can access, but administrators and archival specialists (e.g., the legal team) can access too
  • “Offline storage” = accessible through internal utility and search; only administrators and archival specialists (e.g., the legal team) can access

To price an appropriate solution, you would restate your requirements based on number of users, expected volume of email and expected rate of growth. For example, in a 500-person company where each user averaged 1MB and 100 messages of email a day, there were 5000 additional transaction emails (total 50MB) a day and 100 additional legal emails (total 20MB) a day, and volumes were expected to increase 10% per year, here’s how we might estimate minimum requirements for the next seven years:

  • All email: 18 months x 1MB/day-person x 30 days/month x 500 people = 270GB x 1.8 (about 10% annual growth compounded over 6 years) = 486GB email server storage
  • Purchase transaction emails: 5 years x 12 months/year x 30 days/month x 50MB/day = 90GB x 1.8 = 162GB email archive storage
  • Legal emails: 7 years x 12 months/year x 30 days/month x 20MB/day = 50GB x 1.8 = 91GB email archive storage
  • TOTAL: 486GB server + 253GB archive

However, after you’ve priced out your preferred solution, you still need to be prepared to handle alternatives that may result from discussions with legal or your executive team. For example, if the executive team pushes your 18 month blanket retention to 3 years and the legal team “requires” that its emails are always in near-term email storage, how would that change your requirements and pricing?

  • All email: 36 months x 1MB/day-person x 30 days/month x 500 people = 540GB x 1.8 (about 10% annual growth compounded over 6 years) = 972GB email server storage
  • Purchase transaction emails: 5 years x 12 months/year x 30 days/month x 50MB/day = 90GB x 1.8 = 162GB email archive storage
  • Legal emails: 7 years x 12 months/year x 30 days/month x 20MB/day = 50GB x 1.8 = 91GB email server storage
  • TOTAL: 1063GB server + 162GB archive (i.e., DOUBLE your realtime storage!)

Long story short, if you can figure out your own rule-of-thumb per-GB price for the various types of storage necessary to support your archiving scheme (as well as licensing considerations, including any per-message or per-type-of-message rules) you’ll be better prepared for “horse trading” later in the approval process.
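If you want to re-run that arithmetic quickly every time a retention period moves during negotiation, a small calculator script saves a lot of spreadsheet fiddling. The sketch below simply reproduces the worked example above, including its 1.8 growth factor and its rounding of 1,000MB to 1GB; the figures are the example’s, not recommendations.

# Back-of-the-envelope sizing, mirroring the worked example above.
# 1.8 ~= 1.1 ** 6, i.e. roughly 10% annual growth compounded over six years.
GROWTH_FACTOR = 1.8
DAYS_PER_MONTH = 30

def gb(mb: float) -> float:
    return mb / 1000.0  # the example rounds 1,000 MB to 1 GB

def all_email_gb(months: int, mb_per_user_day: float, users: int) -> float:
    return gb(months * DAYS_PER_MONTH * mb_per_user_day * users) * GROWTH_FACTOR

def archive_gb(years: int, mb_per_day: float) -> float:
    return gb(years * 12 * DAYS_PER_MONTH * mb_per_day) * GROWTH_FACTOR

# Preferred solution: 18-month blanket retention
server = all_email_gb(18, 1, 500)                # ~486 GB
archive = archive_gb(5, 50) + archive_gb(7, 20)  # ~162 GB + ~91 GB

# Alternative: 36-month blanket retention, legal email kept on the server
alt_server = all_email_gb(36, 1, 500) + archive_gb(7, 20)  # ~972 GB + ~91 GB
alt_archive = archive_gb(5, 50)                            # ~162 GB

print(f"preferred:   {server:.0f} GB server + {archive:.0f} GB archive")
print(f"alternative: {alt_server:.0f} GB server + {alt_archive:.0f} GB archive")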

Email Retention Policy Best Practice #5: Once You Draft Your Policy, Include Legal Before the Executives

If you’re still reading this, chances are good that you (like me) are a senior IT or security professional, or are perhaps even a manager. If you’ve drafted other IT policies, such as an “acceptable use” policy, your first instinct might be to keep your legal team out of the process until your new policy has snowballed down from your IT-based executive sponsor. This is almost always a mistake.

The main reason legal should be included as soon as you have a draft is that two of the best practices listed above (regulatory minimums and viability of segmentation) are really legal’s call – not yours! You will have saved legal a lot of legwork by researching the main drivers of email retention policy and the technical controls you can use to enforce the policy, but at the end of the day legal will be called upon to defend the company’s decision to keep or toss critical information, so legal will need to assign the final values to your policy limits.

A second reason to include legal before your executives is that you want to present a unified front (as IT and legal) on your maximum retention limits. Once you get into negotiations with your executive team, legal will likely be pushing for even shorter limits (because it limits the threat of hostile eDiscovery) and the executives will be pushing for even longer limits (because email is their old document storage). This puts you (as IT) in the rational middle and gives your policy a good chance of making it through the negotiations relatively unscathed.

The final reason you want to include legal early is that their calls may force you to reprice the options you laid out before you talked to them, and may cause you to take some options off the table. If you reversed the process and got executives to sign off on a solution that got vetoed by legal and sent back to the executive team for a second round of “ask,” I think you know that no one would be happy.

Conclusion: Your Email Retention Policy Will Be Your Own

Given all the different constraints your organization faces and all the different ways your interactions with your legal and executive team could go, it would be impossible for me to predict what any company’s email retention policy would be. However, if you follow these five best practices when you develop your own, you stand a better-than-average chance of drafting an email retention policy that’s sensible, enforceable, and loved by legal and top management alike.

Disasters in BC

The auditor general has weighed in, and, surprise, surprise, we are not ready for an earthquake.

On the one hand, I’m not entirely sure that the auditor general completely understands disaster planning, and she hasn’t read Kenneth Myers and so doesn’t know that it can be counter-productive to produce plans for every single possibility.

On the other hand, I’m definitely with Vaughn Palmer that we need more public education.  We are seeing money diverted from disaster planning to other areas, despite a supposed five-fold increase in the emergency budget.  In the past five years, the professional association has been defunded, training is very limited in local municipalities, and even recruitment and “thank you” events for volunteers have almost disappeared.  Emergency planning funds shouldn’t be used to pay for capital projects.

(And the province should have been prepared for an audit in this area, since they got a warning shot last year.)

So, once again, and even more importantly, I’d recommend you all get emergency training.  I’ve said it before, I keep saying it, and I will keep on saying it.

(Stephen Hume agrees with me, although he doesn’t know the half of it.)

New computers – Windows 8 Phone

I was given a Win8Phone recently.  I suppose it may seem like looking a gift horse in the mouth to review it, but:

I must say, first off, that the Nokia Lumia has a lot of power compared to my other phone (and Android tablets), so I like the responsiveness using Twitter.  The antenna is decent, so I can connect to hotspots, even at a bit of a distance.  Also, this camera is a lot better than those on the three Android machines.

I’m finding the lack of functionality annoying.  There isn’t any file access on the phone itself, although the ability to access it via Windows Explorer (when you plug the USB cable into a Windows 7 or 8 computer) is handy.

I find the huge buttons annoying, and the interface for most apps takes up a lot of space.  This doesn’t seem to be adjustable: I can change the size of the font, but only for the content of an app, not for the frame or surround.

http://www.windowsphone.com/en-us/how-to/wp8 is useful: that’s how I found out how to switch between apps (hold down the back key and it gives you a set of icons of running/active apps).

The range of apps is pathetic.  Security aside (yes, I know a closed system is supposed to be more secure), you are stuck with a) Microsoft, or b) completely unknown software shops.  You are stuck with Bing for search and maps: no Google, no Gmail.  You are stuck with IE: no Firefox, Chrome, or Safari.  Oh, sorry, yes you *can* get Firefox, Chrome, and Safari, but not from Mozilla, Google, or Apple: from developers you’ve never heard of.  (Progpack, maker(s) of the Windows Phone store version of Safari, admits it is not the real Safari, it just “looks like it.”)  You can’t get YouTube at all.  No Pinterest, although there is a LinkedIn app from LinkedIn, and a Facebook app–from Microsoft.

It’s a bit hard to compare the interface.  I’m comparing a Nokia Lumia 920 which has lots of power against a) the cheapest Android cell phone Bell had when I had to upgrade my account (ver 2.2), b) an Android 4.3 tablet which is really good but not quite “jacket” portable, and c) a Digital2 Android 4.1 mini-tablet which is probably meant for children and is *seriously* underpowered.

Don’t know whether this is the fault of Windows or the Nokia, but the battery indicators/indications are a major shortcoming.  I have yet to see any indication that the phone has been fully charged.  To get any accurate reading you have to go to the battery page under settings, and even that doesn’t tell you a heck of a lot.  (Last night when I turned it off it said the battery was at 46% which should be good for 18 hours.  After using it four times this morning for a total of about an hour screen time and two hours standby it is at 29%.)

(When I installed the Windows Phone app on my desktop, and did some file transfers while charging the phone through USB I found that the app has a battery level indicator on most pages, so that’s helpful.)

Crafting a Pen Testing Report

You close the lid of your laptop; it’s been a productive couple of days. There are a few things that could be tightened up, but overall the place isn’t doing a bad job. You exchange pleasantries with the people who have begrudgingly given up their time to escort you, hand in your visitor’s badge and head for the door. Just as you feel the chill of outside against your skin, you hear a muffled voice in the background.

“Hey, sorry, I forgot to ask, when can we expect the report?”

Sound familiar?

Ugh, the report. Penetration testing’s least favorite cousin, but ultimately, one of the most important.

There are thousands of books written about information security and pen testing. There are hundreds of hours of training courses that cover the penetration testing process. However, I would happily wager that less than ten percent of all the material out there is dedicated to reporting. This, when you consider that you probably spend 40-50% of the total duration of a pen test engagement actually writing the report, is quite alarming.

It’s not surprising though, teaching someone how to write a report just isn’t as sexy as describing how to craft the perfect buffer overflow, or pivot round a network using Metasploit. I totally get that, even learning how the TCP packet structure works for the nineteenth time sounds like a more interesting topic.

A common occurrence amongst many pen testers: not allowing enough time to produce a decent report.

No matter how technically able we are as security testers, it is often a challenge to explain a deeply technical issue to someone who may not have the same level of technical skill. We are often guilty of making assumptions that everyone who works in IT has read the same books, or has the same interests as us. Learning to explain pen test findings in a clear and concise way is an art form, and one that every security professional should take the time to master. The benefits of doing so are great. You’ll develop a better relationship with your clients, who will want to make use of your services over and over again. You’ll also save time and money, trust me. I once drove a 350 mile round trip to go and explain the contents of a penetration test report to a client. I turned up, read some pages of the report aloud with added explanations and then left fifteen minutes later. Had I taken a tiny bit more time clarifying certain issues in my report, I would have saved an entire day of my time and a whole tank of gas.

Diluted: “SSH version one should be disabled as it contains high severity vulnerabilities that may allow an attacker already on the network to intercept and decrypt communications, although the risk of an attacker gaining access to the network is very low, so this reduces the severity.”

Clarified: “It is advisable to disable SSH version one on these devices, failure to do so could allow an attacker with local network access to decrypt and intercept communications.”

Why is a penetration test report so important?

Never forget, penetration testing is a scientific process, and like all scientific processes it should be repeatable by an independent party. If a client disagrees with the findings of a test, they have every right to ask for a second opinion from another tester. If your report doesn’t detail how you arrived at a conclusion, the second tester will have no idea how to repeat the steps you took to get there. This could lead to them offering a different conclusion, making you look a bit silly and worse still, leaving a potential vulnerability exposed to the world.

Bad: “Using a port scanner I detected an open TCP port”.

Better: “Using Nmap 5.50, a port scanner, I detected an open TCP port using the SYN scanning technique on a selected range of ports. The command line was: nmap -sS -p 7000-8000.”

The report is the tangible output of the testing process, and the only real evidence that a test actually took place. Chances are, senior management (who likely approved funding for the test) weren’t around when the testers came into the office, and even if they were, they probably didn’t pay a great deal of attention. So to them, the report is the only thing they have to go on when justifying the expense of the test. Having a penetration test performed isn’t like any other type of contract work. Once the contract is done, there is no new system implemented and no new code added to an application. Without the report, it’s very hard to explain to someone what exactly they’ve just paid for.

Who is the report for?

While the exact audience of the report will vary depending on the organization, it’s safe to assume that it will be viewed by at least three types of people.

Senior management, IT management and IT technical staff will all likely see the report, or at least part of it. All of these groups will want to get different snippets of information. Senior management simply doesn’t care, or doesn’t understand what it means if a payment server encrypts connections using SSL version two. All they want to know is the answer to one simple question “are we secure – yay or nay?”

IT management will be interested in the overall security of the organization, but will also want to make sure that their particular departments are not the cause of any major issues discovered during testing. I recall giving one particularly damning report to three IT managers. Upon reading it, two of them turned very pale, while the third smiled and said “great, no database security issues then”.

IT staff will be the people responsible for fixing any issues found during testing. They will want to know three things: the name of the system affected, how serious the vulnerability is, and how to fix it. They will also want this information presented to them in a way that is clear and organized. I find the best way is to group this information by asset and severity. So, for example, “Server A” is vulnerable to “Vulnerability X, Y and Z; Vulnerability Y is the most critical”. This gives IT staff half a chance of working through the list of issues in a reasonable timeframe. There is nothing worse than having to work your way backwards and forwards through pages of report output to try and keep track of vulnerabilities and whether or not they’ve been looked at.

Of course, you could always ask your client how they would like vulnerabilities grouped. After all, the test is really for their benefit and they are the people paying! Some clients prefer to have a page detailing each vulnerability, with affected assets listed under the vulnerability title. This is useful in situations where separate teams may all have responsibilities for different areas of a single asset. For example, the systems team runs the webserver, but the development team writes the code for the application hosted on it.
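If your findings come out of your tooling as one flat list, a few lines of scripting will regroup them either way before they go into the report. The sketch below is only an illustration of that idea; the field names and sample findings are invented for the example.

from collections import defaultdict

# Minimal sketch: regroup a flat list of findings by asset, worst severity
# first, so IT staff can work through one system at a time.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

findings = [  # illustrative sample data only
    {"asset": "Server A", "title": "Vulnerability X", "severity": "medium"},
    {"asset": "Server A", "title": "Vulnerability Y", "severity": "critical"},
    {"asset": "Server A", "title": "Vulnerability Z", "severity": "low"},
    {"asset": "Server B", "title": "Vulnerability Y", "severity": "high"},
]

by_asset = defaultdict(list)
for finding in findings:
    by_asset[finding["asset"]].append(finding)

for asset, items in sorted(by_asset.items()):
    print(asset)
    for finding in sorted(items, key=lambda f: SEVERITY_ORDER[f["severity"]]):
        print(f"  [{finding['severity'].upper()}] {finding['title']}")

# To group by vulnerability instead (one page per issue, affected assets
# listed beneath it), key the dictionary on finding["title"] rather than
# finding["asset"].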

Although I’ve mentioned the three most common audiences for pen test reports, this isn’t an exhaustive list. Once the report is handed over to the client, it’s up to them what they do with it. It may end up being presented to auditors, as evidence that certain controls are working. It could be presented to potential customers by the sales team. “Anyone can say their product is secure, but can they prove it? We can, look here is a pen test report”.

Reports might even end up getting shared with the whole organization. It sounds crazy, but it happens. I once performed a social engineering test, the results of which were less than ideal for the client. The enraged CEO shared the report with the whole organization, as a way of raising awareness of social engineering attacks. This was made more interesting, when I visited that same company a few weeks later to deliver some security awareness training. During my introduction, I explained that my company did security testing and was responsible for the social engineering test a few weeks back. This was greeted with angry stares and snide comments about how I’d gotten them all into trouble. My response was, as always, “better to give me your passwords than a genuine bad guy”.

What should the report contain?

Sometimes you’ll get lucky and the client will spell out exactly what they want to see in the report during the initial planning phase. This includes both content and layout. I’ve seen this happen to extreme levels of detail, such as what font size and line spacing settings should be used. However, more often than not, the client won’t know what they want and it’ll be your job to tell them.

So without further ado, here are some highly recommended sections to include in pen test reports.

  • A Cover Sheet. This may seem obvious, but the details that should be included on the cover sheet can be less obvious. The name and logo of the testing company, as well as the name of the client should feature prominently. Any title given to the test such as “internal network scan” or “DMZ test” should also be up there, to avoid confusion when performing several tests for the same client. The date the test was performed should appear. If you perform the same tests on a quarterly basis this is very important, so that the client or the client’s auditor can tell whether or not their security posture is improving or getting worse over time. The cover sheet should also contain the document’s classification. Agree this with the client prior to testing; ask them how they want the document protectively marked. A penetration test report is a commercially sensitive document and both you and the client will want to handle it as such.
  • The Executive Summary. I’ve seen some that have gone on for three or four pages and read more like a Jane Austen novel than an abbreviated version of the report’s juicy bits. This needs to be less than a page. Don’t mention any specific tools, technologies or techniques used; they simply don’t care. All they need to know is what you did, “we performed a penetration test of servers belonging to X application”, and what happened, “we found some security problems in one of the payment servers”. Then what needs to happen next, and why: “you should tell someone to fix these problems and get us in to re-test the payment server; if you don’t, you won’t be PCI compliant and you may get a fine”. The last line of the executive summary should always be a conclusion that explicitly spells out whether the systems tested are secure or insecure, “overall we have found this system to be insecure”. It could even be just a single word.

A bad way to end an executive summary: “In conclusion, we have found some areas where security policy is working well, but other areas where it isn’t being followed at all. This leads to some risk, but not a critical amount of risk.”

A better way: “In conclusion, we have identified areas where security policy is not being adhered to, this introduces a risk to the organization and therefore we must declare the system as insecure.”

  • Summary of Vulnerabilities. Group the vulnerabilities on a single page so that at a glance an IT manager can tell how much work needs to be done. You could use fancy graphics like tables or charts to make it clearer – but don’t overdo it. Vulnerabilities can be grouped by category (e.g. software issue, network device configuration, password policy), severity or CVSS score – the possibilities are endless. Just find something that works well and is easy to understand.

  • Test Team Details. It is important to record the name of every tester involved in the testing process. This is not just so you and your colleagues can be hunted down should you break something. It’s a common courtesy to let a client know who has been on their network and provide a point of contact to discuss the report with. Some clients and testing companies also like to rotate the testers assigned to a particular set of tests. It’s always nice to cast a different set of eyes over a system. If you are performing a test for a UK government department under the CHECK scheme, including the name of the team leader and any team members is a mandatory requirement.
  • List of the Tools Used. Include versions and a brief description of the function. This goes back to repeatability. If anyone is going to accurately reproduce your test, they will need to know exactly which tools you used.

  • A copy of the original scope of work. This will have been agreed in advance, but reprinting it here for reference purposes is useful.
  • The main body of the report. This is what it’s all about. The main body of the report should include details of all detected vulnerabilities, how you detected each vulnerability, clear technical explanations of how the vulnerability could be exploited, and the likelihood of exploitation. Whatever you do, make sure you write your own explanations; I’ve lost count of the number of reports I’ve seen that are simply copy-and-paste jobs from vulnerability scanner output. It makes my skin crawl; it’s unprofessional, often unclear and irrelevant. Detailed remediation advice should also be included. Nothing is more annoying to the person charged with fixing a problem than receiving flakey remediation advice. For example, “Disable SSL version 2 support” does not constitute remediation advice. Explain the exact steps required to disable SSL version 2 support on the platform in question. As interesting as reading how to disable SSL version 2 on Apache is, it’s not very useful if all your servers are running Microsoft IIS. Back up findings with links to references such as vendor security bulletins and CVEs.

Getting the level of detail in a report right is a tricky business. I once wrote a report that was described as “overwhelming” because it was simply too detailed, so on my next test I wrote a less detailed report. This was subsequently rejected because it “lacked detail”. Talk about moving the goalposts. The best thing to do is spend time with the client, learn exactly who the audience will be and what they want to get out of the report.

Final delivery.

When a pilot lands an airliner, their job isn’t over. They still have to navigate the myriad of taxiways and park at the gate safely. The same is true of you and your pen test reports; just because the report is finished doesn’t mean you can switch off entirely. You still have to get the report out to the client, and you have to do so securely. Electronic distribution using public key cryptography is probably the best option, but not always possible. If symmetric encryption is to be used, a strong key should be used and must be transmitted out of band. Under no circumstances should a report be transmitted unencrypted. It all sounds like common sense, but all too often people fall down at the final hurdle.
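For the symmetric option, one possible approach (an assumption on my part, not something prescribed here) is a short Python script using the third-party cryptography package; whatever tool you choose, the key still has to reach the client out of band.

# Minimal sketch: symmetric encryption of a report file before delivery.
# Assumes the third-party 'cryptography' package (pip install cryptography).
# The generated key must be handed to the client out of band, never sent
# alongside the ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # strong random key; transmit out of band
fernet = Fernet(key)

with open("pentest_report.pdf", "rb") as source:
    ciphertext = fernet.encrypt(source.read())

with open("pentest_report.pdf.enc", "wb") as target:
    target.write(ciphertext)

print("Key (deliver out of band):", key.decode())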

CyberSec Tips: E-Commerce – tip details 2 – fake sites

Following on with some more of the tips from an earlier post, originally published here:

The next three tips are pretty straightforward, and should be followed:

  • Don’t click on offers in email.
  • If it sounds too good to be true, don’t fall for it.
  • Don’t fall for fake eBay or PayPal sites.

Good advice all around.  In terms of fake eBay or PayPal sites, check the URLs, if you can see them, or the places you end up.  Often fraudsters will try and register sites with odd variations on the name, such as replacing the lower case letter l in PayPal with a digit 1, which can look similar: paypal.com vs paypa1.com.  Or they will send you to a subdirectory on either a legitimate site (for example, googledocs.com/paypal) or on a straight scam site (frauds.ru/paypal).  Or sometimes the URL is simply a mess of characters.  If the site isn’t pretty clearly the one you want, get out of there.
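If you want to double-check a link programmatically, or demonstrate to someone why the digit-for-letter trick works, a minimal Python sketch along these lines compares the actual hostname against the handful of domains you really mean to visit. The allowlist and helper name are purely illustrative.

from urllib.parse import urlparse

# Minimal sketch: check that a link's hostname really is one of the sites you
# intend to visit. The allowlist below is illustrative, not exhaustive.
TRUSTED_DOMAINS = {"paypal.com", "ebay.com"}

def looks_legitimate(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept the domain itself or a subdomain of it, nothing else.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(looks_legitimate("https://www.paypal.com/signin"))   # True
print(looks_legitimate("https://www.paypa1.com/signin"))   # False (digit 1)
print(looks_legitimate("https://frauds.ru/paypal"))        # False (path trick)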

CyberSec Tips: Malware – advice for the sysadmin

This is possibly a little out of line with what I’m trying to do with the series.  This advice is aimed a little higher than the home user, or small business operator with little computer experience.  Today I got these questions from someone with an advanced computer background, and solid security background, but no malware or antivirus experience.  I figured that this might apply to a number of people out there, so here was my advice:

 

> Question 1: What is the best way to obtain some good virus samples to
> experiment with in a clean-room environment?

Just look for anything large in your spam filters  :-)

> What I see doing is setting up a VM that is connected to an isolated
> network (with no connection to any other computer or the internet except
> for a computer running wireshark to monitor any traffic generated by the
> virus/malware).

VMs are handy when you are running a wholesale sample gathering and analysis operation, but for a small operation I tend not to trust them.  You might try running Windows under a Mac or Linux box, etc.  Even then, some of the stuff is getting pretty sneaky, and some specifically target VMs.  (I wonder how hard it would be to run Windows in a VM under iOS on ARM?)

> Also, any other particular recommendations as to how to set up the
> clean-room environment?

I’m particularly paranoid, especially if you haven’t had a lot of background in malware, so I’d tend to recommend a complete airgap, with floppies.  (You can still get USB 3 1/2″ floppy drives.)  CDs might be OK, but USB drives are just getting too complex to be sure.

> Question 2: What products are recommended for removing viruses and malware
> (i.e. is there a generic disinfector program that you recommend)?

I wouldn’t recommend a generic for disinfection.  For Windows, after the disaster of MSAV, MSE is surprisingly good, and careful, unlikely to create more problems than it solves.  I like Avast these days: even the free version gives you a lot of control, although it seems to be drifting into the “we know what’s best for you” camp.  And Sophos, of course, is solid stuff, and has been close to the top of the AV heap for over two decades.  F-Secure is good, although they may be distracted by the expansion they are doing of late.  Kaspersky is fine, though opinionated.  Eset has long had an advantage in scanning speed, but it does chew up machine cycles when operating.

Symantec/Norton, McAfee, and Trend have always had a far larger share of the market than was justified by their actual products.

As always, I recommend using multiple products for detection.

> I assume the preferred approach is to boot the suspect computer from USB
> and to run the analysis/disinfection software from the USB key (i.e. not to boot
> the infected computer until it has been disinfected).

A good plan.  Again, I might recommend CD/DVD over USB keys, but, as long as you are careful that the USB drive is clean …

> Question 3: How/when does one make the decision to wipe the hard drive and
> restore from backup rather than attempt to remove the malware?

If you have an up-to-date backup, that is always preferred when absolute security is the issue.  However, the most common malware is going to be cleanable fairly easily.  (Unless you run into some of the more nasty ransomware.)

Pushing backup, and multiple forms of backup, on all users and systems, is a great idea for all kinds of problems.  I’ve got a “set and forget” backup running to a USB drive that automatically updates any changes about every fifteen minutes.  And every couple of days I make a separate backup (and I have different USB drives I do it to) of all data files–which I then copy on to one of the laptops.  I just use an old batch file I created, which replaces any files with newer versions.  (Since it doesn’t delete anything I don’t change, it also means I have recovery possibilities if I make a mistake with deleting anything, and, by using multiple drives, I can rotate them for offsite storage, and even have possibilities of recovering old versions.)
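For anyone who wants the same behaviour without digging up an old batch file, a minimal Python sketch of that “copy anything newer, never delete” approach might look like the following; the source and backup paths are placeholders.

import shutil
from pathlib import Path

# Minimal sketch of the "copy anything newer, never delete" backup described
# above. Source and destination paths are placeholders.
SOURCE = Path("C:/Users/me/Data")
BACKUP = Path("E:/Backup/Data")

def backup_newer(src: Path, dst: Path) -> None:
    for item in src.rglob("*"):
        if not item.is_file():
            continue
        target = dst / item.relative_to(src)
        # Copy if the file is missing from the backup or the source is newer.
        if not target.exists() or item.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)  # copy2 preserves timestamps

if __name__ == "__main__":
    backup_newer(SOURCE, BACKUP)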

> Question 4: Any recommended books or other guides to this subject matter?

Haven’t seen anything terrifically useful recently, unfortunately.  David Harley and I released “Viruses Revealed” as public domain a few years back, but it’s over ten years old.  (We released it about the time a vxer decided to upload it to http://vxheavens.com/lib/ars08.html  He probably thought he was hurting our sales, but we figured he was doing us a favour  :-)