Sec Tools

REVIEW: “Debug It: Find, Repair, and Prevent Bugs in Your Code”, Paul Butcher

BKDEBGIT.RVW   20130122

“Debug It: Find, Repair, and Prevent Bugs in Your Code”, Paul Butcher, 2009, U$34.95/C$43.95, 978-1-93435-628-9
%A   Paul Butcher paul@paulbutcher.com
%C   2831 El Dorado Pkwy, #103-381 Frisco, TX 75033
%D   2009
%G   978-1-93435-628-9 1-93435-628-X
%I   Pragmatic Bookshelf
%O   U$34.95/C$43.95 sales@pragmaticprogrammer.com 800-699-7764
%O  http://www.amazon.com/exec/obidos/ASIN/193435628X/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/193435628X/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/193435628X/robsladesin03-20
%O   Audience n- Tech 2 Writing 1 (see revfaq.htm for explanation)
%P   214 p.
%T   “Debug It: Find, Repair, and Prevent Bugs in Your Code”

The preface states that there are lots of books on the market that teach development, and few that teach debugging.  (In my experience, a great many development books include debugging advice, so I’m not sure where the author’s perception comes from.)  The work is structured around a core method for debugging: reproduce, diagnose, fix, and reflect.

Part one presents the basic technique.  Chapter one repeats the description of this core method.  Chapter two encourages the reproduction of the bug.  (This can be more complex than the author lets on.  I have a netbook with some bug in the hibernation function.  Despite constant observation over a period of three and a half years, I’ve yet to find a combination of conditions that reproduces the failure, nor one that prevents it.)  Some of the suggestions given are useful, if pedestrian, while others are pretty pointless.  (Butcher does not address the rather thorny issue of using “real” data for testing.)  In terms of diagnosis, in chapter three, there is limited description of process, but lots of good tips.  The same is true of fixing, in chapter four.  (I most definitely agree with the recommendation to fix underlying causes, rather than effects.)  Reflection, the topic of chapter five, is limited to advice that the problem be considered even after you’ve fixed it.

Part two explores the larger picture.  Chapter six examines bug tracking systems, and eliciting feedback from users and support staff.  Chapter seven advises on addressing bugs, but concentrates on “fix early,” with little discussion of priorities or ranking systems.

Part three, entitled “Debug Fu,” turns to related and side issues.  The “Special Cases” in chapter eight seem to be fairly common: software already released, compatibility issues, and “heisenbugs” that disappear when you try to track them.  Chapter nine, on the ideal debugging environment, is about as practical as most such exercises.  “Teach Your Software to Debug Itself” in chapter ten seems confined to a few specific cases.  Chapter eleven notes some common problems in development teams and structures.

The advice in the book is good and solid, but not surprising to anyone with experience.  Novices who have not considered debugging much will find it useful.

copyright, Robert M. Slade   2013   BKDEBGIT.RVW   20130122

The Common Vulnerability Scoring System

Introduction

This article presents the Common Vulnerability Scoring System (CVSS) Version 2.0, an open framework for scoring IT vulnerabilities. It introduces the metric groups, then describes the base metrics, the base vector, and scoring. Finally, an example is provided to show how it works in practice. For a more in-depth look at scoring vulnerabilities, check out the ethical hacking course offered by the InfoSec Institute.

Metric groups

There are three metric groups:

I. Base (describes the fundamental characteristics of the vulnerability: its exploitability and impact).
II. Temporal (takes time into account when the severity of the vulnerability is assessed; for example, severity decreases when an official patch becomes available).
III. Environmental (takes environmental issues into account when the severity of the vulnerability is assessed; for example, the more systems affected by the vulnerability, the higher the severity).

This article focuses on the base metrics. Please read A Complete Guide to the Common Vulnerability Scoring System Version 2.0 if you are interested in the temporal and environmental metrics.

Base metrics

There are exploitability and impact metrics:

I. Exploitability

a) Access Vector (AV) describes how the vulnerability is exploited:
– Local (L)—exploited only locally
– Adjacent Network (A)—adjacent network access is required to exploit the vulnerability
– Network (N)—remotely exploitable

The more remote the attack, the more severe the vulnerability.

b) Access Complexity (AC) describes how complex the attack is:
– High (H)—a series of steps needed to exploit the vulnerability
– Medium (M)—neither complicated nor easily exploitable
– Low (L)—easily exploitable

The lower the access complexity, the more severe the vulnerability.

c) Authentication (Au) describes the authentication needed to exploit the vulnerability:
– Multiple (M)—the attacker needs to authenticate at least two times
– Single (S)—one-time authentication
– None (N)—no authentication

The lower the number of authentication instances, the more severe the vulnerability.

II. Impact

a) Confidentiality (C) describes the impact of the vulnerability on the confidentiality of the system:
– None (N)—no impact
– Partial (P)—data can be partially read
– Complete (C)—all data can be read

The more affected the confidentiality of the system is, the more severe the vulnerability.

b) Integrity (I) describes the impact of the vulnerability on the integrity of the system:
– None (N)—no impact
– Partial (P)—data can be partially modified
– Complete (C)—all data can be modified

The more affected the integrity of the system is, the more severe the vulnerability.

c) Availability (A) describes the impact of the vulnerability on the availability of the system:
– None (N)—no impact
– Partial (P)—interruptions in the system’s availability or reduced performance
– Complete (C)—the system is completely unavailable

The more affected the availability of the system is, the more severe the vulnerability.

Please note the abbreviated metric names and values in parentheses. They are used in the base vector description of the vulnerability (explained in the next section).

Base vector

Let’s discuss the base vector. It is presented in the following form:

AV:[L,A,N]/AC:[H,M,L]/Au:[M,S,N]/C:[N,P,C]/I:[N,P,C]/A:[N,P,C]

This is an abbreviated description of the vulnerability that brings together its base metrics and their values. The brackets list the possible values for each base metric; the evaluator chooses one value for every metric.
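
As a side note, the vector format is easy to handle programmatically. Here is a minimal Python sketch (an illustration only; the function name is my own, and parsing is not part of the specification) that splits a base vector into metric/value pairs:

    def parse_base_vector(vector):
        # e.g. "AV:N/AC:M/Au:N/C:N/I:P/A:C" (parentheses optional)
        vector = vector.strip("()")
        return dict(part.split(":") for part in vector.split("/"))

    # parse_base_vector("AV:N/AC:M/Au:N/C:N/I:P/A:C")
    # -> {'AV': 'N', 'AC': 'M', 'Au': 'N', 'C': 'N', 'I': 'P', 'A': 'C'}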

Scoring

The formulas for the base score and the exploitability and impact subscores are given in A Complete Guide to the Common Vulnerability Scoring System Version 2.0 [1]. However, there is no need to do the calculations manually. There is a Common Vulnerability Scoring System Version 2 Calculator available. The only thing the evaluator has to do is assign metric values to metric names.
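
That said, the base equations are short enough to transcribe. The following Python sketch hardcodes the metric weights and equations published in the v2 guide; treat it as an illustration, not a replacement for the official calculator:

    # CVSS v2 metric weights, as published in the v2 guide
    AV  = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
    AC  = {"H": 0.35,  "M": 0.61,  "L": 0.71}   # Access Complexity
    Au  = {"M": 0.45,  "S": 0.56,  "N": 0.704}  # Authentication
    CIA = {"N": 0.0,   "P": 0.275, "C": 0.660}  # C/I/A impact weights

    def cvss2_base(av, ac, au, c, i, a):
        exploitability = 20 * AV[av] * AC[ac] * Au[au]
        impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
        f_impact = 0.0 if impact == 0 else 1.176
        base = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact
        return round(exploitability, 1), round(impact, 1), round(base, 1)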

Severity level

The base score is derived from the exploitability and impact subscores; it ranges from 0 to 10, where 10 means the highest severity. However, CVSS v2 doesn’t transform the score into a severity level. One can use, for example, the FortiGuard severity levels to obtain this information:

FortiGuard severity level     CVSS v2 score
Critical                      9 – 10
High                          7 – 8.9
Medium                        4 – 6.9
Low                           0.1 – 3.9
Info                          0
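
If the mapping is needed in code, the table translates directly into a small Python helper (these thresholds are FortiGuard’s, not part of CVSS itself, and the function name is mine):

    def fortiguard_level(score):
        if score >= 9.0:
            return "Critical"
        if score >= 7.0:
            return "High"
        if score >= 4.0:
            return "Medium"
        if score > 0.0:
            return "Low"
        return "Info"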

Putting the pieces together

An example vulnerability in a web application is provided to better understand how the Common Vulnerability Scoring System Version 2.0 works in practice. Please keep in mind that this framework is not limited to web application vulnerabilities.

A cross-site request forgery in an admin panel allows adding a new user and deleting an existing user or all users.

Let’s first analyze the base metrics, together with the resulting base vector:

Access Vector (AV): Network (N)
Access Complexity (AC): Medium (M)
Authentication (Au): None (N)

Confidentiality (C): None (N)
Integrity (I): Partial (P)
Availability (A): Complete (C)

Base vector: (AV:N/AC:M/Au:N/C:N/I:P/A:C)

Explanation: The admin has to visit the attacker’s website for the vulnerability to be exploited; that’s why the access complexity is medium. The attacker’s website is somewhere on the Internet, so the access vector is network. No authentication is required to exploit this vulnerability (the admin only has to visit the attacker’s website). The attacker can delete all users, making the system unavailable for them; that’s why the impact of the vulnerability on the system’s availability is complete. Deleting all users doesn’t delete all data in the system, so the impact on integrity is partial. Finally, there is no impact on the confidentiality of the system, provided that the added user doesn’t have read permissions by default.

Let’s use the Common Vulnerability Scoring System Version 2 Calculator to obtain the subscores (exploitability and impact) and base score:

Exploitability subscore: 8.6
Impact subscore: 7.8
Base score: 7.8
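
For what it’s worth, the Python sketch from the Scoring section reproduces these numbers for this vector:

    # cvss2_base("N", "M", "N", "N", "P", "C")
    # -> (8.6, 7.8, 7.8)  (exploitability, impact, base score)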

Let’s transform the score into a severity level according to FortiGuard severity levels:

FortiGuard severity level: High

Summary

This article described an open framework for scoring IT vulnerabilities—the Common Vulnerability Scoring System (CVSS) Version 2.0. The base metrics, base vector, and scoring were presented. One way of transforming CVSS v2 scores into severity levels (FortiGuard severity levels) was described. Finally, an example was discussed to show how all these pieces work in practice.

Dawid Czagan is a security researcher for the InfoSec Institute and the Head of Security Consulting at Future Processing.

Fuzzing Samsung Kies

Android fuzzing is always fun – it seems that whenever we fuzz an Android app, it crashes within seconds.

Samsung Kies was no different. With the help of the talented Juan Yacubian (who built the Kies module in no time), we launched beSTORM against Kies… and saw it crash in a record 23 seconds (just over 1,100 attack combinations).

Next on the agenda: install gdb for Android and build the proper payload.

[Screenshot: Samsung Kies crash]

REVIEW: “Security and Privacy for Microsoft Office 2010 Users”, Mitch Tulloch

BKSCPRO2.RVW   20121122

“Security and Privacy for Microsoft Office 2010 Users”, Mitch Tulloch,
2012, 0735668833, U$9.99
%A   Mitch Tulloch info@mtit.com www.mtit.com
%C   1 Microsoft Way, Redmond, WA   98052-6399
%D   2012
%G   0735668833
%I   Microsoft Press
%O   U$9.99 800-MSPRESS fax: 206-936-7329 mspinput@microsoft.com
%O  http://www.amazon.com/exec/obidos/ASIN/0735668833/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/0735668833/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0735668833/robsladesin03-20
%O   Audience n- Tech 1 Writing 1 (see revfaq.htm for explanation)
%P   100 p.
%T   “Security and Privacy for Microsoft Office 2010 Users”

Reducing the complex jargon in the introduction to its simplest terms, this book is intended to allow anyone who uses the Microsoft Office 2010 suite, or the online Office 365, to effectively employ the security functions built into the software.  Chapter one purports to present the “why” of security, but does a very poor job of it.  Company policy is presented as a kind of threat to the employee, and this does nothing to ameliorate the all-too-common perception that security is there simply to make life easier for the IT department, while it makes work harder for everyone else.

Chapter two examines the first security function, called “Protected View.”  The text addresses issues of whether or not you can trust a document created by someone else, and mentions trusted locations.  (Trusted locations seem simply to be defined as a specified directory on your hard drive, and the text does not discuss whether merely moving an unknown document into this directory will magically render it trustworthy.  Also, the reader is told how to set a trusted location, but not an area for designating untrusted files.)  Supposedly “Protected View” will automatically restrict access to, and danger from, documents you receive from unknown sources.  Unfortunately, having used Microsoft Office 2010 for a couple of years, and having received, in that time, hundreds of documents via email and from Web sources, I’ve never yet seen “Protected View,” so I’m not sure how far I can trust what the author is telling me.  (In addition, Tulloch’s discussion of viruses contains numerous errors: Concept came along five years before Melissa, and some of the functions he attributes to Melissa are, in fact, from the CHRISTMA exec over a decade earlier.)

Preparation of policy is promised in chapter three, but this isn’t what most managers or security professionals would think of as policy: it is just the provision of a function for change detection or digital signatures.  It also becomes obvious, at this point, that Microsoft Office 2010 and Office 365 can have significantly different operations.  The material is quite confusing, with references to a great many programs that are not part of the two (2010 and 365) MS Office suites.

Chapter four notes the possibility of encryption with a password, but the discussion of rights is unclear, and a number of steps are missing.

An appendix lists pointers to a number of references at Microsoft’s Website.

The utility of this work is compromised by the fact that it provides instructions for functions, but doesn’t really explain how, and in what situations, the functions can assist and protect the user.  Any employee using Microsoft Office will be able to access the operations, but without understanding the concepts they won’t be able to take advantage of what protection they offer.

copyright, Robert M. Slade   2012     BKSCPRO2.RVW   20121122