Asset categorization (or, why I like CVSS)

A security group *must* know the value of the assets it is protecting. Ideally, you determine this value *before* designing your security infrastructure. You cannot design an optimized security architecture without first defining your critical assets…yet I see it happen all the time. Security gets worked in on the back end. That’s a problem.

In a similar vein, vulnerability scanners are great tools when deployed at the right time and used correctly. However, a vulnerability scanner cannot tell you the monetary worth of the system it has just scanned. I’ve seen too many companies crank up Nessus, run a scan of an entire /16 block, and then remediate from the top of the report to the bottom. Again, that’s a problem.

So, how does that tie into CVSS? Well, CVSS is a system for assigning a numeric value to a specific flaw. A number of factors go into determining this value; however, the end result is a score between 0.0 and 10.0. That score, coupled with the asset value, gives you a clearly defined remediation priority: multiply the asset value by the CVSS score. Presto! You have a prioritized list to give to your Compliance team.
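
To make that concrete, here is a minimal sketch in Python; the hosts, asset values, and scores below are invented purely for illustration:

```python
# Hypothetical scan findings: (host, asset value 1-10, CVSS base score).
findings = [
    ("mail-server", 8, 5.0),    # medium-severity flaw on a valuable host
    ("workstation", 1, 9.3),    # critical flaw on a low-value host
    ("web-server", 10, 4.3),
]

# Priority = asset value * CVSS score; remediate the highest first.
for host, value, cvss in sorted(findings, key=lambda f: f[1] * f[2], reverse=True):
    print(f"{host}: priority {value * cvss:.1f}")
```

Note how the 9.3 on the low-value workstation drops below the more modest flaws on the high-value servers.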

Dmitry

  • jsk

    Hmmm… why is mitigating from the top down, from high to low vulnerabilities, a problem? I agree that weight should be given to your more critical assets; however, a high vulnerability on any system in a secure environment, even if the system itself is not critical, is a problem worth mitigating. I mainly take this point of view because of the age-old mindset that you are only as strong as your weakest link.

  • http://blogs.securiteam.com/index.php/archives/author/mattmurphy/ Matthew Murphy

    The reason that mindset is flawed is that Nessus and other scanners fail to assess not only the value of the system they’re scanning, but also its exposure.

    Which is the greater vulnerability:

    1) A remotely-exploitable vulnerability in a corporate workstation behind the firewall that allows root privileges.

    OR

    2) A remotely-exploitable vulnerability in a mail server that allows unprivileged access.

    #1 is certainly dangerous, because it allows for subversion of internal accounts and potential elevation of rights. However, the type of system that would suffer from it isn’t exposed to attack from the internet.

    #2 is (IMO) more dangerous, because mail servers handle a much greater amount of potentially sensitive data and they’re more exposed.

    Therefore, even your “weakest link” strategy requires an assessment of which systems are more critical to overall network security. That’s because systems that are less vulnerable but more exposed to attack are just as likely (if not more likely) to fall victim to an attacker.

    Risk numbers for individual vulnerabilities are one part of the compliance management issue, but they’re severely over-emphasized.
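
    As a rough sketch of that kind of exposure-weighted ranking in Python (the hosts, weights, and numbers below are made up, just to illustrate the idea):

    ```python
    # Made-up exposure weights: exposed systems count for more than internal ones.
    EXPOSURE = {"internet-facing": 3, "dmz": 2, "internal": 1}

    systems = [
        # (host, asset value, exposure zone, vulnerability severity 0-10)
        ("workstation", 2, "internal", 10.0),        # root-level flaw, but internal
        ("mail-server", 8, "internet-facing", 5.0),  # unprivileged access, but exposed
    ]

    for host, value, zone, severity in systems:
        risk = value * EXPOSURE[zone] * severity
        print(f"{host}: risk {risk:.0f}")

    # mail-server (8 * 3 * 5.0 = 120) outranks workstation (2 * 1 * 10.0 = 20).
    ```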

  • http://www.whiteacid.org WhiteAcid

    My network security lecturer has always taught us to protect something by taking into account the value of what you are protecting. Hence credit card details take higher precedence than some trivial accidental site-structure disclosure.

    I think being able to notice a link in a chain (for instance, a mail server) is simply a skill required when assessing the value of something.

    I don’t think he’d agree to having lax security in one aspect of a system if a breach there could affect another aspect, though. He would treat the entire system as one entity and secure the whole thing accordingly.

    One problem with this is script kiddies. While there may be no monetary gain in hacking something, because the effort outweighs the value of the information found, script kiddies will do it anyway for the fuzzy feeling they get.

    Mine/his 2 cents.

  • http://www.BeyondSecurity.com noam

    A much better approach would be to assign values to your assets, say from 1-10, assign values of 1, 2, and 4 to vulnerabilities (Low, Medium, and High), and then, when giving the results to the user, show the computed number.

    i.e. your mail server would be an 8, your web server a 10, and your desktop a 1; therefore a high on your desktop would be calculated as a 4, a medium on your web server as a 20, and a high on your mail server as a 32 (sketched in code below).

    This can help you prioritize, but it won’t help you solve the problems, which is what you need to be working on… and which is what the title is talking about. However, I don’t agree with what the article states, that CVSS is the answer.
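
    A minimal sketch of that computation in Python, using the example values above:

    ```python
    # Severity buckets (Low/Medium/High -> 1/2/4) multiplied by asset value (1-10).
    SEVERITY = {"low": 1, "medium": 2, "high": 4}
    ASSET_VALUE = {"mail server": 8, "web server": 10, "desktop": 1}

    def score(asset, severity):
        """Weighted number shown to the user alongside each finding."""
        return ASSET_VALUE[asset] * SEVERITY[severity]

    print(score("desktop", "high"))       # 1 * 4  = 4
    print(score("web server", "medium"))  # 10 * 2 = 20
    print(score("mail server", "high"))   # 8 * 4  = 32
    ```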

  • jsk

    I guess I am used to doing things on a much broader scale. Where I am now, when I do an assessment, say of everything in the DMZ, we produce a report that goes to the system administrators of all systems in the DMZ. I couldn’t care less if it is a jump box or a mail server. IF it is IN the DMZ, then it had better be hardened as much as possible.

    I do agree that all systems on an intranet should be ranked. For us, we tend to spend more energy on the business-critical systems, which mostly reside in an enclave or DMZ to begin with.

    When I do a full site assessment, a separate report is created for each application or area of responsibility. Where I am, the person responsible for desktops/workstations is not the same as the guy responsible for Windows servers.

    I do agree that if I were doing a full company assessment, and the roles of the IT group were not separated, then it would be best to weight the vulnerabilities by the criticality and sensitivity of the system.

    jsk