Vulnerability Remediation: No More Traffic Signals

Thursday, March 22, 2012

Ed Bellis


Red, Yellow, Ugh...

I have been frustrated by the state of prioritization in security for several years.

I recently wrote about how a data-driven approach can help prioritize remediation when there are a large number of issues to contend with. It seems that much of the industry got together years ago and decided we could drop millions of issues into buckets of red, yellow and green.

Simple. Now all I have to do is start with all the red issues and fix those right away; after all, they are RED!

The problem with this approach, once you dig into the issues, is that prioritization can be complex and involve a lot of different factors.

Adding to the complexity, those factors are often different from organization to organization.

I am all for breaking things down to their simplest parts, but this approach does so by obfuscating the complex factors that go into prioritization, NOT by eliminating them.

Information Security Needs a Decisioning System

Let's start with a seemingly simple example from a world I know well.

What should we know about a defect or vulnerability to help guide how we prioritize remediation? Here are a few things, directly off the top of my head...

  • Exploitability
    • How easy is it to exploit this vulnerability?
    • Are there publicly available exploits through Metasploit, Core or ExploitDB?
    • What is the access vector? Does it require a multi-vector attack? Is it behind authentication or an additional control thus reducing the attack surface?
  • Asset Affected
    • How do we value this particular asset?
    • What type of data is stored or processed by this asset?
    • Is the asset part of a larger system (see Business Process Affected)?
    • Are there specific confidentiality, integrity or availability requirements tied to this asset?
    • Are there compliance requirements or additional SLAs for this asset?
  • Network Location
    • Is the vulnerability/asset publicly available?
    • Does it sit within a DMZ or core network?
    • What additional assets or systems can a threat agent access from here (see Multi-Vector Attacks)?
  • Business Processes Affected
    • Is the asset above part of an important business process?
    • Does exploiting this vulnerability interrupt or compromise this process? (CIA)
  • Number of Users Affected
    • How many users are affected by an exploit of this vulnerability?
    • Is the attack directly against users of the application?
  • Discoverability
    • How easy is it to discover this vulnerability? (through automated tools, specialized skills, etc)

There are likely several more, including some unique to your company, but let's use this quick list for brevity's sake.
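
To make this concrete, here is a minimal sketch in Python of how these factors might be captured per vulnerability, so they can be queried and scored rather than collapsed into a single color. The field names and 0-to-1 scales are my own placeholders, not any standard.

    # A minimal sketch of a per-vulnerability record covering the factors above.
    # Field names and value scales are illustrative placeholders, not a standard.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VulnRecord:
        title: str
        exploitability: float        # 0.0 (hard) to 1.0 (trivial, e.g. public exploit available)
        discoverability: float       # 0.0 (specialized skills) to 1.0 (automated tools find it)
        asset_value: float           # 0.0 to 1.0, how the business values the affected asset
        network_exposure: float      # 0.0 (isolated) to 1.0 (publicly reachable, no auth in front)
        users_affected: int          # rough count of users exposed by a successful exploit
        business_processes: List[str] = field(default_factory=list)    # e.g. ["billing"]
        compliance_tags: List[str] = field(default_factory=list)       # e.g. ["PCI", "SLA"]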

A Simple Example

So let's apply these criteria to a single example. Vulnerability: Persistent XSS on the public web site www.foo.com.

  • Cross-site scripting issues can vary quite a bit, but we'll call this one trivial to exploit. While there isn't a publicly available exploit, since this is a custom web application, it can be exploited with a small amount of skill and a browser.
  • www.foo.com is our public-facing web site. It doesn't process much data and mostly serves an informational purpose. It's important for our sales, marketing and public relations.
  • Our public site is available directly and not behind authentication. There are several systems within the DMZ that can be accessed from www.foo.com. Some of these systems include processors of "confidential" but not "sensitive" information.
  • The primary business processes associated with the site are public relations and marketing.
  • Our public site receives millions of unique visits per month and the exploit of this vulnerability can directly attack these visitors. The payloads can vary but assume the worst.
  • Discovering this cross site scripting issue is trivial and can be done through automated tools or manually via a web browser.

In this simple example we start to get a feel for how serious (or not) this vulnerability is. Just by running through this off-the-cuff list of decisioning factors, I can see how it could result in an attack against a large number of our users, and the likelihood is fairly high given the lack of skill needed to both discover and exploit this defect.
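
Continuing the sketch above (same hypothetical fields and 0-to-1 scales), this XSS might be recorded roughly as follows; the numbers are my own illustrative judgments, not output from any scoring standard.

    # The persistent XSS example expressed in the hypothetical record from above.
    xss_on_foo = VulnRecord(
        title="Persistent XSS on www.foo.com",
        exploitability=0.9,        # trivial: a browser and a small amount of skill
        discoverability=0.9,       # found by automated scanners or manual browsing
        asset_value=0.5,           # informational site, but key to sales, marketing and PR
        network_exposure=1.0,      # public, no authentication in front of it
        users_affected=1_000_000,  # millions of unique visitors per month
        business_processes=["public relations", "marketing"],
        compliance_tags=[],
    )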

We Can Be More Quant Than This

Have I oversimplified this? You bet. There are likely several other factors that drive prioritization here, including competing with other priorities (opportunity costs). I've also simplified this down to a qualitative decision, but there's no reason why it can't be more quantitative.
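
As one rough illustration of that quantitative direction, a first pass might be a simple weighted score over the hypothetical record sketched earlier; the weights below are arbitrary placeholders, and a real model would derive them from your own environment and loss data.

    import math

    # Arbitrary illustrative weights; tune these to your own organization.
    WEIGHTS = {
        "exploitability": 0.30,
        "discoverability": 0.15,
        "asset_value": 0.20,
        "network_exposure": 0.20,
        "user_reach": 0.15,
    }

    def priority_score(v: VulnRecord) -> float:
        """Return a 0-to-1 priority score; higher means remediate sooner."""
        # Squash the raw user count onto 0-to-1 so one factor cannot dominate.
        user_reach = 1 - math.exp(-v.users_affected / 100_000)
        return (
            WEIGHTS["exploitability"] * v.exploitability
            + WEIGHTS["discoverability"] * v.discoverability
            + WEIGHTS["asset_value"] * v.asset_value
            + WEIGHTS["network_exposure"] * v.network_exposure
            + WEIGHTS["user_reach"] * user_reach
        )

    print(round(priority_score(xss_on_foo), 2))  # about 0.85 for the XSS example above

Ranking by a score like this, instead of by color, lets two "red" issues land far apart in the queue for reasons you can actually explain.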

My point here is that even a short, simple, off-the-cuff list can bring a lot more relevant factors to my remediation priorities than red, yellow and green.

Cross-posted from the Risk I/O blog

Arjen de Landgraaf
Yes, and we (Bricade) are a vulnerability alerting service that uses color indication for criticality level, so that for any vulnerability our subscribers can instantly see what to look at first, based on what they consider important for their own environment. Red has the higher priority, and the alerting specifics are set per subject, based on each subscriber's own infrastructure and environment.

Then, next to that, a generic CVSS value is added (often from NIST) the moment it becomes available. As a subscriber tool, they can also calculate their own CVSS score; see http://www.first.org/cvss. An example of a (free) calculator can be found at https://www.bricade.com/calc4.asp.

In short, a simple off-the-cuff risk calculation using CVSS can bring a lot more relevant factors to your remediation priorities IN ADDITION TO red, yellow and green as an immediate first risk indication.
Phil Agcaoili
It sounds like you're proposing many elements of what Pete Herzog has already defined in the Open Source Security Testing Methodology Manual (OSSTMM).

There is a disconnect today between vulnerability data, incident data, and GRC. I believe that all of these worlds will collide and provide much better evidence-based security coupled with applicable risk models.