Vulnerabilities. While managing software and infrastructure vulnerabilities isn’t likely to make it onto anyone’s “Top 10 Sexiest InfoSec Projects” countdown (the SANS Critical Security Controls notwithstanding), all organizations have vulnerabilities and all have processes of various levels of (im)maturity for managing them. Given this, it took me by surprise when a throwaway Twitter post of mine on the topic attracted a bit of attention. I’ll take a moment to describe the analysis project underway at my organization, our goals, and what we’re bringing to bear on this problem.

I’ve spoken at other venues about how my organization has been developing a better approach for managing the never-ending flood of vulnerabilities in the face of constrained remediation resources. Our current approach derives several key metrics from elements such as: (1) the locations of network-based threat sources, (2) the amount and variety of data contained on our various assets, and (3) the known weaknesses present in our environment. With these factors, we leverage a variety of both open and commercial analysis tools to inform our risk posture, show where our biggest opportunities lie, and tell us whether or not we are, at a high level, winning.
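To make that a bit more concrete, here’s a minimal sketch of the kind of factor-based scoring described above. The weights, field names, and the idea of rolling everything into a single per-asset number are illustrative assumptions on my part, not our actual model:

```python
# Hypothetical sketch: combine asset exposure, data sensitivity, and known
# weaknesses into a single per-asset risk score. Weights and inputs are
# illustrative assumptions, not a production model.

def asset_risk_score(threat_exposure, data_sensitivity, vuln_severities,
                     w_exposure=0.4, w_data=0.3, w_vulns=0.3):
    """Return a 0-10 risk score for a single asset.

    threat_exposure  -- 0-10, how reachable the asset is from threat sources
    data_sensitivity -- 0-10, amount/variety of sensitive data on the asset
    vuln_severities  -- list of 0-10 severity scores for known weaknesses
    """
    # Use the worst known weakness as the vulnerability factor (assumption).
    vuln_factor = max(vuln_severities) if vuln_severities else 0.0
    score = (w_exposure * threat_exposure
             + w_data * data_sensitivity
             + w_vulns * vuln_factor)
    return round(score, 1)


if __name__ == "__main__":
    # Internet-facing web server holding customer data with one critical vuln
    print(asset_risk_score(9.0, 8.0, [9.8, 5.0, 3.1]))  # -> 8.9
```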

A weakness in this process is our reliance on CVSS scores as the key indicator of a vulnerability’s severity. Michael Roytman, Ed Bellis, and the rest of the fine folks at Risk I/O have written, presented, and run webinars extensively on the limitations of CVSS. I’ll encourage the reader to check out some of their work for a full discussion of these issues and simply state that CVSS appears to be a poor indicator of how likely a vulnerability is to be used by an attacker. The Heartbleed bug is an excellent example of a vuln creating a ZOMG level of concern in the community, yet it rated only a snoozer of a CVSS score of 5.0.
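To see how a flaw like that lands at 5.0, here’s a quick back-of-the-envelope computation using the CVSS v2 base equation and Heartbleed’s published vector (AV:N/AC:L/Au:N/C:P/I:N/A:N). The constants come from the CVSS v2 specification; the code itself is just my illustration:

```python
# CVSS v2 base score for Heartbleed's vector (AV:N/AC:L/Au:N/C:P/I:N/A:N),
# computed with the base equation from the CVSS v2 specification.

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Metric values: AV:Network=1.0, AC:Low=0.71, Au:None=0.704,
# C:Partial=0.275, I:None=0.0, A:None=0.0
print(cvss2_base(1.0, 0.71, 0.704, 0.275, 0.0, 0.0))  # -> 5.0
```

Remotely exploitable with no authentication required, but because the formula only credits a partial confidentiality impact and no integrity or availability impact, the bug that panicked the entire Internet comes out “medium.”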

I’m currently working with a commercial data feed partner to match up our vulnerability corpus with a variety of data points that extend beyond the traditional NVD. First up are simple indicators of the ‘Metasploitability’ of a given vulnerability, namely whether or not it is known to have attack code written for it in private or public attack frameworks such as Metasploit, Canvas, and Core Impact. My intention is to use these indicators to create our own custom severity rankings and feed those rankings into our analytics process to generate new risk scores and prioritization. With this information I can run a cost-benefit analysis of re-prioritizing our remediation efforts to produce markedly better improvements in our risk posture. Doing more with less (or the same)…what a concept!
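As an illustration of what that re-ranking might look like, here’s a hedged sketch. It assumes each vulnerability record carries a CVSS base score plus boolean exploit-availability flags; the field names, the 1.5x multiplier, and the second (placeholder) CVE are my inventions, not the feed’s schema or our final weighting:

```python
# Hypothetical re-ranking sketch: boost a vulnerability's priority when
# weaponized exploit code is known to exist. Field names ('cvss',
# 'in_metasploit', 'in_commercial_framework') and the multiplier are
# illustrative assumptions.

def custom_severity(vuln):
    score = vuln.get("cvss", 0.0)
    if vuln.get("in_metasploit") or vuln.get("in_commercial_framework"):
        score *= 1.5             # weaponized exploits jump the queue (assumed weight)
    return min(score, 10.0)      # keep the familiar 0-10 scale

vulns = [
    {"id": "CVE-2014-0160", "cvss": 5.0, "in_metasploit": True},
    {"id": "CVE-XXXX-YYYY", "cvss": 6.8, "in_metasploit": False},  # hypothetical entry
]

# Sort by custom severity instead of raw CVSS to drive remediation priority.
for v in sorted(vulns, key=custom_severity, reverse=True):
    print(v["id"], custom_severity(v))
```

Under raw CVSS the placeholder vuln wins; once exploit availability is factored in, Heartbleed jumps to the top of the queue, which matches intuition far better.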

So far I’m dusting off some rusty MongoDB and Python skills to pull the various bits of data in and clean them up (note: XML is never fun to work with…never); a sketch of that plumbing appears below. While there is a lot of work to be done, I was able to post the teaser mentioned earlier on Twitter, which shows some tantalizing possibilities. Areas I will still have to consider include compliance obligations (e.g., if PCI-DSS says we need to patch vulns above a certain rating quickly, but we believe others are more important, how do we handle that?), the changing state of exploit code (can we determine how likely a vulnerability is to have exploit code published in the near future?), and some blending of internal data feeds between our vendors (we have feeds not just of Metasploitability, but also of whether a vuln can be used to pivot to other hosts).
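For the curious, this is roughly the shape of that plumbing. It’s a hedged sketch: the feed filename, database/collection names, and the assumption that each entry carries an id attribute plus score and summary elements are placeholders to adjust against the actual NVD feed schema, and it assumes a local MongoDB instance:

```python
# Hedged sketch: pull CVE entries out of an NVD XML feed and stash them in
# MongoDB for later enrichment. The feed filename, db/collection names, and
# the assumed element layout are illustrative; check the real feed schema.

import xml.etree.ElementTree as ET
from pymongo import MongoClient

def local_name(tag):
    """Strip the XML namespace prefix, since NVD feeds are heavily namespaced."""
    return tag.rsplit("}", 1)[-1]

def load_nvd_feed(path, collection):
    tree = ET.parse(path)
    for elem in tree.getroot():
        if local_name(elem.tag) != "entry":
            continue
        doc = {"cve": elem.get("id")}
        # Walk the subtree and keep the couple of fields we care about.
        for child in elem.iter():
            name = local_name(child.tag)
            if name == "score" and child.text:
                doc["cvss"] = float(child.text)
            elif name == "summary" and child.text:
                doc["summary"] = child.text
        collection.replace_one({"cve": doc["cve"]}, doc, upsert=True)

if __name__ == "__main__":
    client = MongoClient("mongodb://localhost:27017")  # assumed local instance
    vulns = client.vulndb.nvd                          # hypothetical db/collection
    load_nvd_feed("nvdcve-2.0-2014.xml", vulns)        # assumed feed file
    print(vulns.count_documents({}))
```

Nothing clever here: parse, flatten to documents, upsert by CVE ID so repeated loads stay idempotent, then enrich those documents with the exploit-availability flags as they arrive.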

I’m very excited to be working on a promising project that can deliver big returns for my organization, and I look forward to sharing the results and our approach with the community. I love this space: taking the great work of others and showing how it can be practically applied by evidence-based defenders.

Note: Hat tip to @MrMeritology’s Exploring Possibility Space blog as the source of this post’s title.