Addressing Churn in Vulnerability Management

This week, I’ve been reading Thomas Friedman’s newest work, Thank You for Being Late. His book chronicles the dizzying pace of accelerating change in technology and communication. Friedman makes the convincing point that because things are advancing so quickly, humans are having trouble keeping up. It was this issue of acceleration that was on my mind as I chatted with my friend Larry Hurtado about the impressive vulnerability management platform his team at Digital Defense has been developing for the cyber security community.

Many of you will be familiar with Larry's team from the fine security assessment and penetration testing work they've supported across our industry for many years. But what you may not know is that Larry and his team have spent considerable time studying the pros and cons of periodic security assessment of computing and server environments. What the team found in their detailed research can be summed up in one word: churn. And it is this issue of churn that drives the design of their vulnerability management platform.

It is no surprise that server environments experience constant change, or that periodic audits are often blind to administrative work performed between visits from the assessment team. But the Digital Defense team studied their data more closely and concluded that over 4% of a typical server environment will experience significant, relevant change over a 90-day period. To put that in perspective, in an environment of 10,000 servers, that is more than 400 hosts undergoing meaningful change every quarter. This is a dramatic finding, and one that exceeds what I would have expected. (The report is available at https://www.digitaldefense.com/wp-content/uploads/2015/08/Network_Host_Reconciliation.pdf).

It is this high rate of server churn that drives the technical approach to vulnerability scanning that Digital Defense has integrated into its Frontline Platform. The idea is that continuous scans can be performed with special automated tools designed to manage churn. These tools use a clever fingerprinting algorithm that examines parameters such as host identity, network address, naming, and other configuration items associated with servers in a given state. The resulting fingerprints support snapshots of what is considered normal in an environment across a continuum of advancing states.
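To make the idea concrete, here is a minimal sketch of how attribute-based host fingerprinting and reconciliation might work. The attributes, weights, and matching threshold below are my own illustrative assumptions, not Digital Defense's actual algorithm; the point is simply that weighing stable identifiers against volatile ones lets a scanner recognize a churned host as the same machine rather than a brand-new one.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HostSnapshot:
    """One scan's view of a host: the attributes a fingerprint draws on."""
    mac: str
    hostname: str
    ip: str
    os: str
    open_ports: frozenset

# Illustrative weights: stable identifiers count more than volatile ones.
WEIGHTS = {"mac": 0.4, "hostname": 0.25, "os": 0.15, "ip": 0.1, "ports": 0.1}

def similarity(a: HostSnapshot, b: HostSnapshot) -> float:
    """Weighted attribute agreement between two snapshots, in [0, 1]."""
    score = 0.0
    score += WEIGHTS["mac"] if a.mac == b.mac else 0.0
    score += WEIGHTS["hostname"] if a.hostname == b.hostname else 0.0
    score += WEIGHTS["os"] if a.os == b.os else 0.0
    score += WEIGHTS["ip"] if a.ip == b.ip else 0.0
    if a.open_ports or b.open_ports:
        # Jaccard overlap rewards partial agreement between port sets.
        overlap = len(a.open_ports & b.open_ports) / len(a.open_ports | b.open_ports)
        score += WEIGHTS["ports"] * overlap
    return score

def reconcile(prev_scan, curr_scan, threshold=0.6):
    """Pair each current host with its best prior match, or flag it as new."""
    matched, new_hosts = {}, []
    for curr in curr_scan:
        best = max(prev_scan, key=lambda prev: similarity(prev, curr), default=None)
        if best is not None and similarity(best, curr) >= threshold:
            matched[curr] = best    # same host; attributes churned in between
        else:
            new_hosts.append(curr)  # no plausible prior identity found
    return matched, new_hosts
```

The design choice that matters here is the threshold: set it too low and distinct servers are conflated; set it too high and a renamed or re-addressed host is misread as brand new, which is exactly the churn problem this style of fingerprinting is meant to avoid.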

The secret sauce for enterprise security teams is that by integrating this succession of server environment fingerprints with all-source threat feeds and with hooks into the underlying information technology (IT) server management platforms, they can improve their interpretation of administrative changes in a server environment. This is good news, because as any security expert will attest, more accurate interpretation of changes made by system administrators reduces the false positives that make scan response such a challenge. Larry estimates that the false positive reduction can reach nearly 40%.
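Again as a hedged illustration (the record formats and field names below are hypothetical, not Frontline's API), the cross-referencing logic might look something like this: a detected change that lines up with an approved change ticket is labeled routine administration, while one that matches threat-feed indicators is escalated rather than suppressed.

```python
from datetime import timedelta

def classify_change(change, change_tickets, threat_indicators, window_hours=24):
    """Label a detected host change by cross-referencing IT change records
    and threat intelligence. Unexplained changes fall through to triage."""
    window = timedelta(hours=window_hours)
    # An approved ticket near the observation time explains the delta:
    # this is the branch that suppresses false positives from admin work.
    for ticket in change_tickets:
        if (ticket["host"] == change["host"]
                and abs(ticket["approved_at"] - change["observed_at"]) <= window):
            return "expected"
    # A hit against threat-feed indicators raises priority instead.
    if change["new_value"] in threat_indicators:
        return "suspicious"
    return "review"  # unexplained change: route to an analyst for triage
```

In a scheme like this, Larry's estimated near-40% false positive reduction would come from the first branch: changes that a periodic audit would flag as anomalies are instead matched to the administrative work that caused them.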

The broad implication is that while security assessments and penetration testing will always have their place in cyber security, particularly for benchmarking organizational risk, their utility for continuous vulnerability management cannot match that of an automated platform. Furthermore, by using intelligent means to measure and interpret server changes, the security team gains continuous assessment, intelligent interpretation, and a reduced response workload.

Most enterprise security teams are not surprised by dramatic accelerations such as those described by Thomas Friedman in his book. Familiar advances like DevOps, for example, have already quickened the pace of change in cyber security. But for vulnerability management, enterprise security teams are advised to deal with this acceleration through intelligent tools that account for the inevitable churn that comes with supporting a dynamic server environment.

The good news, as Larry Hurtado explains so clearly, is that platforms are available and that churn can be managed effectively. Make this task one of your new priorities in 2017.