Building Security Foundations

Like most columnists, I try to publish articles that people will actually read. An advantage of social media in this regard is the availability of metrics, also known as clicks, that rank the popularity of different pieces. For example, I posted an article on using AI to improve your SOC, and it got nearly two thousand LinkedIn clicks. In contrast, a nice piece I composed on revitalizing NAC got about a tenth as many hits. Same author. Same clever wit (ahem). Very different results.

Cyber marketing experts review these types of results carefully and generally conclude that the flashy, new stuff will draw in more customer eyeballs than the stable, old stuff. A real challenge, however, is that the relative security effectiveness of these different options does not always correlate with their popularity. In fact, one could probably make the case that the older security stuff is usually better. (Remember: A method doesn’t get to be old unless it works!)

I had these thoughts in mind during a discussion with David Meltzer, CTO of Tripwire. I’ve spent considerable time with the company over the past few months, and I’ve been impressed with their willingness to focus on foundational cyber security – even if it means being less flashy. So, where many modern vendors tout their adaptive-this and machine-learning-that, Tripwire tends to talk about the management of configurations, logs, files, and vulnerabilities.

I asked David to share his personal observations, views, and predictions for how these basic security controls translate into reduced cyber risk for the modern corporate enterprise. Nearly every aspect of his responses lent support to the notion of foundational controls – which I think are missing from most existing corporate IT and network architectures. Below is a summary of my Q&A exchange with David:

EA: Let’s start with trends. What are you currently seeing in enterprise security and compliance platforms and solutions?

DM: We are seeing massive adoption of three basic themes: use of public cloud, adoption of DevOps, and the use of containerization in application development. For most large enterprises, the future will be hybrid; that is, environments combining physical servers, virtualization, and both public and private cloud. Visibility and the implementation of a consistent set of security controls across these systems will be needed to maintain a strong security posture in this new mixed environment. More organizations will continue to adopt DevOps practices, and security teams will need to keep up with new processes and technologies that introduce different kinds of risks and challenges. Containerization is an especially interesting trend to follow in terms of security. Maintaining visibility of containers and their contents can be challenging, as they tend to be numerous and change often. Security teams will need to keep up with their DevOps teams to implement proper foundational security controls on the contents inside those containers. There’s been good progress in this area, but we’ll see it continue to evolve.
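To make the container visibility challenge David describes a bit more concrete, here is a minimal Python sketch of one foundational idea: inventory the container images on a host and record their digests so that drift can be spotted later. This is purely illustrative (the baseline file name is my own invention), not a depiction of how any vendor's product works:

```python
# Illustrative sketch: inventory local Docker images and record their
# digests so that changes in image contents can be detected over time.
# The baseline path is a hypothetical choice; adapt to your own tooling.
import json
import subprocess
from pathlib import Path

BASELINE = Path("image_baseline.json")  # hypothetical baseline store

def list_image_digests():
    """Return {repo:tag -> digest} using the standard `docker images` CLI."""
    out = subprocess.run(
        ["docker", "images", "--digests", "--format",
         "{{.Repository}}:{{.Tag}} {{.Digest}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    digests = {}
    for line in out.splitlines():
        if " " not in line:
            continue
        name, _, digest = line.rpartition(" ")
        digests[name] = digest
    return digests

current = list_image_digests()
if BASELINE.exists():
    known = json.loads(BASELINE.read_text())
    for name, digest in current.items():
        if name not in known:
            print(f"NEW IMAGE: {name}")
        elif known[name] != digest:
            print(f"CHANGED  : {name} now {digest}")
else:
    BASELINE.write_text(json.dumps(current, indent=2))
    print(f"Baseline of {len(current)} images written to {BASELINE}")
```

A real program would of course go further – inventorying the packages inside each image, not just the image digests – but the principle is the same: you cannot secure container contents you have not enumerated.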

EA: How does configuration management factor into the enterprise security ecosystem?

DM: Misconfigurations, many of them easy to correct, have been the underlying cause of many successful breaches. Secure configuration management (SCM) is the control that assures systems are set up and maintained in a way that minimizes risk while still providing the essential business function of the system. Maintaining configurations is so vital to an organization’s data integrity that just about every security framework and compliance regulation related to security calls for SCM. While SCM can seem simple in a small organization, it’s quite complicated for enterprises that operate larger, more complex technology environments consisting of numerous systems, asset owners, and applications – all with differing configuration states and business requirements. For this reason, enterprises would benefit from technology that automates the assessment, monitoring, and management of configurations across all systems to ensure ongoing security and compliance.
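To illustrate what automated configuration assessment looks like at its simplest, here is a hedged Python sketch that compares a host's sshd_config against a small hardening baseline. The baseline values are examples I chose for illustration, not an authoritative benchmark:

```python
# Illustrative sketch of automated configuration assessment: compare a
# system's sshd_config against a hardening baseline and report drift.
# The baseline settings below are hypothetical examples.
from pathlib import Path

BASELINE = {                      # desired secure settings (illustrative)
    "permitrootlogin": "no",
    "passwordauthentication": "no",
    "x11forwarding": "no",
}

def parse_sshd_config(path):
    """Return {directive -> value} for simple 'key value' lines."""
    settings = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            settings[parts[0].lower()] = parts[1].strip().lower()
    return settings

def assess(path="/etc/ssh/sshd_config"):
    actual = parse_sshd_config(path)
    for key, wanted in BASELINE.items():
        got = actual.get(key)
        if got is None:
            print(f"MISSING : {key} (expected {wanted})")
        elif got != wanted:
            print(f"DRIFT   : {key} = {got} (expected {wanted})")
        else:
            print(f"OK      : {key}")

if __name__ == "__main__":
    assess()
```

Multiply this one check by thousands of systems, dozens of asset owners, and hundreds of directives, and you see why David argues that enterprises need automation here.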

EA: How about file integrity monitoring – how do CISO teams provide for this important control?

DM: These days file integrity monitoring (FIM) might be more accurately described as “system integrity monitoring” – which is a fundamental and foundational security control because it answers the key question: Are systems still in a secure, trusted state, and if not, what changed? What we commonly refer to as FIM has evolved quite a bit over the years. I think of it now as a broader process, not just monitoring changes to files but also monitoring the integrity of registries, databases, and applications. FIM has also evolved to go beyond just getting visibility of the changes. A good FIM or system integrity monitoring program should also then be able to sort through and prioritize those changes to help you build an actionable workflow for addressing them. For example, is a change introducing risk or non-compliance? Does that change go outside the established organizational or regulatory guidelines?
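The core FIM loop is easy to sketch. The illustrative Python snippet below hashes a watch list of files, compares the results against a stored baseline, and reports what changed – the key question David identifies. The watch list and baseline location are hypothetical:

```python
# Illustrative FIM sketch: hash monitored files, compare against a stored
# baseline, and report additions, changes, and removals.
import hashlib
import json
from pathlib import Path

WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]  # example watch list
BASELINE = Path("fim_baseline.json")               # hypothetical store

def sha256(path):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot():
    return {p: sha256(p) for p in WATCHED if Path(p).exists()}

current = snapshot()
if BASELINE.exists():
    known = json.loads(BASELINE.read_text())
    for path, digest in current.items():
        if path not in known:
            print(f"ADDED   : {path}")
        elif known[path] != digest:
            print(f"CHANGED : {path}")  # the key question: what changed?
    for path in set(known) - set(current):
        print(f"REMOVED : {path}")
else:
    BASELINE.write_text(json.dumps(current, indent=2))
    print(f"Baseline written for {len(current)} files")
```

What separates a toy like this from a real program is exactly what David describes: sorting and prioritizing the detected changes so they feed an actionable workflow rather than a pile of alerts.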

EA: Do most enterprise teams deal with vulnerabilities in a proper manner? Do they need automated support?

DM: Ongoing exploits of known vulnerabilities show that vulnerability management (VM) is still a challenge for many organizations. Most large organizations have some form of VM in place, but generally there’s a lot of opportunity for improvement. We see a lot of VM programs that demand significant time and manual effort from their teams, so it’s a matter of those programs maturing and incorporating more automation. VM can be hard to tackle when you’re dealing with data overload and relying on slow, error-prone manual analysis. Some specific questions to answer when maturing your VM program include the following: Are you scanning everything that needs to be scanned? Where are you deploying your scan engines across your network? Are you using credentialed scans? How quickly are you able to remediate? How efficiently and accurately are you able to prioritize risks? Do you have the right metrics? How many of your assets are you scanning? What is the effectiveness of remediation? How well is vulnerability information being communicated? Are the asset owners aware of the findings? Are there executive dashboards available for upper management? Is the SOC getting this information? Could your IT service management team benefit from knowing your vulnerability state? That’s a long list of questions, but they are all vital to proper VM.
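David's point about prioritization lends itself to a simple worked example. The sketch below ranks scan findings by CVSS score weighted by an asset-criticality factor. All of the findings, CVEs, and weights are invented for illustration; a real program would pull this data from its scanner's API and asset inventory:

```python
# Illustrative sketch of risk-based vulnerability prioritization: rank
# findings by CVSS base score weighted by asset criticality.
# All data below is hypothetical.

findings = [  # (asset, CVE, CVSS base score)
    ("web-prod-01", "CVE-2021-44228", 10.0),
    ("dev-test-07", "CVE-2021-44228", 10.0),
    ("db-prod-02",  "CVE-2019-0708",   9.8),
    ("kiosk-lobby", "CVE-2017-0144",   8.1),
]

# hypothetical criticality weights assigned by asset owners
CRITICALITY = {"web-prod-01": 1.0, "db-prod-02": 1.0,
               "dev-test-07": 0.4, "kiosk-lobby": 0.2}

def risk(finding):
    """Weight raw severity by how much the business cares about the asset."""
    asset, _, cvss = finding
    return cvss * CRITICALITY.get(asset, 0.5)  # default weight if unrated

for f in sorted(findings, key=risk, reverse=True):
    asset, cve, cvss = f
    print(f"{risk(f):5.2f}  {asset:12s}  {cve} (CVSS {cvss})")
```

Note how the same CVE lands at very different priorities on a production web server versus a test box – which is the whole argument for prioritizing by risk rather than raw scanner output.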

EA: Are security logs managed properly in the enterprise?

DM: Sifting through mountains of log and event data can get overwhelming. In today’s environment, what you really need is log intelligence, with security analytics and forensics for rapid response. Although almost every organization we work with has some log management system in place, there’s often a lack of actionable information coming out of those systems to help reduce risk or prevent breaches. While just collecting the logs may be a valuable way to improve compliance, organizations should explore use cases that will help reduce risk and enable them to proactively identify potential issues.
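To show the difference between merely collecting logs and extracting actionable information from them, here is a small illustrative Python sketch that flags source IPs with repeated failed SSH logins. The log format, sample data, and alert threshold are all assumptions made for the example; in practice this kind of logic would live in a SIEM or analytics pipeline:

```python
# Illustrative sketch of turning raw logs into actionable signal: count
# failed SSH logins per source IP and alert above a threshold.
import re
from collections import Counter

FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5  # hypothetical alert threshold

def failed_logins(lines):
    """Return a Counter of failed-login source IPs found in the lines."""
    hits = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

# invented sample data standing in for a real auth log
sample = [
    "sshd[411]: Failed password for root from 203.0.113.9 port 4242 ssh2",
] * 6 + [
    "sshd[412]: Accepted password for alice from 198.51.100.7 port 51512 ssh2",
]

for ip, count in failed_logins(sample).items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {ip}")
```

Trivial as it is, this is the shape of the use cases David recommends: a log that sits in storage satisfies an auditor, while a log that generates a timely alert actually reduces risk.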