Six years before Neil Armstrong set foot on the moon, the computing world celebrated as IBM deployed an air defense system called SAGE (Semi-Automatic Ground Environment). This early network, which included nearly two dozen nodes, each with production and backup computers, was tasked with using radar systems, air traffic control, and satellites to detect Soviet bombers. It was arguably the first computer network to demonstrate the power of communicating nodes to accomplish a difficult task.
Decades later, when PCs emerged, networks began to evolve in many directions. Enterprise networks, for example, were mostly internal. Businesses did not require the level of interconnectivity found on earlier systems such as SAGE, and as such, many network operators had a difficult time determining how to manage the boundary between internal and external—there was no established protocol to follow. Nevertheless, the self-contained localized networks they ran were quite an achievement—a digital transformation, if you will—and they served companies well.
Still, everyone knew that corporate systems needed protection, and so the discipline of network security was born. Network security at the time was built on the premise that offices were centralized and that a perimeter could be established around the computers in the network. The firewall was invented, and the technology worked well for many years. Soon, though, business leaders started to recognize the benefits of globally distributed offices. The perimeter began to dissolve, which—combined with the dreaded cloud—sent security practitioners into a panic.
Suddenly, the network meant something different: Hardware assets were not necessarily on-premises. Data could be anywhere. Software was everywhere! The potential network attack surface was so large and so distributed that perimeter controls alone were ineffective. The attacker could already be inside the network. Thus emerged one of the central questions in cyber security today: How can an organization keep track of its network amidst perimeter change and software explosion?
In 2004, the San Jose-based cyber security company RedSeal was founded to address exactly this modern problem of dealing with protection challenges on the ever-expanding corporate network. Their insight was that companies, and even dedicated IT and security departments, didn’t understand what their networks looked like. If the problem was bad in 2004, they surmised, then wrapping one’s head around the scope of devices and access would be a hundred times more complicated in 2019.
During a recent call, RedSeal’s Chief Product Officer, Kurt Van Etten, referenced an enterprise challenge that is all too familiar. He shared with Ed Amoroso and me that maintaining and understanding one’s network asset inventory, both hardware and software, is the key to maintaining a strong cyber security program. It’s not sexy, and not what gets the most attention in media or at conferences, but companies must know what they have, where it is, and who has access.
This same basic premise had spawned the Center for Internet Security (CIS), now an expert authority on building a fortified cyber security program, just a few years earlier, and would produce the first Top 20 CIS Critical Controls a few years later. Organizations of all shapes and sizes today rely on the CIS Controls to guide their security programs. “Unfortunately,” said Van Etten, “companies still struggle to keep up with knowing what they have, how it’s connected, and what’s at risk.”
That’s where RedSeal comes in: Their platform is deployed as a physical or virtual appliance to ingest input from firewalls, routers, switches, load balancers, and vulnerability scanners. The inventory phase allows companies to understand the scope of their networks and endpoints, vulnerabilities (based on scanner data), assets, access paths, and how attackers could exploit those paths to access critical information. From this data, the platform develops a network model.
Van Etten explained that the RedSeal output differs from a traditional network map. The user can query the RedSeal model to develop context and prioritize vulnerabilities. Importantly, the technology can see across public and hybrid cloud deployments and determine whether proper network security segmentation exists—capabilities that remain a challenge for many organizations. Without this visibility, any organization will struggle to understand the depth and breadth of its infrastructure and use that information to build prevention, detection, and mitigation plans.
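To make the idea of a queryable network model concrete: RedSeal's actual model and APIs are proprietary, but the general concept can be sketched in a few lines of Python using the networkx library and invented device names. Nodes represent network elements discovered from device configurations; directed edges represent connections permitted by firewall and router rules; querying the model means asking for the access paths an attacker could traverse.

```python
# A minimal, hypothetical sketch of a queryable network model.
# Device names and topology are invented for illustration only.
import networkx as nx

# Nodes are network elements ingested from device configs; edges are
# permitted connections derived from firewall/router rule analysis.
model = nx.DiGraph()
model.add_edge("internet", "dmz-firewall")
model.add_edge("dmz-firewall", "web-server")
model.add_edge("web-server", "app-server")
model.add_edge("app-server", "db-server")   # db-server holds critical data
model.add_edge("corp-lan", "app-server")

# Query the model: which access paths lead from the internet to the
# critical database? Each path is a candidate attack path to prioritize.
paths = list(nx.all_simple_paths(model, "internet", "db-server"))
for path in paths:
    print(" -> ".join(path))
```

Unlike a static network map, a model like this can answer questions ("can X reach Y, and through what?") rather than merely display topology, which is what makes vulnerability prioritization by reachability possible.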
This is why the CIS Controls have been used for so many years by security practitioners, and why, although elements of their list have changed as networks, threats, and connectivity types have changed, hardware and software asset inventory has always remained at the top. This is also why RedSeal focuses their product on telemetry from layers 2, 3, 4, and 7, and across VPCs, virtual networks, security groups, and tenants. It’s an impressive list of asset data they collect.
As network and device types expand and communicate within larger circles, security professionals will need a reliable way to inventory and monitor communications and understand the anomalies that might occur. They will need to validate that appropriate policies are implemented, and that those policies are functioning as intended (critical for both security and compliance purposes). Such work is perhaps not the most innovative work that operators and admins will do, but it might be the most important.
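Policy validation of the kind described above can be automated as reachability assertions against a network model. The sketch below, again using networkx and invented segment names (not any real RedSeal feature), checks a segmentation policy by flagging forbidden source/destination pairs that are actually reachable.

```python
# Hypothetical segmentation-policy check against a reachability graph.
# Segment names and rules are invented for illustration only.
import networkx as nx

# Reachability graph derived from device rules.
model = nx.DiGraph()
model.add_edge("guest-wifi", "internet-gw")
model.add_edge("corp-lan", "internet-gw")
model.add_edge("corp-lan", "pci-zone")
model.add_edge("guest-wifi", "corp-lan")   # misconfiguration to be caught

# Segmentation policy: pairs that must never be able to communicate.
forbidden = [("guest-wifi", "pci-zone"), ("guest-wifi", "corp-lan")]

# A policy violation is any forbidden pair with an actual access path.
violations = [(src, dst) for src, dst in forbidden
              if model.has_node(src) and model.has_node(dst)
              and nx.has_path(model, src, dst)]
print("policy violations:", violations)
```

Run continuously, checks like this turn a written segmentation policy into something testable for both security and compliance audits: the misconfigured edge above surfaces as two violations instead of going unnoticed.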
If you were to ask any IT or security professional not currently using a tool like RedSeal this question: “What does your network topology look like right now?” they’d probably look at you like you were trying to land the first space shuttle on the moon. For this reason, I recommend you reach out to Van Etten and the RedSeal team. Ask them to explain how you might use their fine solution to better understand the inventory of your own network.