Intelligent Cyber Metrics

on 07 Apr 2018

A naughty little book I remember enjoying back in graduate school (no, it’s not that) is Darrell Huff’s “How to Lie with Statistics.” Published 2^6 years ago, in 1954, the book explained how to twist metrics for twisted purposes – sort of a foundational-principles guide for producing fake news. Fittingly, the author later used those same techniques to help cigarette companies lie to Congress in the Fifties and Sixties. (By the way, when I heard that story, I tossed my copy in the can.)

Speaking today at the 2018 New York CISO Executive Leadership Summit sponsored by HMG Strategy, Aleksandr Yampolskiy delivered a rousing talk, one that evoked the spirit (the good part) of Huff’s book. Yampolskiy, who serves as Founder and CEO of Security Scorecard, provided a useful taxonomy of common tricks used in modern business to produce cyber metrics that might either knowingly or unintentionally mislead the recipient.

“It is not uncommon,” he explained to attendees, “for claims to be made by cyber security professionals that report posture in a way that is simply not consistent with the actual data.” He went on to offer examples in which this is done – couching his explanation in the context of the arguments made by CISOs and other cyber managers to make their case. Below is a sampling of some ill-advised metrics-based arguments:

The Incomplete Picture Argument. This case involves giving the “what” in a metric, without supplying the associated “so what” context. “A security manager might claim to have seen an average of two million attacks per week,” Yampolskiy said, “but without giving the associated context, it is impossible for recipients of the claim to determine if this is good, bad, or something in between.”
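A tiny sketch can make this concrete. The numbers below are invented for illustration (only the two-million-attacks-per-week figure comes from the talk); the point is that pairing a raw count with a baseline and an outcome turns the “what” into a “so what”:

```python
# Hypothetical illustration: a raw attack count means little without context.
weekly_attacks = 2_000_000           # the raw "what" from the claim

# Assumed context values, invented for this sketch:
prior_weekly_avg = 2_400_000         # trailing weekly average for the same environment
blocked = 1_999_850                  # attacks stopped by existing controls

# The "so what": trend versus baseline, and how many actually got through.
change_vs_baseline = (weekly_attacks - prior_weekly_avg) / prior_weekly_avg
block_rate = blocked / weekly_attacks
got_through = weekly_attacks - blocked

print(f"Change vs. baseline: {change_vs_baseline:+.1%}")   # e.g. -16.7%
print(f"Block rate: {block_rate:.2%}; got through: {got_through}")
```

With the baseline and block rate attached, the same two-million figure reads as a quieter-than-usual week with 150 successful attacks to investigate – a very different message than the bare count.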

The Opinion Survey Argument. This case involves using data from user inquiries to make some seemingly related point. For instance, asking a user base whether they might “feel safer” after a given cyber security deployment says nothing meaningful about the actual risk reduction of the deployment. This is an important point, because asking users about security is a common technique in business today.

The Bombardment Argument. This case involves blasting so much data across a reporting interface (sound familiar?) that it becomes virtually impossible to determine what is really going on. “We often see pie charts, graphs, and alerts thrown at us in reporting screens,” he explained, “to the point where there is so much data to interpret that important events are missed in all the noise.”

Despite these and several other common misuse cases for cyber metrics, Yampolskiy was upbeat about the prospects for correct quantification of security reporting. He explained, in the context of his Security Scorecard offering, that so long as four basic considerations are covered adequately, the use of quantified data and representational scoring of posture can be done effectively.

First, the data must be relevant. Yampolskiy made the excellent and refreshing case that “less is more” in most cyber security reporting metrics. He cited several practical examples of how the Security Scorecard rating system employs this principle to ensure a high degree of relevance for users. Just about every one of us in the cyber security industry has been subjected to a barrage of useless posture data. This must be avoided.

Second, the data must be objective. Yampolskiy explained that subjectivity has no place in cyber security reporting – and he is correct. Avoiding opinion might be hard for many in our industry, unfortunately, because so many factors are involved in presenting and interpreting data. Many budgets, for instance, are directly influenced by security posture metrics, so the temptation to inject subjective concerns is high. This also must be avoided.

Third, the data must be substantive. This carried the interesting corollary that in many cases, it might be better to be silent if nothing useful can be claimed. “No metrics is better than empty metrics,” Yampolskiy explained, and this is exactly right. He went on to give examples of so-called “vanity metrics” that are included in a report or presentation for little more purpose than to make an unnecessary or irrelevant claim.

And finally, the data must include context. This resonated perhaps most strongly for me, since it has been my own experience that avoiding scope and context is a technique used over and over by presenters to distort their claims. Every time you hear a massive number – and this can be the number of attacks, the size of a DDoS, or whatever – you must demand the context. In many cases, this can defuse metrics that might otherwise be considered shocking.

My thanks to Aleksandr Yampolskiy for providing such a clear treatment of metrics in the context of cyber security. And I must say that while he was careful not to spend time marketing the Security Scorecard tool, I feel no such constraint here: I strongly recommend that you get in touch with Yampolskiy’s fine team and learn more about how they intelligently use statistics to score the security posture of third parties and other teams handling your data.

Let us know what you learn.