Let’s start with CNN – and no, not that CNN, but rather convolutional neural networks. This class of deep neural networks is the workhorse for computer vision, and is one of the underlying forces behind recent advances in artificial intelligence. Their design is inspired by the image processing done in the visual cortex of the animal brain. When you see one of those cat recognition demos by an AI system, a CNN is likely doing most of the work.
Many cyber security researchers have tried to apply the tenets of AI – and computer vision – to the common problems addressed in cyber security: detecting malware variants, identifying zero-day attacks, and the like. And while some companies have seen reasonable commercial success with their AI tools, a roadblock to faster progress is the lack of sufficient volumes of training data about real vulnerabilities and attacks.
In contrast, the ImageNet project, initiated a decade ago by Stanford’s Fei-Fei Li, has published over 14 million images across more than 20,000 categories (including cats) for AI researchers. Nothing of this scale exists for cyber security, because businesses won’t report their embarrassing security boo-boos. (Warning to America: China pays no mind to this middling privacy issue, and will soon develop AI recognition for security that will outpace everyone.)
What we have, therefore, in AI for cyber security is a plethora of siloed efforts, most by tech start-ups that find an acceptable, but never optimal, data source to train their product models. This helps us get by, but let’s face it: cyber security is behind in AI. And as object recognition evolves to situational interpretation, security AI (except perhaps in the Chinese military) will have trouble keeping up. We need inspired leaders with new ideas.
When computer vision is applied surgically, however, to cyber security problems, the results can be encouraging. I spent a wonderful afternoon chatting with the principals of security start-up Pixm recently, to learn how they are applying computer vision technology to cyber security and, specifically, phish detection. I can report that their work is creative, and suggests a new angle for the use of computer vision in security. Here is what I learned:
“We use artificial intelligence to visually recognize fake websites,” explained Chris Cleveland, Founder and CEO. “The solution is endpoint-oriented, rather than purely cloud-based, because we’ve learned that detection of phishing requires an end-user vantage point. Many of the phishing use-cases we’ve examined cannot be identified by man-in-the-middle techniques performed in the cloud.”
Cleveland had the inspiration to apply computer vision to phishing while studying as a graduate student at Columbia University several years ago. With his interests in machine learning, his timing was good, because advances in deep, distributed neural network algorithms, as well as leaps in processing capability, created many new opportunities for researchers. And thus the spark was lit for Pixm.
“Our first prototype was designed for the cloud,” he explained to me. “And it validated our views on detecting phishing, which we knew was behind the majority of breaches. But computer vision technology was becoming more real-time, and our instinct was that we could develop an endpoint agent that could peruse and recognize attacks visually via screenshot after the phishing page has been served to the user. You cannot do this with a cloud system.”
Pixm technology is best understood in the context of browsing web pages: if a user opens an authentic website, the Pixm agent uses visual scanning to confirm that the page is fine. If that user opens a fake website, however, then the Pixm agent recognizes this using its computer vision technology and quickly blocks access to the site. This approach differs from traditional blacklists and promises a more dynamic solution for zero-day attacks.
The Pixm agent includes browser extensions for Chrome and Firefox, which are installed by the user (the company offers personal and enterprise editions). Users then go about their normal activity, and if Pixm “sees” something amiss in a served page, using the verified page as a guide, it warns the user. If things look bad, the Pixm software blocks the page and disables the ability to click links or enter text.
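Pixm has not published its detection internals, so here is a purely illustrative sketch of the general idea behind visual page comparison: reduce a rendered screenshot to a compact fingerprint, then flag a served page whose fingerprint is close to a known brand’s genuine page. The average-hash function, the toy pixel grids, and the threshold below are my own assumptions for illustration, not Pixm’s method; a production system would use a trained CNN over real screenshots rather than a simple hash.

```python
# Toy sketch of visual similarity detection for phishing pages.
# Assumption: screenshots are given as 2D grayscale pixel grids of equal size.
# A real system (like Pixm's) would use deep CNN embeddings, not this hash.

def average_hash(pixels):
    """Reduce a grayscale pixel grid to a bit vector: 1 where a pixel
    is brighter than the grid's mean, 0 otherwise."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count the bit positions where two hashes disagree."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_like(reference, candidate, threshold=4):
    """Flag the candidate page as visually similar to the reference
    brand page when the hash distance falls under a (toy) threshold."""
    return hamming_distance(average_hash(reference),
                            average_hash(candidate)) <= threshold
```

In this sketch, a pixel-perfect or lightly perturbed copy of a login page hashes close to the genuine reference and is flagged, while an unrelated page lands far away. The interesting engineering is in making the fingerprint robust to resizing, color shifts, and deliberate obfuscation, which is where deep learning earns its keep.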
As with many creative innovations such as Pixm, the technology will inevitably evolve to produce increasingly good results. I did not test the software on my own browsers, but my instinct, having looked at solutions like this for decades, is that it will provide an excellent additional tool in the complement of security controls used to protect users. Blacklists, for example, do add value, and could easily be integrated with this new capability.
I love the idea of using computer vision to identify phishing, and I will keep an eye on Pixm as it evolves and improves its product through usage. Back in the early days of intrusion detection – and I was there – many emerging algorithmic techniques that we take for granted today could be viewed as either glass-half-empty or glass-half-full in terms of their early performance on live data. With Pixm, I prefer to see this as glass-half-full.
Make sure to spend time with this creative team. Ask to see their solution demo and ask Chris Cleveland to take you through the story of his fine company. I’m betting that this visual detection technique will evolve into something that we will assume to be present in our ever-growing arsenal against phishing. And, as always, please be sure to share with us what you’ve learned.