Cybersecurity officials working in the White House were actively plotting a murder. The intended target? Passwords. Those pesky vermin that authenticate user identities. Jeanette Manfra revealed the high-level plot during our recent interview. Manfra is a director of risk and compliance at Google Cloud. Earlier in her career she worked as assistant secretary for cybersecurity and communications at the U.S. Department of Homeland Security. Prior to that she was on the National Security Council staff at the White House. We invited her to talk about Identity, Authentication, and Authorization, which we’re calling INZ for short, and the challenges they pose to building security. We also invited her colleague Bert Kaminski to join her. Kaminski, an in-house lawyer and director at Google Cloud, previously worked at Oracle. They had lots to say about how their company is toiling to improve security—for employees and customers alike—without introducing improvements that feel like work. And we did have a little fun with passwords, the security headache everyone loves to hate.
TAG Cyber Law Journal: When you think about identity, authentication, and authorization, what are the biggest challenges to strengthening security?
Jeanette Manfra: Similar to most security areas, there are technological challenges. There are also significant cultural and operational challenges. And the way that many organizations think about these three is built off of decades of evolution. Thinking about how we improve identity management, both from a technological and an operational standpoint, can be very challenging. The current methods are deeply embedded into how organizations operate—for providers and users. If we want to change that, we have to provide alternatives that cause less friction, because introducing more friction into the equation is not going to increase adoption. And then we have to think about how to change habits.
Bert Kaminski: This has become a big issue recently because of the vast increase in spear phishing. Some of that is driven by the ability of scammers and cybercriminals to scan the web, find identifiers of users, and then convert that into unauthorized access. Credentials are being compromised, passwords are being stolen, and users are being tricked by social engineering into giving up some of their authenticators. So this is the challenge to security.
TCLJ: It's almost impossible to have a conversation like this without talking about passwords. Are they doomed? Can they be completely eliminated? And is that your fondest wish? I mean, have you ever considered ways to murder the password?
Manfra: When I was still in the government, I was on the National Security Council and working for the White House cybersecurity coordinator, and he would always say, “How are we going to kill the password dead?” And this was several years ago. There have been a lot of efforts. It gets back to the technology, and operations, and the culture. The password started in a much simpler environment. And it made sense at the time. And then, as hackers got increasingly sophisticated, we said, “Let's just make it more complicated.” To the point where you've got this crazy 26-character alphanumeric thing that you can't possibly remember. And then if you do remember it, you just use it for every single service that you have, which of course undermines the whole purpose. I think we can live in a passwordless future. I think many people—from businesses to our grandparents—would love to be able to live in that future. Google has done a lot to try to get us there, as have many other organizations. In the last couple of months we’ve talked about how we're going to be automatically enrolling all of our users into two-step verification. Thinking about other things, whether those are biometrics or phones, there are a lot of things that can be used in place of a password. I do think it's going to take us a long time. The concept of a very complex password is probably here for a while. I use Chrome Password Manager personally. It suggests a complicated password—I don't even remember what it is, because it's automatically stored in my password manager.
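The password-manager approach Manfra describes—a long random credential the user never memorizes—can be sketched in a few lines. This is an illustrative sketch only, not Chrome Password Manager's actual algorithm; the `generate_password` function and its 26-character default are our own assumptions, chosen to echo the “26 alphanumeric” complexity she mentions.

```python
import secrets
import string

def generate_password(length: int = 26) -> str:
    """Generate a high-entropy alphanumeric password, in the spirit of
    what a password manager suggests and stores on the user's behalf.
    (Hypothetical sketch -- not any real manager's implementation.)"""
    alphabet = string.ascii_letters + string.digits
    # The secrets module draws from a cryptographically secure source,
    # unlike the general-purpose random module.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Because the manager both generates and stores the credential, each service gets a unique password, which addresses the reuse problem she raises.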
Kaminski: Passwords will probably be phased out, but before that they will be increasingly strong and increasingly encrypted. I'm just taking a look at the recent executive order that was issued [in May]. And there's a section that mandates that the federal government implement stronger cybersecurity standards, which includes, among other things, multifactor authentication and encryption. So passwords being stored in an encrypted manner is going to be needed. But the whole point of a password is to identify a user in the system. And you'll never completely decouple authentication and identity. You need to have some sense of who is in the system, are they the right people, and are they in the right areas and doing the right things?
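On Kaminski's point about never storing passwords in the clear: in practice, systems typically keep a random salt plus a deliberately slow, salted hash rather than a reversible encryption of the password. A minimal sketch using Python's standard-library scrypt function (the helper names and parameter choices here are our own; real systems should follow current parameter guidance):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash of the password. Only the salt and
    digest are stored -- never the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare."""
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**26)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

The slow derivation and per-user salt are what make bulk cracking of a stolen credential database expensive, which is the threat Kaminski describes.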
TCLJ: Let’s jump to the pandemic, and the fully remote workforces that we've been living with for quite a while now. How has that affected all of these security issues?
Kaminski: Certainly the pandemic has caused a rapid move to working from home. Bring Your Own Device and work-from-anywhere-at-all-times was a huge trend before that, of course, but the pandemic accelerated the process dramatically. McKinsey estimated it would take a company around 22 months to implement fully remote work. In practice, companies pivoted to it in about 11 days. But that creates the challenge of multiple devices in multiple time zones accessing from all sorts of different endpoints at all sorts of times. So you're no longer within the confines of a corporate firewall, knowing who's in and who's out. When you have this heterogeneous way of accessing, it's much harder to ensure that you've got the right users. Companies have been adopting a zero trust approach toward security, which essentially means that you assume that everyone who's trying to enter is an attacker. That really has made the challenge of identity and access controls that much more prominent.
Manfra: We’re a pioneer in what is now called zero trust. We refer to it as BeyondCorp, which literally means beyond the corporate network. Zero trust can be confusing, because it's come to mean a lot of different things. But at Google, there was a key security insight before I got here. The location of your network doesn't provide you with any intrinsic benefits anymore. There was this notion of having a digital fortress, and everything inside is something or someone you can trust. But that corporate network doesn't give you inherent trust anymore. In addition to that security insight, maintaining productivity with a decentralized workforce and without the use of a VPN was also an important goal for BeyondCorp. Zero trust is strongly linked to identity, by the way. You have to ensure you have the correct mechanisms in place to appropriately authorize and authenticate individuals and assets. Many organizations were thinking about zero trust or had already begun implementing it when the pandemic forced them to jump into it with both feet. And in many ways, because organizations were struggling to manage the VPN capacity they needed in order to have all of these users come in, they were trying to quickly figure out how to take legacy security and apply it to their full workforce. And they had no idea where everyone was connecting from. That's why we saw a lot of people trying to take a multiyear zero trust digital transformation and cram it into a couple of months.
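The core idea Manfra describes—authorizing each request on user and device signals rather than network location—can be sketched as a simple policy check. This is an illustrative sketch of the zero-trust pattern, not BeyondCorp's actual policy engine; the `Request` fields and the two-tier sensitivity model are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    user_authenticated: bool   # e.g., password plus security key
    device_trusted: bool       # e.g., device inventory / posture check
    resource_sensitivity: str  # "low" or "high"

def authorize(req: Request) -> bool:
    """Evaluate every request on identity and device signals.
    Note what is absent: no check of source IP or network segment --
    location confers no inherent trust."""
    if not req.user_authenticated:
        return False
    # More sensitive resources demand stronger device assurance.
    if req.resource_sensitivity == "high" and not req.device_trusted:
        return False
    return True

print(authorize(Request("alice", True, True, "high")))    # True
print(authorize(Request("mallory", False, True, "low")))  # False
```

A real deployment would evaluate far richer signals per request, but the structural point is the same: trust is computed every time, never inherited from being “inside” the perimeter.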
TCLJ: As we think about this INZ issue in security, there's the workforce and the internal implications for a company. And there's also how you're dealing with your customers, your clients, your consumers. Do you see them as a separate set of challenges?
Manfra: At Google, everything we make available externally was first used internally. We're trying to eat our own dog food—figure things out, try to work out the kinks before we release it. The products that Bert and I are using internally are the same ones that we have externalized, or soon will externalize, to others. For example, the identity authorization mechanisms and security tokens that we use internally are capabilities that we now offer through our Advanced Protection Program to all customers. Coming from my last organization in the government, where we were really just starting on our journey to the cloud, to an organization that has all of these zero trust capabilities built in—it’s an amazing experience. We haven't had a single phishing incident related to a password compromise since the introduction of the security keys.
Kaminski: There's one difference when it comes to identifying users who are consumers versus employees. Putting Google aside, for external users like consumers, a lot of companies validate identity through personal information, such as birthdays and social security numbers. You don't necessarily need to do that when it's an employee. So you have different kinds of credentials and IDs to validate an internal user versus external consumers. Companies sometimes collect and use more personal information for consumers than they would for an employee.
TCLJ: And I would assume that you're less worried about friction internally. I mean, it's part of your job, right? If you have to go through a little friction, fine. But you don't want to lose customers.
Manfra: There's significantly less friction here than I was previously used to. But as an organization, you have to calibrate. If you have highly sensitive information, then you're going to introduce more friction, and your users need to accept that if they want to work on this highly sensitive information. What I like about the way that Google has approached it, and other organizations as well, is we recognize that if we make it too hard, nobody's going to do it. There was a great analogy that I heard once. When thinking about people signing up for retirement plans, if you provide people an opt-in model, you get very low acceptance rates. But once organizations started automatically signing up new employees, suddenly you're getting 80, 90 percent participation. And you do the same thing with security. We need to make security automatic and invisible to the majority of people.
Kaminski: I'll just add that it's all about trust. Users may be willing to take the extra steps if they trust the system and the service. So when you're talking about a market-facing solution, people will utilize your service and buy your products if they feel that they're secure. And they may be more willing to do that if you show that it's a benefit as opposed to a burden.
TCLJ: Are there ways in which recent improvements in security have collided with requirements or desires for privacy? I note that Google is headquartered in California, and California privacy laws are changing rapidly and have taken the lead in this country, which doesn't have a federal privacy law.
Manfra: I see security and privacy as largely two sides of the same coin. The more security you can build in the system, usually the more privacy you are also building into the system. There are times, as you noted, where either through practice or through the way the tech works, you need information that some may consider private in order to achieve security outcomes. What's interesting about what's happening right now is the search for a definition of what is private information. And if you compare the U.S. versus Europe, there's a rich debate. You need to get specificity in order to be able to implement it on the technical side. Which specific types of data are personal or private? Should a user have a right to some privacy? And what’s the difference between a consumer versus an employee of a company—and how the company needs to be able to implement certain security measures? How much privacy should I expect as an employee?
I don't have perfect answers to all of these. A lot of what we're working on internally is, again, how can you have the best security while having the necessary guarantees of privacy? But as to the definition of what privacy means, you noted that there's not a federal privacy law. There's a patchwork to the extent that some states have one, and it usually deals with breach notification and things like that. You also have evolving concepts in Europe and beyond. But I do believe that it can collide, often when it comes to forensics. And when you want to be able to say, "OK, is the subject of the email private?" That's very useful for doing forensic analysis on spear phishing emails. Is an IP address private? That's very useful for security configuration. To me it's about defining and getting to a consensus on what a user and/or employee should have as a reasonable expectation of privacy. And then how do you realign security practices and tooling to account for that?
Kaminski: There's not necessarily a trade-off. Privacy law recognizes security as being a key element. And this is why it's called “data protection.” You can't have privacy without the security element. Google is very committed to tracking these laws and providing security and privacy built into its products and services. That's fundamental to the DNA of what Google does. We want the best user experience not only from a performance standpoint, but from a trust standpoint as well. So we engineer privacy and security into them. And the data protection laws require that. Some are more prescriptive than others. As you know, there are certain state laws that actually talk about encryption, and others speak more about using reasonable security based on the circumstances and type of data. One last point: although it is a California company, Google works to adhere to privacy laws that apply both in California and elsewhere.