Security Ethics for Robots

on 24 Sep 2018

I first heard about Twitter directly from Jack Dorsey. He and I were standing backstage about a decade ago in New York City, chatting about his new service. Hugh Thompson, now CTO of Symantec, was also there, and after Jack rushed off to prepare for his on-stage interview, I made one of the dumbest comments in the history of technology: “Hugh,” I said confidently, “I don’t see how that Twitter thing can be successful.” There – I admit it.

I offer this anecdote, because I will be making a bold technology prediction in this note, and you should be aware of my bumpy track record. I’ve made mistakes in predictive judgment the size of our galaxy, and such whoppers give me pause when I stand at home plate in the Bronx, pointing to the centerfield fence. So, consider yourself warned, and if you are sufficiently spooked by my deep insights into Twitter, then you’d better quit reading here.

For the rest of you, I have a prediction: Security for autonomous machines will be the Next Big Thing in our industry, and it will grow into a multi-billion-dollar market. What I mean by autonomous machines, by the way, are robots, cars, and other physical entities that are programmed to control themselves without need for human intervention. (I tip-toed around the term autonomous systems, because the Internet routing community already owns that one.)

Security for autonomous machines will be technically challenging and fundamentally different from today’s cyber security methods. Success factors will include the usual effectiveness and cost concerns, but a new source selection issue will emerge: Autonomous machines will be judged and selected based in part on the ethical framework of their manufacturers. Stated simply, you’ll want to know what sort of upbringing your robot had.

I use these anthropomorphic concepts with trepidation, if only because the great Edsger Dijkstra reportedly fined graduate students a dollar each time they anthropomorphized a computer. But the analogy seems to fit – and the idea that an ethical framework should guide the design decisions of a manufacturer also seems natural. So, I hope you'll forgive my use of humanoid references to non-living, non-breathing devices constructed of electronics.

I’ve recently developed a criteria framework for manufacturers to guide security and ethical design decisions for autonomous machines. Francis Cianfrocca, now with Fortinet, was a co-conspirator in this work. Reviewers from companies such as AT&T were also instrumental in adjusting the framework into something more useful than I’d originally envisioned, and I had excellent interactions with the security team from UL, who offered great input.

You can download the framework at https://www.tag-cyber.com/images/uploads/cyberexp/Cyber_Security_Framework_for_Autonomous_Machines.pdf, where it is offered for your free use. It is written in a NIST 800-53 style, so you can cut-and-paste it into any NIST-type assessment. I hope you will use the guide to program cyber security ethics into your new autonomous machines. For example, you must decide whether your intelligent machine should ever hack into its environment – and it’s easy to create cases where such action would be warranted (or not).
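To make that hack-back decision concrete, here is a rough sketch of what one such control might look like in machine-readable form. The control identifier (AM-7), the field names, and the wording are my own illustration, offered as assumptions for the sake of example, not text taken from the framework document itself.

```python
# A minimal, hypothetical sketch of a NIST 800-53-style control entry for an
# autonomous machine, expressed as Python data. The control ID (AM-7), the
# field names, and the policy wording are illustrative assumptions, not taken
# from the actual framework PDF.

from dataclasses import dataclass, field
from typing import List


@dataclass
class EthicsControl:
    control_id: str                     # family/number pair, in 800-53 style
    title: str
    statement: str                      # the normative requirement
    assessment_questions: List[str] = field(default_factory=list)


# Example: should the machine ever "hack into" its environment?
OFFENSIVE_ACTION_CONTROL = EthicsControl(
    control_id="AM-7",  # hypothetical identifier
    title="Offensive Interaction with the Operating Environment",
    statement=(
        "The manufacturer documents whether the autonomous machine is permitted "
        "to probe, exploit, or otherwise act offensively against systems in its "
        "environment, and under what conditions such action is authorized."
    ),
    assessment_questions=[
        "Is offensive action disabled by default?",
        "Who can authorize an exception, and is that authorization logged?",
        "Are the conditions for warranted action documented and reviewable?",
    ],
)


def print_assessment_worksheet(control: EthicsControl) -> None:
    """Render the control as a simple worksheet for a NIST-type assessment."""
    print(f"{control.control_id}  {control.title}")
    print(f"  Statement: {control.statement}")
    for question in control.assessment_questions:
        print(f"  [ ] {question}")


if __name__ == "__main__":
    print_assessment_worksheet(OFFENSIVE_ACTION_CONTROL)
```

The point is not these particular fields, but that the ethical decision (may the machine ever act offensively, and on whose authority?) becomes an explicit, assessable control rather than an unstated design default.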

The specifics of the framework are described in the paper, so I’ll not go into much detail here. I’ve also included two sample assessments for real products from Dyson and Neato Robotics. I did not ask permission from either company, so I hope they are not angry with me. (Both companies are doing interesting work in autonomous machines, by the way – especially Dyson, which makes vacuum cleaners and is considering building . . . uh, cars.)

Please let me know what you think of my new cyber security framework for autonomous machines, and whether it is helpful to your work. I certainly hope so – and perhaps you might use that little Twitter service from Jack Dorsey (that I said would never succeed) to tell others about the new framework. Such advertising of my work would be poetic justice, I guess, for me, the great prognosticator.

Hope the framework helps you!