I recently became aware of a slightly mischievous paper written by one Sandy Macadam, an obvious pseudonym. The anonymous paper makes the reasonable case that implementing functionality in field programmable gate arrays (FPGAs) is an effective means of improving the security of a platform. Below is my outline of the paper, developed to support my own learning. I hope my notes are useful to you.
Fact: Preventing cyber threats requires trusted platforms that cannot be compromised. Reliance on well-designed hardware is a useful means to this end, because its functionality can be contained and minimized. But hardware is less flexible and more expensive to maintain than software, which benefits from the familiar Turing-type programmability that enables modern computing.
A non-Turing solution dubbed hardsec makes the case that hardware and software have their respective strengths, but that the benefits of hardware for cyber security have been overlooked. Specifically, hardsec proposes use of field programmable gate arrays (FPGAs) to gain the protection advantages of constrainable hardware, while preserving the programmable flexibility of software.
The primary weakness of any Turing-type system is also its greatest strength. That is, the three-legged stool of CPU, memory (OK, it was originally a tape), and input/output in a Turing machine permits both desired and undesired programs to run. Viewed in this manner, high assurance computing is nothing more than constraining possible executions to ones that are considered expected and acceptable.
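The three components named above can be made concrete with a toy simulator. This is a minimal sketch of my own, not anything from the paper; the rule table shown (which simply inverts a binary string) is hypothetical, chosen only to show that the same control loop would happily run any rule table, desired or undesired.

```python
# Minimal Turing-machine sketch showing the three-legged stool:
# a control unit (the "CPU"), a tape (memory), and input/output.
def run_tm(tape, rules, state="start", head=0, max_steps=100):
    cells = dict(enumerate(tape))          # memory: the tape, as a sparse dict
    for _ in range(max_steps):             # control unit: apply rules step by step
        symbol = cells.get(head, "_")      # "_" stands for a blank cell
        if (state, symbol) not in rules:   # halt when no rule applies
            break
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))  # output: the final tape

# A hypothetical rule table that inverts a binary string, then halts on blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run_tm("1011", flip))  # -> 0100
```

Note that nothing in `run_tm` constrains which rule tables it will execute; that generality is exactly the property hardsec wants to remove.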
Achieving high assurance is not easy: The Dijkstra-approved process of verifying software using formal methods has been tedious and unpopular. And the layered support structure of modern computing architectures makes it impossible to verify every level of abstraction in the implementation of a non-trivial system. That is, even if your software is perfect, the OS might not be; and even if the OS is perfect, the CPU might not be; and so on.
Hardsec proposes a new method to achieve high assurance computing. Namely, it attempts to minimize the consequences of an attack by reducing the range of functions that can be implemented on the targeted hardware. Specifically, by executing only limited functions on a non-Turing machine implementation, malicious intruders have access to a weaker and more narrow set of targeted computing functions for mischief.
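One way to picture this narrowing is a deterministic finite automaton standing in for the non-Turing machine. The sketch below is mine, with hypothetical command names; the point is that the device can only accept or reject tokens against a fixed table, so there is no general-purpose store or loop for an intruder to repurpose.

```python
# A non-Turing "machine" as a DFA: a fixed transition table, nothing more.
# States and command names below are illustrative, not from the paper.
ACCEPTING = {"accept"}
TABLE = {  # (state, input token) -> next state; anything else rejects
    ("start", "STATUS"): "accept",
    ("start", "READ"): "need_addr",
    ("need_addr", "ADDR"): "accept",
}

def accepts(tokens):
    """Return True only if the token sequence follows the fixed grammar."""
    state = "start"
    for tok in tokens:
        state = TABLE.get((state, tok))
        if state is None:      # no transition defined: reject immediately
            return False
    return state in ACCEPTING

print(accepts(["STATUS"]))         # True
print(accepts(["READ", "ADDR"]))   # True
print(accepts(["FORMAT_DISK"]))    # False: not in the machine's repertoire
```

An attacker who fully controls the input to this machine still cannot make it do anything outside the three transitions in the table.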
Theorists have studied these types of computing questions for many years. The foundations of automata theory established in the 1960’s (including contributions from this author’s father such as the following cool paper) defined the languages and expressions computable by different machine types. So, this is hardly a new area of theoretical computer science. It is instead a mature aspect of our discipline.
Hardsec makes the case that if a platform must support some set of desired computing functions, then it is good cyber security practice to minimize that set. In other words, the security designer should not expose powerful functions merely to reduce cost through a one-size-fits-all implementation. Rather, minimizing the functional power of the trusted computing base is the superior approach. #OrangeBook.
Modern powerful processors based on x86 and ARM, for example, do try to address functional minimization through features such as ring protection. But these commercial processors are inflexible when problems are found. Hardware patches are inconvenient and sometimes impossible. So, relying on well-designed processors is not enough for high assurance security.
Hardsec suggests use of FPGAs. The goal is to create non-Turing platforms that cannot be reprogrammed by intruders to cause cyber threats. In case you’ve been away from your computer architecture text for some time, recall that FPGAs are integrated circuits that support re-programming post-manufacture. A big security advantage is that such re-programming is inconvenient, generally requiring physical access.
Herein lies the hardsec thesis: By deploying security onto an FPGA implementation, security engineers can update and reprogram platforms if vulnerabilities are found. But intruders who exploit a given platform constructed on FPGAs will find that deployment of malware and other exploits cannot reprogram the underlying system. Such attacks will find an unwilling underlying execution environment.
Hardsec acknowledges that cryptographers have long recognized the importance of minimizing underlying functionality, often through use of FPGAs. Government-grade crypto devices are designed in this manner. But what's novel in the hardsec solution is the author’s proposal that FPGA-based implementations are more generally useful to avoid cyber threats. It is not just cryptographers who can benefit from the minimized functionality.
The author – er, Sandy Macadam – acknowledges drawbacks to programming hardware, but suggests that hardware description languages (HDLs) can help. Work cited from the UK government (hmmm, perhaps Macadam is a Brit) offers hints about verifying data being input to hardware. The author also argues that verification logic is possible for deterministic finite automata using state transition tables. (Perhaps esoteric, but interesting.)
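The claim about transition tables deserves a moment's illustration: because a DFA is just a finite table, checking it is exhaustive enumeration rather than theorem proving. Here is a minimal sketch of that idea, with an illustrative machine of my own invention; the paper does not supply this example.

```python
from itertools import product

# Illustrative DFA: states, alphabet, and table are hypothetical.
STATES = {"start", "ok", "err"}
ALPHABET = {"0", "1"}
TABLE = {
    ("start", "0"): "ok",  ("start", "1"): "err",
    ("ok", "0"): "ok",     ("ok", "1"): "err",
    ("err", "0"): "err",   ("err", "1"): "err",
}

def verify(table, states, alphabet):
    """Check the table is total (every pair covered) and closed (known targets)."""
    missing = set(product(states, alphabet)) - set(table)
    bad_targets = {k for k, v in table.items() if v not in states}
    return not missing and not bad_targets

print(verify(TABLE, STATES, ALPHABET))  # True
```

Contrast this with verifying an arbitrary program: for a DFA the check terminates after |states| × |alphabet| lookups, which is why exhaustive verification is tractable here and hopeless for a Turing-complete system.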
A final argument is that software exploits can re-program interfaces such as APIs between modules. With hardware, however, a software exploit has zero likelihood of physically rewiring the interfaces between components. This argument might seem old-fashioned, perhaps even clumsy – but it is nevertheless sound: I cannot find any logical problem with the claim that software exploits cannot move physical wires.
And so, the paper concludes with the tacit recognition that computing can and should continue to be built on the most powerful and flexible underlying computational models. But for cyber security functions, the more constrained solutions inherent in an FPGA-based system might be an excellent way to reduce risk. Regardless of whether you agree, this is an interesting theoretical point, one that deserves debate.
Implications: If you accept hardsec, then you should demand that your security vendor provide justification that their underlying hardware exactly matches its obligation, and nothing more. This increases the scrutiny required of the hardware base, and it might also increase the cost of the platform, so that trade-off must be addressed as well. Ultimately, the goal should be to avoid the massive attack surface introduced by a generally programmable base.
Read the original paper. And then let me know what you think.