Poor design security: the consequences
Engineers developing high-security, high-reliability products face a number of threats if they fail to preserve design security or establish a root of trust in their systems, including data theft, data tampering, and alteration of system operation.
FPGAs based on static random access memory (SRAM) are especially vulnerable to IP theft and tampering.1,2
Because these devices cannot hold state when power is removed, a programming bitstream must be read into the device at start-up. It is a simple matter to intercept this bitstream and clone the design: a pirate copies the board design, captures the configuration bitstream of the SRAM FPGA, and then builds an exact replica of the system without having to understand any of the details of the logic contained in the FPGA.
There are far-reaching consequences when design security is insufficient. Adversaries can potentially compromise the data or the functionality of the system itself. Interference is much easier if the attacker can determine how the circuits have been implemented, which can be done by simply reading the configuration bitstream and using FPGA design tools to reconstruct the netlist.
Design protection mechanisms
In an attempt to prevent the design from being reverse engineered from the all-too-easily available bitstream, some SRAM-based FPGAs include a cryptographic engine to decrypt the incoming bitstream.3
In these cases the bitstream is stored in the external memory in encrypted form instead of in plain text. To make effective use of bitstream encryption, at least the decryption keys stored in the FPGA itself must be nonvolatile so the encrypted bitstream can be reloaded each time power is applied. Some SRAM-based FPGAs accomplish this by keeping the SRAM key-storage cells continuously powered from an external battery, but this proves unsatisfactory in many ways. The better SRAM FPGAs today include a small amount of nonvolatile decryption-key storage using antifuse technology. An attacker may, however, attempt to recover the encryption key directly via side-channel attacks and then use it to retrieve the unencrypted form of the programming file.
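The boot flow described above can be sketched in a few lines. This is a toy model only: the hash-based XOR keystream below is a deliberately simplified stand-in for the FPGA's real AES engine, and the key, nonce, and bitstream contents are invented for illustration. The point is where encryption sits in the flow: external memory holds only ciphertext, while the key never leaves the device.

```python
# Toy model of the encrypted SRAM-FPGA boot flow. The "cipher" is a
# hash-derived XOR keystream, a stand-in for the device's AES engine.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream of the requested length."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_bitstream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

decrypt_bitstream = encrypt_bitstream  # XOR keystream is symmetric

device_key = b"\x01" * 16                    # nonvolatile key inside the FPGA
nonce = b"\x00" * 8
plaintext = b"FPGA configuration frames..."  # the design itself

# External flash holds only the encrypted form of the bitstream.
external_flash = encrypt_bitstream(device_key, nonce, plaintext)

# At power-up the device streams the ciphertext in and decrypts internally.
loaded = decrypt_bitstream(device_key, nonce, external_flash)
assert loaded == plaintext
assert external_flash != plaintext  # an interceptor sees only ciphertext
```

An attacker tapping the board traces at start-up captures `external_flash`, which is useless without the on-chip key; this is exactly why side-channel recovery of the key becomes the attractive target.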
Antifuse and flash-based FPGAs are much harder to reverse engineer because, once the device is programmed, the programming information is contained within the die. There is no bitstream to read at boot time, so the only way to recover the programming bitstream is to request it from the device itself; if the FPGA is not set to write out its bitstream, even this method cannot be used.
Nonvolatile FPGAs can use antifuse or flash technology to take advantage of these inherently secure attributes. On top of this, a number of additional FPGA techniques can be used to maximize design security where it is needed.
Design security architecture
In FPGAs designed for security, flash memory cells can be used to permanently store all security settings and keys in the security segment (see figure 1). A number of keys can be used to provide increasing levels of design security based on the requirements of the application.
Figure 1: Flash memory (blue) provides a permanent repository to store all security settings and keys in the security segment.
To add another layer of security, designs can include a passcode that must be matched before any security settings or keys can be changed. A 128-bit AES decryption key is used to decrypt and authenticate incoming encrypted configuration data, enabling secure in-field updates. This is particularly important for military and aerospace embedded systems that may be designed for lifetimes of multiple decades.
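The two checks in this scheme (passcode match before changes are allowed, and authentication of the incoming configuration data) can be sketched as follows. This is an illustrative model, not a vendor's actual protocol: HMAC-SHA-256 stands in for the device's AES-based authentication, and the passcode, key, and function names are all invented for the example.

```python
# Sketch of a gated, authenticated configuration update. HMAC-SHA-256
# stands in for the device's AES-based authentication mechanism.
import hashlib
import hmac

# Values burned into the device's security segment (hypothetical).
DEVICE_PASSCODE_HASH = hashlib.sha256(b"factory-passcode").digest()
DEVICE_KEY = b"\x2a" * 16  # 128-bit key (stand-in for the AES key)

def accept_update(passcode: bytes, payload: bytes, tag: bytes) -> bool:
    """Apply a configuration update only if both gates pass."""
    # Gate 1: security settings/keys may change only if the passcode matches.
    offered = hashlib.sha256(passcode).digest()
    if not hmac.compare_digest(offered, DEVICE_PASSCODE_HASH):
        return False
    # Gate 2: authenticate the configuration data before applying it.
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

payload = b"encrypted configuration frames"
good_tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()

assert accept_update(b"factory-passcode", payload, good_tag)       # accepted
assert not accept_update(b"wrong-passcode", payload, good_tag)     # rejected
assert not accept_update(b"factory-passcode", payload, b"\x00" * 32)  # tampered
```

Using `hmac.compare_digest` for both comparisons avoids timing side channels in the check itself, which matters in exactly the threat model this article describes.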
Additional security lock bits can be implemented to control which parts of the FPGA are writable and whether a passcode is required for access, among other things. This combination of settings provides a number of options to support different design security use models.
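A set of lock bits like this is naturally modeled as a bit field. The flag names and policy below are hypothetical, chosen only to show how a handful of lock bits can encode the write-control options the text describes.

```python
# Hypothetical lock-bit register for the security segment; the bit
# names and the write policy are illustrative, not a vendor's layout.
from enum import IntFlag

class LockBits(IntFlag):
    ARRAY_WRITE_DISABLE = 0x01  # FPGA array segment may not be rewritten
    KEY_WRITE_DISABLE   = 0x02  # keys in the security segment are frozen
    PASSCODE_REQUIRED   = 0x04  # writes require a matching passcode
    READBACK_DISABLE    = 0x08  # bitstream readout is blocked

def array_write_allowed(locks: LockBits, passcode_ok: bool) -> bool:
    """Decide whether a write to the FPGA array is permitted."""
    if LockBits.ARRAY_WRITE_DISABLE in locks:
        return False  # permanently locked, regardless of passcode
    if LockBits.PASSCODE_REQUIRED in locks and not passcode_ok:
        return False  # updatable, but only with the passcode
    return True

# A common field-update posture: updatable with passcode, no readback.
field_update_locks = LockBits.PASSCODE_REQUIRED | LockBits.READBACK_DISABLE
assert array_write_allowed(field_update_locks, passcode_ok=True)
assert not array_write_allowed(field_update_locks, passcode_ok=False)
assert not array_write_allowed(LockBits.ARRAY_WRITE_DISABLE, passcode_ok=True)
```

Different combinations of these bits correspond to the different use models mentioned above, from fully open development parts to permanently locked production devices.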