In a world in which attacks on electronic systems can be conducted remotely, security constitutes a vital component of system design. Even systems that do not store confidential data now have to be designed with security in mind, to prevent their core intellectual property (IP) from being copied and reused illegally. These two concerns illustrate the two elements of electronic system security: design security and data security. Increasingly, each depends on the other.
Design security focuses on ensuring that the core design is protected and that the security intent of the IP's owner is honored at all times. The design must not be subverted or changed without the owner's permission. The design must not be revealed to those who lack authorized access to that information. Finally, the design must not be used in other systems without authorization.
Data security involves security applications that the system may run to protect and authenticate data. In a tactical military radio, for example, the objective is to preserve the confidentiality of messages through the use of encryption. There may be secondary objectives such as checking that a received message came from the source claiming to have sent it and that it was not altered by a third party on the way. A data security application may be used to ensure that users do not have access to material for which they are not cleared, or to prevent counterfeit material from being used in place of authenticated content.
In all data-security applications, cryptography plays a key role, and cryptography depends on secret keys and other sensitive data. The device must protect those secrets. It is therefore practically impossible to have strong data security if the system's design security is weak or has been compromised: without effective design security, there can be no data security.
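To make the dependence concrete, consider authenticated encryption, which provides both confidentiality and integrity from a single secret key; if a design-security failure exposes that key, both guarantees collapse at once. The following is a minimal sketch in Python, assuming the third-party cryptography package is available; the message, key handling, and associated data are illustrative only.

```python
# A minimal sketch of authenticated encryption (AES-GCM) using the
# third-party "cryptography" package. The key stands in for the secret
# a hardware root of trust would hold; all values are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the secret the device must protect
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce, unique per message
plaintext = b"move to grid ref 38S MB 45 88"
associated = b"radio-net-7"                 # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated)

# Decryption verifies the authentication tag; any tampering with the
# ciphertext or the associated data raises InvalidTag.
recovered = aesgcm.decrypt(nonce, ciphertext, associated)
assert recovered == plaintext
```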
This conclusion is particularly important for users of programmable logic devices such as field-programmable gate arrays (FPGAs). In many cases, these devices make it possible to reprogram the functionality of a system in the field and to transfer circuitry and IP from one system to another simply by copying the configuration information. Without adequate protection against such copying, an FPGA cannot provide effective design or data security. As functionality is encapsulated in firmware and reconfigurable circuits that may be unlocked at runtime by providing a key, design security becomes increasingly important to data security.
The root of trust

The security boundary is an important concept in maintaining secure systems. In many cases, the security boundary consists of the room or building in which the secure device is located. In the case of a computer center, for example, the walls, doors, and access controls provide the security. Increasingly, though, systems have to communicate with the outside world over the internet and may not sit in a secure location at all: a would-be attacker may have ready access to the hardware.
Today, more non-computer devices than traditional computers and servers are connected to the internet. This “internet of things” demands an infrastructure that allows the devices to communicate securely, since many of them will not be protected behind a firewall. In many cases, they will be acting as firewalls themselves, providing their own security boundary.
Such systems must be resistant to tampering, which leads to the concept of the root of trust: a secure location for storing keys and performing critical computations, something every security system needs. Software, by itself, is not secure; it needs to run within a hardware root of trust to be secure. This, in turn, demands effective design security, to ensure that attackers cannot alter the hardware design to capture secrets or open information leaks from the system.
The root of trust can perform cryptographic operations that extend the trusted zone out from itself to include other elements of the system, even allowing secure communications across an untrusted network. The hardware root of trust may include a microcontroller that boots from trusted on-chip memory, for example, and then checks the digital signatures of any code that it loads from external memory. If the code in the external memory is authentic, then it can be considered part of the trust zone and executed.
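A minimal sketch of that boot-time check follows, using Ed25519 signatures from the third-party Python cryptography package. The image contents, key handling, and the commented-out run_image() hand-off are assumptions for illustration; in a real device the public key would be burned into trusted on-chip storage and the private key would never leave the code signer.

```python
# Sketch of a secure-boot check: code loaded from external memory is
# executed only if its digital signature verifies against a public key
# held inside the root of trust. Names and key handling are illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# In practice the private key stays with the code signer; only the
# public key is provisioned into trusted on-chip storage.
signer_key = Ed25519PrivateKey.generate()
trusted_public_key = signer_key.public_key()

firmware_image = b"\x7fELF example external firmware blob"
signature = signer_key.sign(firmware_image)   # produced at build time

def boot(image: bytes, sig: bytes) -> None:
    """Extend the trust zone to the image only if its signature verifies."""
    try:
        trusted_public_key.verify(sig, image)
    except InvalidSignature:
        raise SystemExit("refusing to boot: image not authentic")
    # run_image(image)  # hypothetical hand-off to the verified code

boot(firmware_image, signature)
```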
An example is the smartcard integrated circuit in a digital set-top box. The smartcard extends its trust zone to the rest of the set-top box, ensuring that only media content that has been paid for is played and making pirated content difficult to use. Another example is the mobile phone that uses a subscriber identity module (SIM) to authenticate itself to the network.
The trust zone concept is useful for more than protecting data, however. To guard against counterfeit hardware, software running within the trusted zone can interrogate other boards in a chassis by issuing challenges that only valid boards can answer correctly. This can be done reliably only if effective design security ensures that the root of trust itself cannot be compromised.
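One common form of such a challenge is a message-authentication code computed over a fresh random value with a secret that only genuine boards hold. The sketch below uses only the Python standard library; the shared secret and the board interface are illustrative assumptions, not any particular vendor's protocol.

```python
# Sketch of challenge-response board authentication using an HMAC over
# a random challenge. Standard library only; the shared secret and the
# "board" abstraction are illustrative.
import hashlib
import hmac
import secrets

SHARED_SECRET = secrets.token_bytes(32)  # provisioned into genuine boards

def board_respond(challenge: bytes) -> bytes:
    """What a valid board computes: HMAC-SHA256 over the challenge."""
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

def verify_board(respond) -> bool:
    """Issue a fresh random challenge and check the board's response."""
    challenge = secrets.token_bytes(16)
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(respond(challenge), expected)

print(verify_board(board_respond))            # True: genuine board
print(verify_board(lambda c: b"\x00" * 32))   # False: counterfeit board
```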