Never has network security been more important than it is today. Fast-moving worldwide political developments, an extended economic downturn and companies' growing reliance on their networks have combined to place an increasingly large premium on security. In this climate, the viability and vitality of businesses and organizations are only as secure as the networks they use.
The road to developing highly secure networks begins with the selection of system components: after all, each component represents a potential door to hostile attack. Yet a major transformation in network equipment design has occurred recently, and its implications for network security have gone largely unnoticed.
Over the last few years, field-programmable gate arrays (FPGAs) have quietly assumed an increasingly pivotal role in network equipment design. Not long ago, these programmable components were primarily used in glue logic applications. Limited to relatively small densities, FPGAs were typically employed to interface between application-specific standard parts and custom application-specific integrated circuits (ASICs).
However, as silicon manufacturers have raced down the process path to deep submicron technologies and FPGA densities and clock rates have skyrocketed, the role of these programmable components has changed radically. With mask costs exploding and chip designers facing formidable I/O bond pad limitations, the number of ASIC designs has dropped dramatically. Today, system designers use faster, denser and more cost-effective FPGAs to perform many of the key system functions historically relegated to ASICs.
Unfortunately, few designers of communications equipment understand the security implications of the FPGAs they select. Today, designers can use FPGAs based on any of three very different technologies.
SRAM-based FPGAs represent the largest proportion of the market. However, they are also the least secure of the FPGA architectures currently in use. Based on a volatile memory technology, these devices must be initialized, or configured, on power-up. Typically, the FPGA is initialized by loading a bit stream from a PROM or by receiving one from an on-board microcontroller. While this attribute makes SRAM-based FPGAs easily reprogrammable, it also presents major security risks, such as denial of service, cloning, reverse engineering and overbuilding. Anyone seeking to replicate the intellectual property (IP) in such a design can do so by simply intercepting the bit stream from the processor or copying the boot PROM.
FPGAs based on alternative nonvolatile technologies, such as antifuse or flash, present a more secure solution. Unlike SRAM-based FPGAs, these nonvolatile devices do not require a bit stream on system power-up. Instead, they can be configured before they are shipped to the end user.
The one-time programmable antifuse architecture presents significant obstacles to any pirate attempting a classic reverse-engineering strategy. Antifuse FPGAs use a small piece of dielectric, usually smaller than 1 micron square, as an open switch between two metal lines. To physically identify a connection between two metal lines, a pirate must deprocess or cross-section the device and evaluate each link with a scanning electron microscope. This method requires an extremely time-consuming trial-and-error process just to locate a single link. Moreover, given that a typical antifuse device features an extremely large number of switch elements, often numbering in the millions, reverse engineering a design is usually prohibitively costly and time-consuming. Finally, the links are so small and fragile that a pirate runs the risk of destroying them when delayering or cross-sectioning the chip.
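A back-of-the-envelope estimate gives a feel for the scale of the problem. Both figures below are illustrative assumptions, not measured or vendor-supplied data; the link count simply reflects the "often numbering in the millions" observation above:

```python
# Rough estimate of the effort needed to map every antifuse link
# in a device via SEM inspection. Both figures are assumptions.
LINKS = 2_000_000        # switch elements ("often numbering in the millions")
MINUTES_PER_LINK = 30    # assumed inspection time per candidate link

total_hours = LINKS * MINUTES_PER_LINK / 60   # 1,000,000 hours
total_years = total_hours / (24 * 365)
print(f"roughly {total_years:,.0f} years of continuous microscope time")
```

Even with far more optimistic per-link figures, the effort remains measured in decades, which is why this attack is considered economically impractical.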
For applications that demand a higher level of security than SRAM-based FPGAs can provide, but that must also support reprogrammability for hardware upgrades, flash-based FPGAs offer a highly secure alternative. Like antifuse FPGAs, flash-based devices are intrinsically nonvolatile and can be configured before they are shipped from the factory. Moreover, once programmed, a flash-based FPGA remains programmed until the user changes it. Since the device does not require an external bit stream, there is nothing that a thief can easily copy.
The latest flash-based FPGAs now offer security features that make a noninvasive attack even less likely to succeed. A user cannot read or alter the contents of the device without access to a user key that, in today's devices, is up to 263 bits long. A brute-force attack would require the thief to try each possible key in succession, and since keys cannot be loaded via the JTAG port at a rate faster than 20 MHz, such an attack could take billions of years. Nor is it possible to trick a flash-based device into a test mode by varying programming voltages or sending a specific code sequence.
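The arithmetic behind that claim is straightforward. Using only the figures in the text (a 263-bit key and a 20-MHz JTAG port), and optimistically crediting the attacker with one full key attempt per clock cycle:

```python
# Upper bound on brute-force search time for the configuration key,
# using the figures cited in the text. Assuming one key attempt per
# JTAG clock cycle overstates any real attacker's speed.
KEY_BITS = 263
ATTEMPTS_PER_SECOND = 20_000_000   # 20-MHz JTAG, one try per cycle

keyspace = 2 ** KEY_BITS
seconds = keyspace / ATTEMPTS_PER_SECOND
years = seconds / (60 * 60 * 24 * 365)
print(f"about 10^{len(str(int(years))) - 1} years to exhaust the keyspace")
```

The result is on the order of 10^64 years, so "billions of years" is, if anything, a vast understatement.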
Much like antifuse-based FPGAs, flash-based devices are also highly secure from invasive attacks because the programming elements in the device are so difficult to read. Decapping a flash-based FPGA reveals only the structure of the device, not its contents. Flash-based FPGAs use switches to connect and disconnect intersecting metal lines; a single floating gate is charged or discharged to set the state of the switch that connects them. Since programming causes no physical change in the switch device, there is nothing to detect by material analysis.

To reverse engineer such a device, an engineer would have to determine whether a charge is present on each configuration transistor's floating gate. That process would require extremely sophisticated equipment and considerable time. And even if a thief succeeded in determining the overall transistor layout and the locations of the programmed transistors on the chip, he or she would still have to translate that pattern back into a configuration bit stream that could be used to program another part and so clone the design. Full reverse engineering would be more difficult still, requiring the engineer to map the bit pattern onto the physical structure of the device to generate a schematic of the part.
Designers building communications systems today face two distinct security threats. The first is IP theft. IP represents the competitive advantage companies derive from developing highly complex, proprietary designs. Companies that suffer IP theft face loss of profits and, ultimately, loss of market share. Today, more often than not, the key IP that differentiates a system from competitive offerings is housed in programmable logic.
The scope of this problem has grown dramatically in recent years. The International Anti-Counterfeiting Coalition estimates that U.S. companies lose hundreds of billions of dollars each year to worldwide copyright, trademark and trade secret infringements.
One of the most common strategies for stealing IP embedded in FPGAs is run-on fraud. Typically, this strategy is used by unscrupulous assembly houses that overbuild a design and then earn additional income by selling the extra components to gray-market importers. The additional devices typically end up in finished products that are indistinguishable from the originals. One way to limit exposure to this ploy is to use a secure programmable-logic technology, perform all programming in-house and supply only programmed devices to the contract manufacturer.
Another common way IP is stolen is through reverse engineering or cloning. Reverse engineering strategies have grown increasingly elaborate over the years. A thief copies a design by essentially reconstructing a schematic-level representation from the original physical device. This allows the thief to discover how the design works and in some cases improve on its performance. In some sophisticated cases, a thief will employ lasers or focused ion beams to attack a particular part of a chip or use chemicals to etch back the silicon layers of a chip.
Cloning is the simple copying of a design. Typically, the thief does not know how the design functions, but merely gains access to its details. SRAM-based FPGAs are particularly susceptible to this threat. By copying the boot PROM or intercepting the configuration bit stream from the onboard processor, a thief can easily recreate the design without intimate knowledge of how it functions.
The second major security risk in communications systems is data security. Historically, this has been primarily limited to military and financial applications. But as companies ship more and more of their confidential corporate data across networks, it has become a major concern in consumer applications as well.
Most companies go to great efforts to protect the front door of their networks by implementing elaborate firewalls and other security measures. What many designers do not understand is that FPGAs represent a potentially vulnerable back door to their secure networks.
The classic strategy used to protect data in networks is to employ highly sophisticated encryption techniques. The Data Encryption Standard (DES), developed by IBM and the U.S. government in the 1970s, uses a 56-bit private-key encryption algorithm to protect data. Virtually all leading FPGA vendors offer encryption cores to implement DES in their products, but critics have long questioned whether DES provides an adequate level of security, and companies have demonstrated how easily the algorithm can be cracked. The U.S. government is currently replacing DES through an effort called the Advanced Encryption Standard.
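A quick calculation shows why a 56-bit keyspace is no longer considered adequate. The search rate below is an illustrative assumption, roughly in the range of the dedicated brute-force hardware demonstrated in the late 1990s:

```python
# Why a 56-bit key is considered weak: the keyspace is small enough
# for dedicated hardware to exhaust. KEYS_PER_SECOND is an assumed,
# illustrative rate, not a figure for any specific machine.
DES_KEY_BITS = 56
KEYS_PER_SECOND = 90_000_000_000   # assumed dedicated-hardware search rate

keyspace = 2 ** DES_KEY_BITS                         # about 7.2e16 keys
worst_case_days = keyspace / KEYS_PER_SECOND / (60 * 60 * 24)
print(f"worst-case exhaustive search: about {worst_case_days:.0f} days")
```

At that rate the entire keyspace falls in under two weeks, and on average the key is found in half that time. Contrast this with the 263-bit configuration key discussed earlier, whose keyspace is larger by a factor of 2^207.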
Systems using reconfigurable FPGAs are highly susceptible to attempts to circumvent encryption efforts. For instance, by intercepting a bit stream in an SRAM-based FPGA a thief could defeat an encryption mechanism. In a worst-case scenario, that information could later be used to defeat future encryption efforts. Systems using reprogrammable flash-based FPGAs present a much more secure solution.
While the theft of data is a major concern, the most common form of attack on data security is denial of service. In these attacks, the pirate does not seek to steal data but to deny legitimate users access to the network. News reports repeatedly describe how hackers shut down network services by flooding a network with messages.
Networks that rely on SRAM-based FPGAs to upgrade system hardware are particularly vulnerable to denial-of-service attacks. By gaining access to the FPGA's bit stream, an attacker could corrupt the FPGA and bring down the network. A more sophisticated hacker could create even greater problems by reprogramming the FPGA to take control of the hardware. For example, in a telecommunications network a hacker could alter the billing algorithms so that selected customers are not charged, or attackers could place a virus into the FPGA and then disseminate it across the network. In each of these cases, systems using SRAM-based FPGAs to upgrade hardware are significantly more vulnerable to attack because the bit stream used to initialize the device is unprotected.
Given the geopolitical and economic conditions prevalent today, network security concerns are more important than ever. Yet few network equipment designers are aware of the security implications of the parts they use in their systems. Carefully choosing the right FPGA can have a major impact on the designer's ability to protect valuable IP as well as ensure the integrity of the data in the system.
Jon Ewald is director of product marketing at Actel Corp. (Sunnyvale, Calif.).