High-performance, high-capacity FPGAs continue to experience an exponential growth in usage, both in their role as prototypes for ASIC/SoC designs and as systems in their own right. These designs typically involve complex combinations of hardware and embedded software (and also, possibly, application software). This is resulting in a verification crisis, because detecting, isolating, debugging, and correcting bugs now consumes significantly more time, money, and engineering resources than creating the design in the first place.
The problem is that bugs in this class of design can be buried deep in the system, and they may manifest themselves in non-deterministic ways based on complex and unexpected interactions between the hardware and the software. Simply detecting these bugs can require extremely long test sequences. Once a problem is detected, actually debugging the design requires a significant amount of time and effort. Furthermore, when verification tests are performed using real-world data such as a live video stream from a digital camera, an intermittent bug may be difficult (if not impossible) to replicate.
There are a variety of verification options available to engineers, including software simulation, hardware simulation acceleration, hardware emulation, and FPGA-based prototypes. Each approach has its advantages and disadvantages (Table 1). RTL simulators, for example, are relatively inexpensive, but full-system verification performed using this approach is extremely slow. One major advantage of software simulation is visibility into the design. Having said this, as more signals are monitored and their values captured, the simulation slows still further.
Table 1. Comparison of conventional verification technologies.
At the other end of the performance curve are FPGAs, which offer a significant advantage with regard to their ability to run at real-time hardware speeds. In the case of ASIC/SoC designs, FPGA-based prototypes are also relatively inexpensive as compared to hardware acceleration and emulation solutions. Until now, however, FPGAs have suffered from a lack of visibility into their internal state and signals.
This paper first provides an overview of the various conventional verification options available to designers and summarizes the advantages and disadvantages of these different techniques. The paper then introduces an innovative, patented technology called TotalRecall, which provides 100% visibility into an FPGA – including the registers, combinational logic, and memory blocks – while allowing the FPGA to run at full real-time hardware speeds. When a bug is detected, the data from the TotalRecall technology can be used to initialize a software simulator with the state of the design hundreds or thousands of cycles before the bug occurred, and also to provide the test sequence that will take the design from this initial state to the bug.
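The TotalRecall implementation itself is proprietary and not described here, but the record-and-replay idea it embodies can be illustrated abstractly. The sketch below (in Python, with a toy one-register "design" and hypothetical names such as `TraceBuffer` and `dump_for_replay` chosen purely for illustration) models a circular history buffer that retains the last N cycles of state and stimulus; when a bug is detected, the oldest retained snapshot initializes a "simulator," and replaying the recorded inputs reproduces the failing state without rerunning from cycle 0.

```python
from collections import deque

class TraceBuffer:
    """Conceptual model of an on-chip history buffer.

    Each cycle it snapshots the design's state and the inputs applied,
    keeping only the most recent `depth` cycles (a circular buffer).
    """
    def __init__(self, depth):
        self.history = deque(maxlen=depth)  # (cycle, pre-step state, inputs)

    def record(self, cycle, state, inputs):
        self.history.append((cycle, dict(state), inputs))

    def dump_for_replay(self):
        """On bug detection: return the oldest retained snapshot (used to
        initialize the simulator) plus the input sequence that drives the
        design from that state forward to the failing cycle."""
        cycle0, state0, _ = self.history[0]
        stimulus = [inp for _, _, inp in self.history]
        return cycle0, state0, stimulus

# Toy "design": an 8-bit accumulator; the "bug" is an unexpected wrap to 0.
def step(state, x):
    return {"acc": (state["acc"] + x) & 0xFF}

buf = TraceBuffer(depth=8)          # retain only the last 8 cycles
state = {"acc": 0}
start_cycle = init_state = stimulus = None
for cycle in range(1000):
    x = 16                          # simplified, constant stimulus
    buf.record(cycle, state, x)     # snapshot BEFORE the clock edge
    state = step(state, x)
    if cycle > 0 and state["acc"] == 0:   # bug detected at this cycle
        start_cycle, init_state, stimulus = buf.dump_for_replay()
        break

# "Simulator" replay: initialize from the retained snapshot, reapply the
# recorded inputs, and arrive at exactly the failing state.
replayed = dict(init_state)
for x in stimulus:
    replayed = step(replayed, x)
assert replayed == state
```

In this toy run the failure occurs at cycle 15, yet the buffer needs to hold only the last 8 cycles: the simulator starts from the cycle-8 snapshot rather than from reset, which is the essence of the approach described above.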
Overview of conventional verification options
As an introductory example, let's start by considering the performance of a variety of software simulation techniques as compared to an FPGA-based prototype. This particular example involves the booting of a real-world cell phone design (Fig 1).
1. FPGA-based prototypes offer an extreme performance advantage over various software simulation techniques.
In addition to requiring a testbench, even a high-capacity, high-performance RTL simulator took 30 days to boot the system. Similarly, a traditional hardware/software co-verification environment using an instruction set simulator (ISS) – which also required a testbench – took 10 days to boot the system. Meanwhile, a C/C++ simulation of the system brought the boot time down to 24 hours, but this form of verification provided only limited visibility into the internal workings of the system.
By comparison, an in-system FPGA booted the system in only three seconds. This means that the FPGA-based environment can be used to verify the system running under real-time workloads, and that it can serve as a platform for embedded and application software developers to integrate and verify their code in the context of the real system. The main problem with the FPGA – when used in a traditional verification environment – is lack of visibility with regard to its internal signals and state, including the contents of any memories. (Source: Applied Formal Verification, Douglas L. Perry, Harry D. Foster, page 34, Table 3.4. Copyright 2005 McGraw Hill.)
As we previously noted, software simulators are relatively inexpensive, but full-system verification performed using this approach is extremely slow. At the other end of the performance curve are FPGA-based prototypes, which are also relatively inexpensive and are very fast. In between we find hardware-accelerated simulation and emulation, which are much faster than software simulation but far slower than FPGA-based verification, and far more expensive than both. The end result is that FPGA-based prototypes give the best price/performance by far (Fig 2).
2. FPGA-based prototypes give the best price/performance by far.