Functional verification is a major challenge for electronic designers today. Total system complexity is growing as more functionality is integrated to differentiate products, including analog/mixed-signal content, embedded processors, and their respective software.
With this increased integration, design size grows, verification complexity skyrockets, and the number and length of tests increase. When things go wrong, it is more difficult to find out why, and the cost of late changes or even re-spins is prohibitive.
To meet these challenges and reap the rewards of system-on-chip (SoC) design, engineering teams need a scalable verification solution that addresses all aspects of the design cycle and reduces the verification gap. This paper examines why and how a scalable verification solution that includes assertion-based techniques and improved debugging methods addresses the fundamental challenges facing design teams. The effects are improved design productivity, design quality, time-to-market, and return on investment (ROI).
The problem with existing design and verification methodologies is that verification is subservient to design. This status quo must be changed for several related reasons, especially when it comes to today's enormous and complex electronic systems. Functional errors are the leading cause of design re-spins. Functional verification, the process used to find those errors, is the biggest bottleneck in the design flow. And verification, in general, constitutes at least 50 percent of all design activity.
Yet, verification technology is falling further behind design and fabrication capability, widening the verification gap (Figure 1). This verification gap is the limiting factor to designer productivity and design possibilities. To close the verification gap, verification must become an integral part of the overall design methodology.
The whole design and verification flow must be structured, based not only on what's good for the design engineers, but also on what's good for the verification engineers. This has implications for design partitioning, block sizing, design rules, and many other things that tend to be taken for granted today.
Another challenge to successful system verification pertains to testbenches. As design size increases, verification complexity rises exponentially. While simulation capability tracks with design size, the complexity of the testbenches does not.
Part of the reason for this is the dramatic effect that size has on observability and controllability of the design. More tests need to be run, those tests are likely to be longer, and when things go wrong, it is much more difficult to find out why.
To fix both the verification gap and testbench problems, we need scalable verification solutions that include assertion-based techniques that handle multiple levels of design abstraction and tools that traverse all stages of the design flow. These functional verification strategies must target the complete system including digital hardware, embedded software, and mixed-signal content at every level of design and every phase of the design flow.
Figure 1 The verification gap leaves design potential unrealized. This means that the potential for something to go wrong is greater, and the verification task has become exponentially more complex. (Source: SIA Roadmap, 2001)
II. The functional verification crisis
The rising importance of functional verification stems from the progressive growth in design size and complexity, including the increasing proportion of software and analog content in the design mix. Increased size refers to the enormous number of transistors and, therefore, gates on an SoC.
In 2001, the International Technology Roadmap for Semiconductors predicted that by 2006 SoCs will contain a billion transistors. A single SoC already can consist of tens of millions of gates, raising the potential for errors and complicating the verification task.
Increasing complexity means more variety, and more of it on a single chip. The variety of components includes high-performance RISC CPUs, multi-gigabit high-speed I/Os, block RAM, system clock management, analog/mixed-signal blocks, embedded software, and dedicated digital signal processors (DSPs). As a result, the interfaces between these components have become increasingly significant to overall functionality and performance.
The increased presence of on-chip software and analog devices not only contributes to system complexity but also challenges the traditional ways of doing things. Digital engineers must confront unfamiliar analog issues. Many hardware designs require the firmware or low-level software to be present and functional in order to verify RTL functionality. This requires that the firmware designers take an important role in hardware design and account for the detailed interplay of hardware and software.
A look at the data from Collett International Research Inc. studies from 2001 and 2003 shows the growth in impact of these interfaces (Figure 2). In the 2001 study, 47 percent of all failures were related to logical or functional errors. However, only one interface issue appeared in the top ten causes of failure: the mixed-signal interface, which contributed to 4 percent of chip failures.
Contrast this with the 2003 data: logical and functional failures have risen to 67 percent, and three additional categories now appear. Analog flaws make up 35 percent of chip failures, number two on the list; mixed-signal interface failures have increased from 4 to 21 percent; and hardware/software interfaces account for 13 percent of failures.
In addition to this complexity, legacy designs and IP must be accommodated, as over 50 percent of both designs and testbenches are reused [Collett International Research, Inc. 2003]. Thus, any meaningful solution must support all major languages, including Verilog, VHDL, C++, and SystemC, so that it can work at all levels of abstraction. Open standards ensure that legacy designs and testbenches remain reusable and that verification tools can be chosen on their merits, not because they happen to fit a particular vendor's tool environment.
Furthermore, because observability and controllability scale inversely with design complexity, debug methodologies must overcome testbench complexity. For example, as the design doubles, observability will halve and controllability will halve, making verification approximately four times as difficult.
Figure 2 Survey trends highlight the impact of SoC integration.
As stated, to address increased design size, complexity, and performance, the verification methodology must be scalable across tools and levels of design hierarchy. It must scale across verification domains, with the ability to communicate between simulation, co-verification, emulation, and analog-digital simulation. And it must not be confined to the dynamic space, but must move freely into the static space.
For example, formal equivalence checking is a requirement for large designs with many modifications at the gate level. Finally, the methodology requires better testbench techniques to enable the creation of more effective tests.
III. Scalability across tools
The requisite solution should comprise a suite of tools that work together to form a complete path from HDL simulation to in-circuit emulation. This means better simulators and emulators to speed up the verification process at all levels of integration. Scalability across tools is necessary because various types of verification provide different solutions at different performance ranges. Each solution involves a trade-off between many different attributes, such as iteration time, performance, capacity, debug visibility, and cost.
Even HDL execution engines require a range of solutions. Some perform better at the block level, others at the chip or system level. For example, designers require high-level verification tools for verifying system-level DSP algorithms; an HDL software simulator would not do the job. Conversely, in-circuit emulation would not be an appropriate solution for verifying relatively small sub-blocks of a chip design when an HDL software simulator could quickly and easily accomplish the same task.
Figure 3 A scalable solution consists of a variety of methodologies and tools. As a whole, this continuum covers the gamut of verification completeness, visibility, and performance choices.
Recognizing which tools are optimal for the verification task at hand, and having those tools available, will yield the best productivity for the designer. The following is an example of the technologies available for digital verification of a design. (There is a similar continuum for software and analog/mixed-signal verification.)
- Software simulation is ideal for block-level verification because of very fast turnaround times and debug capabilities.
- Hardware/software co-simulation brings embedded software into the verification process and provides a means of accelerating the processor, memory, and bus operations. The embedded software itself can also serve as a testbench to verify the hardware.
- Transaction-based co-modeling provides a large variety of solutions that enable system verification. Co-modeling is ideal for linking high-level, abstract testbenches with an RTL implementation of the whole chip loaded into an emulator.
- Emulation (in-circuit emulation) provides high-capacity and high-performance verification within the real system. Emulation gives designers the confidence that their chip will function correctly in the actual system.
- Formal verification (equivalence checking) has the capacity and speed to ensure that modifications made late in the design flow do not change the intended behavior.
An important item to note is that high-performance, hardware-assisted or hardware-oriented solutions are critical to achieving verification completeness in system-level environments.
IV. Scalability across levels of abstraction
It is essential to move some aspects of functional verification forward, making it part of the initial phases of the design process. To accomplish this, verification must become more abstract with higher-level models and transactors (Figure 4).
Figure 4 Shifting abstractions enables a lower level implementation to be simulated within a higher level context.
Moving verification forward in the design flow has a number of significant advantages. Models at this stage are much faster to write, have higher throughput, and thus can constructively influence design decisions. Abstraction speeds up verification by leaving out non-relevant information, reducing development time, speeding debug, and making testbenches more reusable.
With complex SoCs, it is too time consuming and difficult to do everything at the RTL or gate level. There comes a point when more abstract representations of the design become absolutely necessary. This is not just for the design but also for the testbench.
For this multiple levels-of-abstraction strategy to work, it is not only the tools that are necessary; intellectual property (IP) is equally important. Technology that allows multi-abstraction simulation is not useful without the models that allow designers to switch between levels of abstraction and tie the levels together. Multi-abstraction solutions combine both technology and IP.
Hierarchical verification is made possible using a set of transactors for the principal interfaces of a design. This allows design descriptions at various levels of abstraction to be mixed. The transactors can be assembled into a testbench or an environment that checks that an implementation matches a higher-level model.
An advantage of this strategy is that it does not require all of the models to exist at a single level of abstraction. This flexibility allows the team to mix and match whatever is available at a given time and provide the necessary level of resolution relative to execution time.
Transaction-based interfaces can link complete, abstract system models to the design, providing an ideal system-level testbench. For example, using transaction-based simulation, a team can define a system at a high level of abstraction. They then take individual blocks within that high-level system definition and, using the transactor IP needed to bridge the levels, substitute a more detailed implementation model for each of them.
The detailed block runs in situ within the system model, which acts as an instant testbench: the existing system-level tests immediately provide natural stimulus to the block. The result is higher verification productivity and higher confidence in the design.
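As a concrete illustration, the following is a minimal SystemVerilog sketch of such a transactor, assuming a simple write-only bus; the interface, signal, and task names are hypothetical and stand in for whatever principal interface a real design uses. A high-level testbench calls write() with an abstract transaction, and the transactor converts it into cycle-accurate pin activity on the RTL block.

// Hypothetical pin-level interface for a simple write-only bus.
interface simple_bus_if (input logic clk);
  logic        valid;
  logic        ready;
  logic [31:0] addr;
  logic [31:0] data;
endinterface

// Transactor: turns an abstract write transaction into pin-level activity,
// letting an abstract testbench drive a detailed RTL implementation.
module bus_write_transactor (simple_bus_if bus);
  task automatic write(input logic [31:0] wr_addr, input logic [31:0] wr_data);
    @(posedge bus.clk);
    bus.valid <= 1'b1;
    bus.addr  <= wr_addr;
    bus.data  <= wr_data;
    do @(posedge bus.clk); while (!bus.ready);  // hold until the target accepts
    bus.valid <= 1'b0;
  endtask
endmodule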
V. The levels of abstraction
System-level verification requires scalable solutions that support abstractions throughout the entire electronic system: block, sub-system, full-chip, and system level.
At the block level, designers are focused on the details of functionality and timing so they can certify that these blocks meet their specifications with no obvious problems. The goal is to find as many bugs as possible, since this is the cheapest and fastest stage of the design process in which to find them. Analog and digital interactions are verified at the block level. Functions and code are fully exercised, and verification sign-off should be conducted at this stage. HDL simulation is the tool of choice due to its ease of use and debug capability.
The growing scale of SoC designs with analog and mixed-signal components demands a simulation environment capable of the same verification functions that are needed for digital logic. A smooth interface to analog behavioral HDL simulation, as well as Spice simulation of analog primitive modules, allows simulation of the digital and analog components to be synchronized and viewed in the same debug environment.
Once all of the blocks have been verified, block integration takes place, involving either groups of blocks or an entire chip. During the sub-system phase, inter-block communication, control, timing, and protocols are important for functionality; therefore, tools that check protocols or apply assertions to verify bus transactions are useful. Hardware acceleration or emulation can be deployed at this stage with HDL, C, or other high-level testbench languages such as SystemC or Verisity's e.
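As a small, hedged illustration of such a protocol check, the following SystemVerilog assertions (with hypothetical signal names) encode the kind of request/acknowledge rule an integration team might attach to an on-chip bus: every request must be acknowledged within eight cycles, and the address must hold steady until it is.

// Illustrative bus-protocol checks; clk, rst_n, req, ack, and addr are
// hypothetical signals on the integrated sub-system's bus.
property req_gets_ack;
  @(posedge clk) disable iff (!rst_n)
    req |-> ##[1:8] ack;
endproperty
assert_req_ack: assert property (req_gets_ack)
  else $error("Protocol violation: req not acknowledged within 8 cycles");

property addr_stable_until_ack;
  @(posedge clk) disable iff (!rst_n)
    (req && !ack) |=> $stable(addr);
endproperty
assert_addr_stable: assert property (addr_stable_until_ack)
  else $error("Protocol violation: addr changed before ack");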
SoC-level verification is concerned with the further integration of blocks and the remainder of the design process, including the physical implementation of the design. As designers integrate smaller blocks into larger and larger blocks, there is more content to simulate, test runs get longer, and more simulation is required to verify the design.
This calls for multiple verification approaches, such as chip- and system-level functional testing. It also requires verifying that tools such as layout, clock-tree synthesis, or DFT insertion did not introduce unintended changes. Equivalence-checking tools can verify entire large-scale designs and debug them rapidly after every design modification, without the need to run many long simulations.
In addition to equivalence checking, emulators and simulation farms may be used during this process to ensure that nothing has been broken during the design changes. This is known as regression testing. Simulation farms yield very high throughput for shorter tests. For longer tests, hardware emulation is the preferred methodology due to its power and capacity to verify large chip designs. Both simulation farms and emulators are complementary solutions that can be used effectively in different environments.
Most SoC devices contain embedded software that must be verified. This includes application code, real-time operating systems (RTOS), device drivers, hardware diagnostics, and boot ROM code. Functionality continues to be important, but in addition, throughput and other system-level issues may need to be verified. Running large amounts of software usually means long simulation runs.
Hardware/software co-simulation solutions provide ways to reduce this overall burden while providing efficient debug and analysis environments. For even longer runs, the design may need to be moved, in part or in its entirety, into hardware solutions, but the same or equivalent debug environments should be preserved so that the effort of migrating between these execution environments is minimized.
VI. Improved debug solutions
To support a scalable verification solution, debug tools must be integrated, consistent across levels of abstraction, and consistent across the tools in the flow. The goal is to improve how quickly bugs can be identified, their causes tracked down, and the problems fixed, minimizing feedback time and the number of iteration loops. At present, over 50 percent of the time of both the design and verification teams is taken up with debug, so improvements in this area can have a significant impact on time-to-market.
At the system level, debug is made more complex by mixed-levels of abstraction and by the differing semantics that exist within a system. This becomes even more challenging within a heterogeneous environment, such as hardware and software or digital and analog. Therefore, information must not only be made available but also must be available in the correct semantic context. Also, with multi-abstraction, the information must be available at the required level of abstraction.
For example, when debugging software, all of the information about the software program execution is contained within the memory of the hardware, and yet none of it is readily available. Knowing where a variable is located is just the start of the solution. It also must be determined which chip the information is contained in and the relative address within that chip, assuming it is not in a cache or a register. Even then, in many cases, the data is not in a logical order within the chip because of data or address interleaving. Therefore, getting the value of a variable can be very complex.
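A small sketch makes the arithmetic concrete. Assuming, purely for illustration, a system whose 32-bit words are interleaved across four memory chips, a debug tool must perform a translation along these lines before it can fetch a single software variable:

// Illustrative only: translate a logical byte address into the physical
// chip that holds the word and the word address local to that chip,
// assuming simple word-level interleaving across four 32-bit-wide chips.
function automatic void locate_variable(
  input  logic [31:0] byte_addr,
  output int          chip,         // which memory chip holds the word
  output logic [31:0] local_addr    // word address within that chip
);
  logic [31:0] word_addr;
  word_addr  = byte_addr >> 2;      // four bytes per word
  chip       = word_addr % 4;       // consecutive words rotate across chips
  local_addr = word_addr / 4;       // row within the selected chip
endfunction

Real systems add caches, address translation, and vendor-specific interleaving on top of this, which is why extracting the value of a single variable by hand can be so laborious.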
To address some of these challenges, new debug methodologies are becoming more common, for example assertions or checkers, yet their usage is not fully understood. Another area of confusion concerns coverage. What many engineers don't realize is that satisfying code coverage metrics does not mean that the system has been adequately verified. Additional metrics, such as functional coverage or assertion coverage, must also be used to ensure that the design is being fully verified.
Most engineers today create stimuli that they feed into an execution engine so they can analyze the response produced (Figure 5). In many cases, they compare the waveforms between one implementation of the design against a golden model, looking for differences. This is a tedious, hit-and-miss way to debug and is the reason why many mistakes get missed. It is all too easy to concentrate on the problem at hand, missing the fact that something else went wrong or that the current testbench just did not reveal the new problem.
Figure 5 Verification components.
Designers must get away from the tedious, repetitive, blind-alley nature of most current debugging methodologies. In the later stages of the design process, equivalence checking can be a very powerful tool. Equivalence checking tests an implementation against a golden model using formal methods, rather than trying to compare two sets of waveforms through simulation.
Recently, some additional testbench components have matured to the point of usefulness, such as generators, predictors, and checkers. These allow test scenarios to be automatically generated and the corresponding results checked for legal behavior. The most mature of these are the checkers, in other words, assertions.
Two types of assertions exist: test-dependent and test-independent. Test-independent assertions can easily be inserted into an existing verification methodology without the need for additional tool support, whereas test-dependent assertions, coupled with generators, require additional tools and methodology changes.
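A minimal sketch of a test-independent assertion, using hypothetical signal names, shows why adoption is so easy: the check rides along with the design and fires regardless of which test happens to be running.

// Test-independent: this invariant of the design (a one-hot state register)
// holds for every test, so the assertion can simply be embedded in the RTL
// and simulated with the existing testbench and tools.
assert_state_onehot: assert property (
  @(posedge clk) disable iff (!rst_n) $onehot(state)
) else $error("State register is not one-hot");

A test-dependent assertion, by contrast, encodes what a particular generated scenario is expected to produce, which is why it must be created and managed together with the stimulus generator.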
The story does not end there, because there are still a number of testbench components that are not well defined today, namely functional coverage, test plans, and verification management. While the completion of this testbench transformation is still a number of years off, once completed, the long-sought dream of an executable specification will be realized, but not in the way that the industry first predicted. It will not be used to automate the design flow; instead, it will automate the verification flow.
VII. Assertion-based verification
As previously mentioned, a testbench is constrained by two independent factors: controllability and observability. Controllability can be equated to the ability of a testbench to activate a problem in the design through the injection of stimulus. It has a very close relationship with code coverage metrics, which is why care must be taken with code coverage: it says nothing about the other half of the problem.
The other half of the problem is observability. Once the problem has been exercised, two things must happen. The first is that an effect of this problem has to be propagated to a primary output. Then the problem must be detected. For most testbenches the number of primary outputs being verified is very small, so many problems are never even noticed (Figure 6).
Figure 6 Testbench complexity. With traditional testbenches, the problem must be propagated to the output and must be detected.
This is why assertions are so powerful. Assertions positively affect observability, providing several benefits (Figure 7). They can identify the primary cause of what went wrong, rather than secondary or tertiary effects, making debug much easier and faster. This is because they can be scattered throughout the design, creating virtual primary outputs that automatically check for good or bad behavior.
As a result, the testbench does not have to propagate those fault effects all of the way to the actual primary outputs, making the development of testbenches easier. Additionally, large amounts of data that otherwise would have been ignored are verified.
Figure 7 Assertions bring the point of detection closer to the point of problem injection so that it is not necessary to propagate all effects to primary outputs.
Assertions also perform data checking, making testbenches more effective. Once an assertion has been designed and put into a design, it is always operating. In many cases, the assertions are checking things that are not the primary reason for the test, and thus they find unexpected problems. For example, an assertion injected at the module test stage will still be performing its checks throughout the integration phase and into system level testing, thus providing much better verification coverage.
Finally, assertions broaden the reach of each test. Engineers who use assertion-based verification techniques often find that their early bug-detection rates are much higher than when not using assertions. This offsets the overhead involved in writing and placing assertions: about a 3 percent time overhead and a 10 percent runtime overhead. Companies using assertions report that a large percentage of their total bugs were found by the assertions and that their debug time was reduced by as much as 80 percent [1][2][3].
Assertions can be built into the design, or they can be specified independent of the design and attached to various points in the design. Whether they are internal or external is partly dependent on who is creating the assertion, such as the designer or an independent verification engineer.
When embedded in the design, they primarily verify an implementation of a specification. When developed externally, they validate the interpretation of a specification, or in some cases the specification itself. Because embedded assertions are in effect executable comments, they can be placed anywhere a comment might be placed.
The advantage is that the comment is now significantly more worthwhile, because it does something active. This includes comments that describe intended behavior, assumptions the designer has made, or constraints on the design's intended usage. This supports reuse by providing all kinds of information about the expected behavior of the design as well as the intentions of the original designer. All third-party IP should come with at least interface and usage assertions built in.
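For example, a designer's note that two request inputs must never be asserted together can be promoted from a passive comment to an executable one. The sketch below is illustrative; the signal names are hypothetical.

// Was once only a comment:
//   "NOTE: req_a and req_b must never be asserted in the same cycle."
// As an embedded assertion, the same assumption actively polices every
// future reuse of the block, including third-party integration.
assume_reqs_mutex: assert property (
  @(posedge clk) disable iff (!rst_n) !(req_a && req_b)
) else $error("Usage violation: req_a and req_b asserted together");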
Currently, the primary interest in assertions is about how to simulate them, but this isn't all that can be done with assertions. Assertions are based on something more fundamental, called properties. Properties can be used for assertions, functional coverage metrics, formal checkers, and constraint generators for pseudo random stimulus generation.
Properties can be used by both simulators and formal analysis tools, initiating the merger of both static and dynamic verification techniques into a single methodology. With the advent of standards in this area, a rapid growth in tools that use properties can be expected over the next few years.
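A brief sketch, again with hypothetical signals, shows how a single property text can serve several of these roles at once:

// One property, several uses.
property burst_completes;
  @(posedge clk) disable iff (!rst_n)
    burst_start |-> ##[1:16] burst_done;
endproperty

assert property (burst_completes);   // checker: simulation or formal proof
cover  property (burst_completes);   // functional coverage: did any burst finish?
// assume property (burst_completes);  // constraint: restrict formal analysis or
//                                     // steer pseudo-random stimulus generation

Because the property is written once and reused everywhere, the investment made in capturing design intent pays off across simulation, coverage measurement, and formal analysis.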
Design teams need to improve existing methodologies with tools that scale across design complexity and multiple levels of abstraction. A scalable solution enables engineers to do what they do today, only better, faster, and more often within the same time frame. It makes the verification tools more user-friendly and enables more vectors to be pushed through a design.
Any effective system verification strategy must begin with the premise that the system really is the entire system, and that it includes things other than digital hardware. In other words, a meaningful solution must address analog content, provide solutions for software and RTOS awareness, and account for the environment in which all of these things must operate, tied together into a unified solution.
New testbench components are making their way into verification methodologies today, and the use of assertions can have a dramatic effect on the quality and speed with which verification can be performed. In addition, a number of even newer testbench components are emerging. All of these new components will be driven by, or manipulate, properties. This is where the future lies, and that future is beginning to look very bright.
This automated, properties-based verification approach will deliver the boost in performance necessary to narrow the verification gap. This is in effect the equivalent of the synthesis benefit that the design path enjoyed over a decade ago. Verification synthesis is on its way and will fundamentally change the way the verification problem is viewed and handled.
[1] "I'm Done Simulating; Now What?" DAC 1996.
[2] "Functional Verification of a Multiple-Issue, Out-of-Order, Superscalar Alpha Processor," DAC 1998.
[3] Verbal accounts from HP and other internal company studies.
Brian Bailey is the chief technologist for the Design Verification and Test Division of Mentor Graphics. He is the Chair of the Accellera Interfaces technical committee, which has just released its first standard for scalable verification. Bailey has two patents issued and several pending, and he is a regular presenter at industry conferences.