Verification efficiency is the latest topic being discussed among
engineers and EDA vendors. Engineers are wondering how to leverage
all of the point tools that have been developed to solve specific
issues to create a single, cohesive verification methodology.
This paper describes how engineers doing system-on-chip (SoC)
verification can be more efficient by using a single, reconfigurable
verification system, applications, and a unified methodology that
allows them to execute hardware and software tests with a flexible
mix of performance and debugging. New transaction-based verification
techniques based on a "Co-Verification Debugger" are demonstrated for
an ARM SoC design.
In order to work smarter, engineers can make improvements in one of
the three areas that take up the majority of their time during the
verification process:
- "Test creation" is the time spent to construct the verification
environment, including testbenches, testcases, and models. This
process is mostly manual with some automation.
- "Test execution" is the time spent to run the tests. Increasing
raw performance is the primary way to run the tests in a shorter
period of time.
- "Interpreting test results and debugging" is the time spent to
decide if a test is working and how to find and fix problems for
tests that are not working. This is also a mostly manual process.
The three components of verification
SoC verification has three major components. A unified methodology
must provide not only best-in-class point tools in each area, but
also complete interoperability between them. The three components are:
- Verification platform
- Hardware verification tools and techniques
- Embedded system software testing and debugging tools and techniques
The verification platform is the method used to execute a description
of the hardware design. It has other common names, such as execution
engine or virtual prototype. The hardware design process consists of
describing the hardware using one of the two common hardware
description languages, Verilog or VHDL. This HDL representation of
the hardware design can be executed using any number of platforms or
execution engines.
Four distinct methods for execution of hardware designs have been
identified and are commonly used in SoC design:
- Logic Simulation
- Simulation Acceleration
- In-Circuit Emulation
- Hardware Prototyping
Each hardware execution method has specific debugging techniques
associated with it -- each with its own set of benefits and
limitations. The methods range from the slowest execution method,
with the most thorough debugging, to the fastest, with less debugging.
Throughout this paper, the following definitions will be used:
Software simulation refers to an event-based logic simulator that
operates by propagating input changes through a design until a steady
state condition is reached. Software simulators run on workstations
and use Verilog or VHDL as a simulation language to describe the
design and the testbench.
Simulation acceleration refers to the process of mapping the
synthesizable portion of the design into a hardware platform
specifically designed to increase performance by evaluating the HDL
constructs in parallel. The remaining portions of the simulation are
not mapped into hardware, but run in a software simulator. The
software simulator works in conjunction with the hardware platform to
exchange simulation data. Removing most of the simulation events from
the software simulator and evaluating them in hardware increases
performance. The final performance is determined by the percentage of
the simulation left running in software.
Emulation refers to the process of mapping an entire design into a
hardware platform to increase performance. During execution there is
no constant connection to the workstation, and the hardware platform
receives no input from it. By eliminating this connection, the
hardware platform runs at its full speed and never has to wait for
communication.
In-circuit refers to the use of external hardware coupled to a
hardware platform for the purpose of providing a more realistic
environment for the design being simulated. This hardware commonly
takes the form of circuit boards, sometimes called target boards or a
target system, and test equipment cabled into the hardware platform.
Emulation without the use of any target system is referred to as
targetless emulation.
Hardware prototype refers to the construction of custom hardware or
the use of reusable hardware (breadboard) to construct a hardware
representation of the system. A prototype is a representation of the
final system that can be constructed faster and is available sooner
than the actual product. This is achieved by making tradeoffs in
product requirements, such as performance and packaging. A common
path to a prototype is to save time by substituting programmable
logic for ASICs.
Hardware verification describes the tools and techniques used to
decide if a hardware design is operating correctly. An entire
industry has emerged to provide products that augment the
verification platform to help engineers develop tests and interpret
the results. Special hardware verification languages (HVLs) have been
designed to improve efficiency and verification quality. Other
commonly used tools are code coverage, lint tools, and debugging
tools to visualize results, such as waveform viewers.
2002 saw the widespread introduction of assertions as a way to
document the designer's assumptions and the properties of the design.
 Assertions are a powerful tool to crosscheck the design's actual
versus intended behavior. They are also valuable to verification and
system engineers to formally specify the intended behavior of the
system and to make sure it is behaving according to specification.
This year, growth in the languages used for design and verification
will certainly occur with the evolution of SystemVerilog and SystemC.
Embedded system software
The final test for the hardware design is to correctly run the
embedded system software. Even if the hardware can successfully run
all of the embedded software, there is no guarantee it is bug free,
but it does indicate a fairly healthy design. HW/SW co-verification
is the best way to operate more efficiently by making sure all of the
software works with the hardware before the hardware design is
fabricated. Co-verification provides two primary benefits:
1. Software engineers have much earlier access to the hardware
design. This allows software designers to develop code and test it
concurrently with hardware design and verification. Performing these
activities in parallel shaves time from the project schedule,
compared with the serial method of waiting for the prototype to begin
software testing. Moreover, the early involvement of the software
team results in a much better understanding of the underlying
hardware.
2. Co-verification provides additional stimulus for the hardware
design. In fact, it can provide the true stimulus that will occur in
the embedded system. This improves hardware verification when
compared to using a contrived testbench that may or may not represent
real system conditions. The result is increased confidence in the
hardware design.
By running HW/SW co-verification, a wide range of problems can be
found and fixed prior to silicon, such as register map discrepancies,
problems in the boot code, errors in DMA controller programming, RTOS
boot and configuration errors, bus pipelining problems, and cache
coherency mishaps. Some of the errors will be software problems and
some hardware-related. Addressing these issues must be done using a
logical and well-conceived co-verification strategy.
Co-verification requires that accurate microprocessor models and
software debugging tools be available to software engineers as early
as possible. It also requires that the verification platform provide
the best mix of performance and debugging for software engineers to
work effectively with hardware engineers.
Five distinct types of embedded system software have been identified.
The software content (that is, lines of code) increases with each
successive type:
- System initialization software and hardware abstraction layer (HAL)
- Hardware diagnostic test suite
- Real-time operating system (RTOS)
- RTOS device drivers
- Application software
Matching the software with the platform
One of the main sources of confusion for projects is how to match the
type of software being developed with the correct platform or
execution engine. Figure 1 presents a diagram of this confusion.
Given five types of software and three or four verification
platforms, where should the connections occur between them? Which
type of software should be run on each type of platform? Are all
hardware platforms required?
Figure 1 - Methodology confusion
System initialization and HAL
The first software coding task is to write and test the processor
initialization software. This code includes configuring the operating
modes and peripherals (things like cache configuration, memory
protection unit or MMU programming, interrupt controller
configuration, timer setup, and DRAM initialization).
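As a sketch of what this initialization layer looks like in practice, the fragment below programs a few memory-mapped registers. The register layout, field meanings, and values are illustrative assumptions, not taken from any real ARM SoC.

```c
#include <stdint.h>

/* Hypothetical register block -- layout and values are illustrative
   assumptions, not a real ARM SoC. */
typedef struct {
    volatile uint32_t irq_enable;  /* interrupt controller mask */
    volatile uint32_t timer_load;  /* timer reload value */
    volatile uint32_t dram_cfg;    /* DRAM timing and enable */
} soc_regs;

/* Early init: runs before any code that relies on DRAM or interrupts. */
void soc_early_init(soc_regs *regs)
{
    regs->irq_enable = 0;            /* mask every interrupt source */
    regs->dram_cfg   = 0x00000001u;  /* illustrative: enable bit only */
    regs->timer_load = 0x00010000u;  /* illustrative reload count */
}
```

On real hardware, `regs` would point at a fixed address taken from the memory map rather than at a structure in RAM.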
The hardware abstraction layer (HAL) is the next layer of software
that works with the initialization code to provide a common interface
for higher-level software to use for hardware-specific functionality
after the system is initialized. The HAL abstracts the underlying
hardware of a processor architecture and the platform to a level
sufficient for the RTOS kernel to be ported to the platform.
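A common way to structure a HAL is a table of function pointers that higher-level software calls instead of touching registers directly. The sketch below is a minimal illustration with stubbed board functions; all names are hypothetical.

```c
#include <stdint.h>

/* Minimal HAL sketch: the RTOS kernel calls through this table and
   never touches board registers directly. Names are hypothetical. */
typedef struct {
    void     (*irq_enable)(int irq);
    void     (*irq_disable)(int irq);
    uint32_t (*timer_ticks)(void);
} hal_ops;

/* Board-specific implementations, stubbed here for illustration. */
static uint32_t fake_ticks = 0;
static void board_irq_enable(int irq)  { (void)irq; /* write mask reg */ }
static void board_irq_disable(int irq) { (void)irq; /* write mask reg */ }
static uint32_t board_timer_ticks(void) { return ++fake_ticks; }

const hal_ops board_hal = {
    .irq_enable  = board_irq_enable,
    .irq_disable = board_irq_disable,
    .timer_ticks = board_timer_ticks,
};
```

Porting the RTOS to a new board then means supplying a new `hal_ops` table rather than editing kernel code.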
Hardware diagnostics
Once the initialization software and HAL are stable, the next phase
of software development consists of developing a detailed test suite
for the hardware design. In the past, this usually took the form of
a hardware testbench. While testbenches are still necessary to
provide stimulus for external interfaces, such as networking
protocols, the software now serves as the testbench for the CPU core.
A comprehensive set of diagnostic tests should be developed to verify
each subsystem and peripheral. This starts with the memory subsystem,
progresses to interrupt testing, and then moves to other IP blocks,
such as timers, DMA controllers, video controllers, MPEG decoders,
and other specialty hardware. Most of these tests do not see their
way into the final product, but they are very important because they
build the case for a solid hardware design. Creating the programs
gives software engineers a very good understanding of the hardware
and provides an opportunity to learn about the hardware specifics in
a more secluded environment.
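A typical first diagnostic exercises the memory subsystem. The walking-ones routine below is a standard data-bus test, sketched here as an illustration rather than taken from the paper; it catches stuck or shorted data lines early.

```c
#include <stdint.h>

/* Walking-ones data-bus test over a single word of the RAM under
   test. Returns 0 on success, or the first failing pattern. */
uint32_t mem_test_data_bus(volatile uint32_t *word)
{
    for (uint32_t pattern = 1; pattern != 0; pattern <<= 1) {
        *word = pattern;
        if (*word != pattern)
            return pattern;  /* a data line is stuck or shorted */
    }
    return 0;
}
```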
Real-time operating system (RTOS)
The first assumption of the RTOS engineer is that the hardware is
stable. This is true if the diagnostic suite was done well. The
initial RTOS work consists of just getting it to boot on the
platform. The amount of work required for the RTOS depends on how
standard the platform is and how well the HAL was designed. During
the initial porting phase, device drivers for application-specific
hardware may be missing, but the RTOS can still boot.
Device drivers and application software
Once the platform is complete, it is time to test device drivers and
application software. At the application level, the hardware is
assumed correct, and development becomes more like workstation
programming. If the
application crashes, it can be safely assumed it is a software
problem, not something wrong with the hardware.
Applications usually want to interface to real network traffic, see
things on the screen, and use the pointer or mouse. During
application development, hardware and lower-level software bugs are
few and far between, and the software engineer is focused on
providing robust applications with differentiating features for end
users.
Before discussing the specifics of methodology and tools, it is
important to recall that software engineers view the world very
differently from hardware engineers. Here is a brief review of the
different perspectives of software and hardware engineers.
Software engineer's view of the world
To the software engineer, the entire world revolves around the
programming model of the embedded system. Here is a computer science
definition:
"The programming model is a model used to provide certain operations
to the programming level above and requiring implementations on all
of the architectures below."
More practically, the programming model for a microprocessor consists
of the key attributes of the CPU that are necessary to abstract the
processor for the purpose of software development. As an example of a
programming model, consider the ARM9E-S CPU.
The ARM9E-S implements the ARM v5TE instruction set that includes the
32-bit ARM instruction set and the 16-bit thumb instruction set.
The details of the instruction set are an important part of the
programming model. Also covered by the programming model are details
related to the operating modes of the CPU, memory format, data types,
general purpose register set, status registers, and interrupts and
exceptions. All of these microprocessor details are important to the
software engineer.
Beyond the microprocessor, software engineers are interested in the
memory map for the embedded system. For a 32-bit address space, 4GB
of physical memory addresses can be accessed. All embedded systems
use only a subset of this physical address space, and the memory map
defines the location in the address space of various types of memory
and other hardware control registers. The memory map may also define
what happens if addresses are accessed where no physical memory
exists.
Commonly found types of memory in an embedded system are ROM to hold
the initial software to run on the CPU, flash memory, DRAM, SRAM for
fast data storage, and memory mapped peripherals. Peripherals can be
any dedicated hardware that is programmable from software. These can
range from small functions such as a UART or timer to more complex
hardware, such as a JPEG encoder/decoder.
The combination of the microprocessor programming model, the memory
map, and the individual hardware control registers form the software
engineer's view of the embedded system. This information becomes the
ultimate authority for all software development and is available in
the form of technical manuals on the microprocessor, combined with
the system specific memory map supplied by the hardware engineers.
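The memory map translates directly into code. The fragment below sketches a hypothetical map and a helper that classifies an address; every base address here is an assumption for illustration, not a real SoC layout.

```c
#include <stdint.h>

/* Hypothetical memory map -- all base addresses are illustrative. */
#define ROM_BASE    0x00000000u  /* boot code */
#define SRAM_BASE   0x20000000u  /* fast on-chip data storage */
#define DRAM_BASE   0x40000000u  /* main memory */
#define PERIPH_BASE 0x80000000u  /* memory-mapped peripherals (UART, timers) */

/* Classify a physical address against the map above. */
const char *map_region(uint32_t addr)
{
    if (addr < SRAM_BASE)   return "ROM";
    if (addr < DRAM_BASE)   return "SRAM";
    if (addr < PERIPH_BASE) return "DRAM";
    return "peripheral";
}
```

In real firmware, peripheral register access then reduces to a typed read or write at one of these fixed addresses.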
Hardware engineer's view of the world
Hardware engineers have a different view of the embedded system.
Although the internal operation of the microprocessor is important to
software engineers, the internal workings of the CPU are much less
important to hardware engineers, and the bus interface is most
important. For the hardware design to work correctly, the logic
connected to the microprocessor must obey all of the rules of the bus
protocol. If the rules of the bus protocol are obeyed, the details
of the software tasks are not important. To hardware engineers, the
microprocessor is nothing more than a bus transaction generator.
All microprocessors use some type of protocol to read and write
memory. To the hardware engineer, the microprocessor is viewed as a
series of memory reads and writes. These reads and writes are used
for fetching instructions, accessing peripherals, doing DMA
transfers, and many other things, but in the end, they are nothing
more than a sequence of reads and writes on the bus. For years,
hardware engineers have used a bus functional model (BFM) to abstract
the microprocessor into a model of its bus. More recently, this has
been described as transaction-based verification since it views the
microprocessor as a bus transaction generator.
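This view can be made concrete as a data structure: a transaction is just a read or write with an address, data, and a timestamp. The sketch below uses hypothetical types, not a vendor API, and applies a transaction stream to a toy memory model, which is essentially what a BFM does on the hardware side.

```c
#include <stdint.h>
#include <stddef.h>

/* A bus transaction as the hardware engineer sees it. */
typedef enum { XACT_READ, XACT_WRITE } xact_kind;

typedef struct {
    xact_kind kind;
    uint32_t  addr;
    uint32_t  data;       /* data to write, or read result */
    uint64_t  timestamp;  /* simulation time of the bus cycle */
} bus_xact;

/* Toy memory model: apply one transaction to a word-addressable RAM.
   Returns the data observed on the bus. */
uint32_t bfm_apply(uint32_t *mem, size_t words, bus_xact *x)
{
    size_t idx = (x->addr >> 2) % words;  /* word index from byte addr */
    if (x->kind == XACT_WRITE)
        mem[idx] = x->data;
    else
        x->data = mem[idx];
    return mem[idx];
}
```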
The solution to implement a co-verification methodology for SoC
validation and to reconcile the different views of hardware and
software engineers is to combine a single platform that provides logic
simulation, simulation acceleration, and in-circuit emulation with
application-specific solutions for co-verification and
transaction-based verification. Consider as an example an SoC that
includes an ARM microprocessor. As described in the previous section,
hardware engineers are interested in bus transactions of the CPU.
This requires a transaction-based interface that works well with the
verification platform for use during logic simulation, acceleration,
and emulation modes.
Since this interface must work with logic simulation and later with
acceleration and emulation, it cannot be constructed such that it
will be a bottleneck to overall acceleration and emulation
performance. Software engineers require good CPU models and debugging
tools. For each of the five different types of software, they will
prefer either a software model of the ARM CPU or a hardware model of
the ARM CPU. The three primary verification platform execution
methods combined with the three representations of the ARM
microprocessor form the matrix of nine modes of operation shown in
Figure 2.
Figure 2 - Verification operating matrix
The next sections describe how each type of software can choose to be
executed by either a software or hardware model of the ARM CPU using
one or more of the platform's execution modes.
System initialization and HAL development
Many complex SoC projects use nothing more than a full-functional
model of the microprocessor core in a logic simulator to write and
debug this code. Software debugging with waveforms requires a true
guru who understands hardware and software and can disassemble
instructions in his head using instruction fetches on the data bus.
For the ARM SoC example, the ideal debugging solution for early
development of system initialization and HAL code is one based on a
cycle-accurate instruction set simulation model tightly coupled to a
logic simulator containing the SoC hardware design. This provides
interactive, graphical software debugging for the software engineer
to single step through the code and verify register and memory
contents with excellent flexibility and control. Simulation
performance is less important because the code must be verified
line-by-line, and the number of lines of code is relatively small.
This situation is labeled as box 2 in the matrix in Figure 2.
Hardware diagnostics
During the development of the diagnostic tests, the logic simulator
becomes the bottleneck of the verification environment. As tests run
longer and the number of tests increases, it becomes more difficult
both to verify the entire hardware design and to continue to run old
tests as hardware and software errors are fixed. This phase is also
the most crucial since it is where most hardware bugs are found.
Debugging tools for both software and hardware at this stage are very
important.
The best solution uses simulation acceleration to increase the
simulation performance over what is possible using an ordinary
software simulator. A simulation environment running at 10 to 100 Hz
is not fast enough for engineers to run and debug the diagnostic
tests. Moreover, the
memory optimization techniques commonly used by co-verification tools
are not useful because the main purpose of the diagnostics is
hardware verification. A simulation acceleration system that runs at
speeds of 1 to 10 kHz is the ideal platform for simulation
performance and debugging. The use of simulation acceleration with
the software model of the ARM is labeled as box 5 in the matrix in
Figure 2.
RTOS and device drivers
The initial RTOS port is a good place to take advantage of memory
optimizations commonly used in co-verification, such as software
memory models. These memory optimizations retrieve instructions at a
much faster rate than using logic simulation. The result is less
simulation detail on how the ARM SoC would work, but increased
performance. Since the instruction fetch path is well verified, using
the memory optimizations makes sense, rather than going back to the
diagnostic test suite as a workaround for a slow logic simulator. The
initial RTOS boot requires box 5 on the matrix in Figure 2.
Once the RTOS is booted and stable with the selected device drivers,
as shown in box 8, future work can be done using a faster execution
method, such as in-circuit emulation. The number of hardware bugs is
very small, so the increased performance is well worth any tradeoff
in hardware debugging. This shifts the focus of the software
engineers from box 5 to boxes 6 and 9.
Application software
Application software requires the highest performance and possibly
stimulus from other sources, such as graphics, I/O interfaces like
USB, or networking. This is an ideal fit for in-circuit emulation.
Initial bring up for In-Circuit Emulation (ICE) is done using
In-Circuit Simulation (ICS). ICS connects the software simulator with
the target board by using the emulator as a pass-through connection
to the target system. The necessary target boards, interfaces and
test equipment are assembled in the lab for ICE. This represents a
shift from box 3 to box 9 on the matrix in Figure 2.
Transaction-based verification
Hardware engineers are focused on making sure the bus interface logic
connected to the microprocessor works correctly. The bus functional
model (BFM) allows this to be done efficiently without requiring the
overhead of a full functional model (FFM) and software to run on the
CPU. There are many different kinds of BFMs available from IP
companies, EDA vendors, and microprocessor suppliers. Unfortunately,
all of them have been created using C/C++, hardware verification
languages (HVL), or behavioral Verilog or VHDL. These languages are
suitable for logic simulation, but are not efficient for simulation
acceleration and emulation.
The co-verification methodology requires a BFM that runs well for all
phases of verification, from the start of a project, as shown in box
1 in Figure 2, moving to acceleration and emulation for directed and
random testing, as shown in boxes 4 and 7. To achieve this, a
transaction-based interface to a synthesizable BFM for the CPU bus is
required.
By operating at the transaction level, the communication is
minimized between the testbench and the verification platform. Using
a synthesizable BFM and a transaction-based interface to the
verification platform optimizes performance, while simultaneously
allowing for the use of C/C++ or HVLs to create testbenches. A BFM
that works the same way from simulation to emulation and provides the
required performance, while simultaneously following the industry
trend toward verification automation, is an important part of a
unified verification methodology.
Matrix coverage is not enough
At first glance it may seem possible to build a good methodology to
integrate hardware and software if all nine boxes on the matrix are
covered. This coverage could be obtained by finding different tools
for logic simulation, acceleration and emulation. Point tools can
also be assembled for co-verification, bus models, and bus protocol
checking. Of course, purchasing all of these tools, evaluating each
of them separately, and dealing with multiple vendors would be costly
and time-consuming.
Unfortunately, the methodology would not
be as strong as it could be because the tools do not work together.
More than matrix coverage, what is required is interoperability of
the boxes of the matrix. The next sections describe two examples of
why this interoperability is important.
In addition to having a single system to perform all levels of
verification, one of the major barriers that prevents a design team
from performing co-verification is the lack of a common communication
medium when a problem exists. Debugging a co-verification issue may
at times take longer than running the test, because problem isolation
involves multiple teams with different expertise working in a lab
environment and viewing verification data in different formats.
Compounding the problem, software and hardware teams debug using
different techniques and view the problem from different
perspectives. The software team works with software models and
debugs using software source-level tracing and memory and register
viewing. The hardware team works with hardware design languages and
debugs by viewing waveforms with history values associated with
simulation times of read and write operations. As a result, when the
software team detects a potential hardware problem, it cannot be
described in hardware terms (time and signal value), nor is it easy
to transfer an independent test case to the hardware engineers for
further review. This will change with the adoption of a new approach called the Co-Verification Debugger.
Transactions with simulation timestamps are the communication link
between the software and hardware world. The transaction process
flow begins when software engineers write software code to initiate
bus transactions (register and memory read/write operations).
Transactions are then driven into the hardware design block with time
and signal values. Transactions link the software and hardware, as
shown in Figure 3.
Hardware engineers are already familiar with transaction-based
techniques for testbench development. Hardware engineers benefit from
a transaction-based channel to simulation, acceleration, and
emulation with a common interface to synthesizable models for the AHB
protocol for ARM designs. This allows all types of tests, ranging
from block-level tests, to directed system-level tests to random
tests to be run without any changes to testbenches or models as
different performance requirements are needed. Verification and
hardware engineers benefit from boxes 1, 3, and 5 on the matrix in
Figure 2.
Software engineers already have a good understanding of transactions
and how register and DMA operations translate into bus transactions.
A method is needed to transparently capture the transactions to
correlate lines of software with time values and to package the
transactions for use within the software or hardware view. The Co-Verification Debugger serves this purpose.
Figure 3 - Transactions link software and hardware
Co-Verification Debugger and transaction instrument
The Co-Verification Debugger implements transaction capture and the
ability to replay transactions for both hardware and software
engineers and serves as a communication medium to correlate hardware
and software execution.
By capturing transactions during software execution within an
in-circuit emulation session, the software engineer can stop at a
particular line of code, obtain the simulation time, and send a set
of transactions to the hardware team to debug at the indicated
emulation time when the error occurred.
The Transaction Instrument captures bus transactions for use with
Instant Replay -- the playback mechanism that displays the captured
transactions within a self-contained environment as shown in Figure
4. The Co-Verification Debugger takes the captured bus transactions
from the transaction instrument and associates the emulation time
value with the software line that activates the transactions.
Figure 4 - Transaction instrument
Instant replay of software execution
As mentioned earlier, the only thing that matters in the software
engineer's view of the world is how the ARM CPU executes
instructions. Consider a scenario where a long diagnostic test has
been developed. Even with simulation acceleration, this test may run
for hours. If the test fails, the diagnostic developer will not want
to restart the test -- guessing where to put a breakpoint in the
code, restarting the test, and trying to interactively debug the
test. This process of restarting the test and moving the breakpoints
would be quite tedious.
As a way to address this problem, the engineer can run a single
simulation and save a compressed file that contains the bus
transactions at the processor interface. The memory transactions,
including address, data, and simulation timestamp along with
interrupt information, are "recorded" into this file. After the
simulation is complete, software engineers can start the software
model of the ARM CPU and software debugger and "re-run" the software
execution sequence. However, this time, instead of interacting with
hardware simulation, the results are read from the recorded file.
This "playback" of the bus interface replicates the exact sequence of
software execution as shown in Figure 5.
Figure 5 - Instant replay for software execution
Because the simulation now runs at MHz speeds, software engineers can
re-run the software as many times as needed to find the problem. The
simulation timestamp is also provided at any time to help correlate
software and hardware execution. This record and playback
methodology is a good way to debug long simulation tests that make
interactive debugging unproductive.
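The record-and-playback idea can be sketched in a few lines: during the slow run, every bus operation is appended to a log, and during playback the CPU model's reads are satisfied from the log instead of from the hardware simulation. Types and names below are illustrative, not the actual tool's format.

```c
#include <stdint.h>
#include <stddef.h>

/* One logged bus operation, with its simulation timestamp. */
typedef struct {
    int      is_write;
    uint32_t addr;
    uint32_t data;
    uint64_t timestamp;
} xact_rec;

#define LOG_MAX 1024
static xact_rec xlog[LOG_MAX];
static size_t   xlog_len = 0;

/* Capture side: called once per bus cycle during the slow run. */
void xlog_record(int is_write, uint32_t addr, uint32_t data, uint64_t t)
{
    if (xlog_len < LOG_MAX)
        xlog[xlog_len++] = (xact_rec){ is_write, addr, data, t };
}

/* Playback side: a read is answered from the next logged entry, so
   the CPU model sees exactly the data it saw the first time. */
int xlog_replay_read(size_t *cursor, uint32_t addr, uint32_t *data_out)
{
    if (*cursor >= xlog_len)
        return -1;                       /* log exhausted */
    xact_rec *r = &xlog[(*cursor)++];
    if (r->is_write || r->addr != addr)
        return -1;                       /* replay diverged from record */
    *data_out = r->data;
    return 0;
}
```

Because no hardware simulation runs during playback, the software can be re-executed at full software-model speed as many times as needed.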
Instant replay of hardware execution
When a diagnostic developer decides a problem was indeed a hardware
issue that should be examined, hardware engineers will not want to
start a software debugger, load an executable program, and run it to
reproduce the problem; recall that hardware engineers care only about
bus transactions. This is an ideal situation for the software
engineer to pass the stimulus of bus transactions to the hardware
engineers. Now the test becomes a set of AHB transactions instead of
an ARM CPU model. This allows hardware engineers to work on the
problem using familiar transaction-based verification techniques. For
all of the diagnostic tests, a history of transaction-based stimulus
files can be saved for use as a set of regression tests. These tests
can even be modified and augmented to test "what-if" scenarios on the
bus. Instant Replay for the hardware view is shown in Figure 6.
These examples demonstrate that interoperability between the boxes of
the matrix in Figure 2 further enhances the unified methodology for
SoC verification.
Figure 6 - Instant replay for hardware execution
Conclusion
This paper describes the best way to build a co-verification
methodology for SoC verification -- to combine a single platform that
provides logic simulation, simulation acceleration, and in-circuit
emulation with application-specific solutions for co-verification and
transaction-based verification. Furthermore, combining many point
tools to complete the methodology is not a good solution, because the
tools will not work well together.
Interoperability between the
platform and the application of transaction-based verification and
HW/SW co-verification is essential to work smarter, not harder.
Additionally, transactions and simulation timestamps are the
communication method for hardware and software teams to effectively
pinpoint the exact cause, time, and location of problems, as well as
to provide a stand-alone test case to replicate the problem within
the other team's environment.
References
- Assertion Processor, Jason Andrews, 2002
- ACM Computing Surveys, Skillicorn and Talia, 1998
- ARM9E-S Technical Reference Manual (Rev 2), ARM Limited, 2002
Jason Andrews is currently a Product Manager at Axis Systems, working in the areas of hardware/software co-verification and testbench methodology for SoC design. His experience in EDA and the embedded marketplace includes software development and product management at Simpod, Summit Design, and Simulation Technologies.