methodologies in use today are not appropriate for the system
verification task. There used to be a rule in IT: Nobody got fired for
buying IBM. Today that rule may be: Nobody got fired for relying on the
Universal Verification Methodology (UVM).
This methodology was
designed to operate at the block level and cannot detect typical
system-level errors. The reasons for this are clear. The first is that
UVM is based on hardware verification languages (HVLs) and cannot create
code suitable for execution on embedded processors. Many sub-systems
contain a processor that must be removed from the design and replaced by
a bus functional model before UVM can assist with any verification.
This makes it an inaccurate representation of the system and unable to
perform verification tasks, such as performance verification, in any
meaningful way. The second problem is that UVM, or any
verification strategy based on pseudo-random stimulus generation,
becomes less efficient and effective as the sequential depth of the
design increases. To attempt to overcome this limitation, UVM defined
sequences as a way to describe snippets of legal and useful activity.
Higher-level sequences can be created from lower-level sequences as the
design is integrated. Unfortunately, this is not a scalable process as
it requires new virtual sequences to be defined every time a different
configuration or variant is created.
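UVM sequences are written in SystemVerilog, but the layering idea, and the reason it breaks down, can be sketched in a language-neutral way. The hypothetical C++ classes below show a virtual sequence that hard-codes which lower-level sequences it coordinates; every new configuration or variant means writing another class like it.

// Sketch only: hypothetical C++ classes mimicking UVM-style sequence layering.
// Real UVM sequences are SystemVerilog objects run on sequencers.
#include <iostream>
#include <memory>
#include <vector>

// A block-level "sequence": a reusable snippet of legal bus activity.
struct Sequence {
    virtual ~Sequence() = default;
    virtual void run() = 0;
};

struct UartConfigSeq : Sequence {
    void run() override { std::cout << "configure UART\n"; }
};

struct DmaTransferSeq : Sequence {
    void run() override { std::cout << "program DMA transfer\n"; }
};

// A "virtual sequence" stitches lower-level sequences together for one
// specific integration. The composition is hard-coded: a variant without
// a DMA engine, or with two UARTs, needs another class like this one.
struct SocBootVirtualSeq : Sequence {
    std::vector<std::unique_ptr<Sequence>> steps;
    SocBootVirtualSeq() {
        steps.emplace_back(std::make_unique<UartConfigSeq>());
        steps.emplace_back(std::make_unique<DmaTransferSeq>());
    }
    void run() override {
        for (auto& s : steps) s->run();   // run each sub-sequence in order
    }
};

int main() {
    SocBootVirtualSeq boot;
    boot.run();
}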
A scalable verification
strategy has to be able to handle verification at multiple levels of
abstraction and in various execution environments. For example, a
significant amount of verification can be performed on an abstract model
of the system, often called a virtual prototype. Some of these tests
need to be repeated on implementation models in order to ensure that the
system has not regressed. With the system implemented at the register
transfer level (RTL), simulator performance may be slow. Emulation or
prototyping can be used to accelerate execution. It would be desirable
to have a single verification approach that is able to drive all of
these environments, and even the final silicon. This kind of reuse is
not supported by UVM.
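One way to picture that kind of reuse is a test written against an abstract access layer, with interchangeable back-ends for the virtual prototype, RTL simulation, emulation or silicon. The C++ sketch below is purely illustrative; the interface and register addresses are assumptions, not taken from any particular tool or standard.

// Sketch only: one test body retargeted across execution environments by
// swapping the access layer underneath it. All names are hypothetical.
#include <cstdint>
#include <cstdio>

// Abstract register access: the test never knows which platform it is on.
struct Platform {
    virtual ~Platform() = default;
    virtual void     write32(uint32_t addr, uint32_t data) = 0;
    virtual uint32_t read32(uint32_t addr) = 0;
};

// Back-end for a transaction-level virtual prototype (stubbed out here).
// Equivalent back-ends for RTL simulation, emulation or silicon would
// implement the same two calls on top of DPI, a debug link or real I/O.
struct VirtualPrototype : Platform {
    void write32(uint32_t addr, uint32_t data) override {
        std::printf("VP write 0x%x = 0x%x\n", (unsigned)addr, (unsigned)data);
    }
    uint32_t read32(uint32_t addr) override {
        std::printf("VP read  0x%x\n", (unsigned)addr);
        return 0x1;                       // pretend the device responded
    }
};

// The test itself is written once against the abstract interface.
bool timer_smoke_test(Platform& p) {
    p.write32(0x40000000, 0x1);           // enable a hypothetical timer
    return p.read32(0x40000004) != 0;     // expect its counter to move
}

int main() {
    VirtualPrototype vp;
    std::printf("timer test %s\n", timer_smoke_test(vp) ? "passed" : "failed");
}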
For those companies that do perform
system-level verification, it normally involves hand-writing code to run
on an embedded processor. This is a slow and tedious task that becomes
even more difficult when multiple processors are involved or where
operations need to be synchronized with activity external to the chip. Most
hand-written tests are single threaded and run on a single processor
with no coordination with other processors or the testbench.
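A typical hand-written test of this kind looks something like the sketch below: bare-metal code polling device registers on one processor, with no awareness of the other cores or of the testbench. The register addresses and bit fields are invented for illustration.

// Sketch only: a hand-written, single-threaded system test as it is often
// coded today. Intended to run bare-metal on one embedded core; the
// register addresses and bit fields are hypothetical.
#include <cstdint>

static volatile uint32_t& reg(uintptr_t addr) {
    return *reinterpret_cast<volatile uint32_t*>(addr);
}

constexpr uintptr_t DMA_CTRL   = 0x50000000;  // hypothetical DMA control register
constexpr uintptr_t DMA_STATUS = 0x50000004;  // hypothetical DMA status register

int main() {
    reg(DMA_CTRL) = 0x1;                       // kick off a single DMA transfer

    // Busy-wait for completion. Nothing here coordinates with the other
    // processors, the testbench, or stimulus arriving from outside the chip.
    while ((reg(DMA_STATUS) & 0x1) == 0) { }

    return 0;                                  // pass/fail is just a return code
}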
A more automated solution, at least as effective as constrained-random
stimulus generation is for the block level, is required for the system level.