Functional verification has become the congestion point for many designs and the bane of many companies trying to sell their IP...
Having talked about what is wrong with the status quo, we can now define the attributes that a verification environment should have, or the features we would like it to have. I am sure anyone reading this will be able to add a few to the list as well.
It should be a fully integrated approach, in that it combines the notion of the reference model with the checker and can track coverage. By doing this, all aspects of verification can be properly dealt with using fewer models or verification components. It would be desirable for it to be output-based rather than stimulus-based.
It should create an optimal set of tests, so that simulation farm sizes and license counts can be reduced, along with all of the power and space savings that would follow. There should be an inherent notion of completeness rather than the subjective ones in use today.
It must be hierarchical and composable, in that it should be able to handle the verification of blocks, sub-systems or systems without modification. Multiple block-level verification models should be able to be composed into higher-level verification models in the same way that designs are.
It should be compatible with existing methodologies. While I think constrained random isn't very good, it is in use by many people and there are attributes of it that are useful, such as transactors to bridge between a high-level testbench and the RTL signal-based model. This would also allow for a slow migration to a better methodology, rather than having to throw things away before the value of the new methodology has been fully proven.
It should decrease the time it takes engineers to get to both first bug and last bug. This means that it has to be possible to start using the verification environment long before it is complete, and that reasonable confidence can be established in defining a verification end point.
A new approach
Let me return to
the monkeys and their attempts to write Shakespeare. At the lowest
level, we could help them understand the concept of words. When they hit
on a good combination of letters, we could let them know that it forms a
valid word. Now, they can start randomly selecting words. The chance of
them producing the desired work has just increased.
But why stop
there? If we define some of those words to be nouns, some verbs,
adjectives etc., we can provide a set of rules about how to compose
sentences - grammar. While there is a random element to this, we have
made structure and sequence more important. At the very least the
monkeys should now be able to produce some interesting gibberish. We
could carry on with this analogy to many more levels of abstraction and
this is exactly what is happening with sequences and virtual sequences.
At each stage, we add constraints that collectively progress us towards
the cases that make sense. At some levels we need a certain amount of
randomness, but that only makes sense when it is well structured.
Structure in this case means that we are creating useful scenarios that
will make the design do interesting and realistic things. These types of
constraints can be created in existing languages and methodologies, but building them in a hierarchical manner, where complex constraints are passed between levels, is difficult if not impossible.
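To make the analogy concrete, here is a minimal sketch in C. It is purely illustrative, with invented word lists and a made-up sentence template, but it shows the difference between unstructured randomness and randomness constrained by a grammar.

```c
/* Illustrative only: structured randomness versus unstructured randomness.
 * The word lists and the sentence template are invented for this sketch. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static const char *nouns[]      = { "monkey", "sonnet", "typewriter", "stage" };
static const char *verbs[]      = { "writes", "drops", "eyes", "shreds" };
static const char *adjectives[] = { "weary", "random", "inspired", "verbose" };

static const char *pick(const char **set, int n) { return set[rand() % n]; }

int main(void)
{
    srand((unsigned)time(NULL));

    /* Unstructured: the classic monkey, one random letter at a time. */
    printf("letters : ");
    for (int i = 0; i < 20; i++)
        putchar('a' + rand() % 26);
    putchar('\n');

    /* Structured: randomness is still present, but constrained by a grammar
     * (adjective noun verb adjective noun), so every output is a sentence. */
    printf("sentence: The %s %s the %s %s.\n",
           pick(adjectives, 4), pick(nouns, 4),
           pick(adjectives, 4), pick(nouns, 4));
    return 0;
}
```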
Figure 1: Constraining behaviors
This is illustrated in Figure 1. At the block level, a large number of behaviors are possible. As those blocks are composed into larger blocks,
constraints are added such that certain behaviors are no longer
possible. Once the system has been fully assembled only those behaviors
necessary to support the system should be possible. Any verification
performed outside of this constraint box does not directly add to the
total amount of verification performed. Now, we have to be careful. If
system behaviors are defined by software, then we have to ensure that
the hardware will continue to operate even if the software changes,
which means that we have to verify behaviors outside of the set defined
by the software. However, the hardware itself imposes many constraints
and these do define behaviors that, if verified, represent wastage.
For several years Gary Smith, of the analyst firm Gary Smith EDA, has been
talking about “the intelligent testbench.” So what is it? Simply put, it
is a language, tool and/or methodology that allows us to completely
verify the system-level design in the most efficient way possible. In
addition, it means that we do not perform more verification than is
actually necessary. It sounds so simple and yet it has been somewhat
elusive. Constrained random testing fails when it comes to efficiency.
Directed testing fails when it comes to completeness, and all of the
methodologies in use today over-verify at the block level and under-verify at the system level.
To understand the possible realization of
such a solution, we need to think about the ways in which data can move
through a block or a system. Consider the following highly simplistic
system. It consists of a processor, a DMA controller, memory and a
peripheral. Now, suppose we want to verify a particular aspect of the
peripheral. This may require it to be in a certain mode of operation.
If this is a closed embedded system, constrained random is not directly
useable. We would have to disconnect the processor and run random bus
cycles in the hope that the right mode would be established and the
correct test run, or we could write a scenario to make this happen.
Alternatively, we could write some code to run on the processor, but we
all know that creating directed tests is a slow and tedious process.
Figure 2: A simple SoC
We could define an outcome A in the peripheral that meets our needs. This
could be split into two sequential operations that have to happen.
First, the mode must be set and then data needs to be transferred into
the device. To set the mode, we can define dependencies between
peripheral operations and register values that need to be established.
Then, we have the need for a certain amount of data to be supplied from
the bus. We can also define data paths from the processor to the
peripheral, the DMA engine, the memory and the dependencies between
those devices. Additionally, we can define the ability of the DMA engine
to transfer data between the memory and the peripheral and what has to
be accomplished for that to happen. Note that this is specifying how to
support an outcome rather than just talking about constraints on inputs
as is done using existing methods.
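As a rough illustration of the idea, and only an illustration since the structures and names below are invented rather than taken from any tool's actual input format, a declarative description of outcome A and the paths that can realize it might look something like this in C:

```c
/* Hypothetical sketch: describing outcome A as a set of prerequisites and
 * the data paths able to satisfy them. The structures and names are invented
 * for illustration and are not any tool's actual input format. */
#include <stdio.h>

typedef struct {
    const char *name;          /* e.g. a register that must hold a value   */
    unsigned    value;
} Prereq;

typedef struct {
    const char *from, *to;     /* producer and consumer of the data        */
    const char *via;           /* resource used to move it                 */
} Path;

typedef struct {
    const char *name;
    const Prereq *prereqs; int n_prereqs;   /* must hold before the outcome */
    const Path   *paths;   int n_paths;     /* ways the data can arrive     */
} Outcome;

static const Prereq a_prereqs[] = {
    { "periph.MODE", 0x3 },                 /* peripheral must be in mode 3 */
};

static const Path a_paths[] = {
    { "memory", "peripheral", "processor" },/* CPU copies the data itself   */
    { "memory", "peripheral", "dma" },      /* or the DMA engine moves it   */
};

static const Outcome outcome_A = {
    "A: peripheral consumes a buffer while in mode 3",
    a_prereqs, 1, a_paths, 2
};

int main(void)
{
    printf("Outcome: %s\n", outcome_A.name);
    for (int i = 0; i < outcome_A.n_paths; i++)
        printf("  path %d: %s -> %s via %s\n", i,
               outcome_A.paths[i].from, outcome_A.paths[i].to,
               outcome_A.paths[i].via);
    return 0;
}
```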
Now, with the paths defined, we
could ask a tool to make outcome A happen. There are a few ways that it
could be accomplished. First, the processor may read setup information
from memory and transfer it to the right registers in the peripheral and
then the processor could transfer the data for the I/O operation, or it
could program the DMA engine to do that. Either way, it would be great
to have a tool generate the code necessary to make the outcome a reality.
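For a sense of what such generated code could look like, here is a hand-written sketch of a bare-metal C test for outcome A that takes the DMA path. The register addresses and bit values are invented for illustration; a real generator would draw them from the system's register map.

```c
/* Illustrative sketch of a generated bare-metal test for outcome A.
 * Register addresses and bit values are invented for this example;
 * a real generator would pull them from the system's register map. */
#include <stdint.h>

#define REG(addr)        (*(volatile uint32_t *)(addr))
#define PERIPH_MODE      0x40001000u   /* hypothetical peripheral mode reg  */
#define PERIPH_FIFO      0x40001010u   /* hypothetical peripheral data FIFO */
#define DMA_SRC          0x40002000u   /* hypothetical DMA registers        */
#define DMA_DST          0x40002004u
#define DMA_LEN          0x40002008u
#define DMA_CTRL         0x4000200Cu
#define DMA_STATUS       0x40002010u
#define BUFFER_ADDR      0x20000100u   /* data staged in memory             */

int main(void)
{
    /* Step 1: establish the prerequisite by putting the peripheral in mode 3. */
    REG(PERIPH_MODE) = 0x3;

    /* Step 2: realize the outcome along the chosen path by programming the
     * DMA engine to move the buffer from memory into the peripheral. */
    REG(DMA_SRC)  = BUFFER_ADDR;
    REG(DMA_DST)  = PERIPH_FIFO;
    REG(DMA_LEN)  = 64;
    REG(DMA_CTRL) = 0x1;               /* start the transfer                */

    /* Step 3: wait for completion so the checker can observe outcome A.    */
    while ((REG(DMA_STATUS) & 0x1) == 0)
        ;
    return 0;
}
```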
The beauty of doing it in this manner is that we no longer need separate scoreboards and checkers – they become
integrated into the same model. In addition, it is possible to see how
many of the potential datapaths and combinations of them have been
exercised, meaning that the coverage model becomes an integral part of
the model. The story gets even better than that, because there are many blocks whose behavioral functions can be generically defined. That may be the case with the peripheral and the DMA engine,
since they have standardized capabilities and these can be stored in a
library for incorporation into system models. So the concept of VIP is preserved.
Another necessary attribute is that the system
should be composable and, by that, I mean that verification blocks can be
combined together to form the system model in the same way that a
system is composed out of individual blocks. We have relied on
hierarchical construction in the design space for a long time and this
concept needs to be fully incorporated into the verification space as well.
An available tool
By now it must be clear that I have a solution in
mind, and I do. I was first introduced to this technology several years
ago, but at that time the tool was not ready and they were still trying
to work out some of the kinks. Last year at DAC, they gave me another
demonstration and I was very interested. This year at DVCon, there was a
paper talking about the technology. The company in question is Breker
Verification Systems, and as a full disclosure, I have done consulting
for them and hope to again in the future. The tool, TrekSoC, generates
test cases to run on the embedded processor and coordinates activities
with the external testbench. It is based on outcomes and the generator
randomly chooses from the available paths to make that outcome happen.
The DVCon paper concludes: "We have been surprised by the RTL bugs found thanks to this new environment. The resulting benefits were a more readable testbench, which is easier to maintain, shorter debug sessions and increased quality of the IP."
It is rare for change to happen quickly, and stranger still when the technology that gets adopted is inherently flawed and also requires a new language, coverage model and methodology. But that is indeed what happened with constrained random test pattern generation. Some good did come out of that change, but not as much as there could have been. Now that people realize the scale of those limitations, and how quickly they are growing, the industry is looking for an alternative. In addition, the problem is changing and
verification at the SoC level is becoming mandatory, a task beyond the
scope of constrained random.
I believe that relief is on its way and
one solution has been demonstrated to be an effective replacement. While
it is not certain that Breker is the company that will ultimately
succeed in this market, it is the first to have shown a fully working
solution that is being used to verify complex systems. Monkeys can
indeed write Shakespeare if you give them the right tools and the right guidance.
- Brian Bailey. Constrained random test struggles to live up to promises. SCDSource, March 2008.
- Mark Olen. Intelligent Testbench Automation Delivers 10X to 100X Faster Functional Verification. Verification Horizons Blog, June 2011.
- Brian Bailey. How many models does it take? Initially published on electronicsystemlevel.com, 2007.
- Brian Bailey. Zocalo Tech helps with Assertion Adoption. Techbites.com, November 2010.
- Brian Bailey. Innovations in Formal Verification. Chip Design Magazine – The ESL Edge, 2009.
- Brian Bailey. The Great EDA Coverup. EETimes, November 2007.
- Dennis Ramaekers and Gregory Faux. Graph-IC Verification. DVCon 2012.
Brian Bailey – keeping you covered