Abstract Functional verification has become the congestion point for many designs and the bane of many companies trying to sell their IP. While it may appear as though there has been a plethora of new products over the past decade, I have often said that many of those so-called advancements were superficial and that they were only delaying the inevitable. Now it is time for me to stop saying what is wrong and to start talking about a better direction for the future, one that solves many of the fundamental problems. In this paper, I will review some of my past comments, define what is missing from existing verification environments and, finally, discuss a way in which many of those problems can be solved or, at the very least, improved.
Introduction Constrained random test pattern generation was hailed as a great advance over the previous mainstay of functional verification, directed testing. You will not find me disagreeing with that statement in general, but not for the reasons you might expect. The big advance was that it crystallized the need for, and the advantage of, a second reference model of the system. With directed testing, the second model was distributed among the test cases in the form of expected results, and most tests had to be updated whenever a design or specification change was made. With a centralized reference model, only one model is affected. I will also admit that there is some value in randomization. Being able to explore cases that were not originally considered can bring about some surprises and find bugs that would have been missed by directed tests. Constrained random was also touted as a way to trade the use of highly skilled people for automation, but it turns out that the trade comes at a high price. The size of server farms and the number of simulator licenses per engineer have grown out of control. Mentor Graphics, one of the principal providers of simulation products based on constrained random, published a study showing that constrained random techniques may produce between 10 and 100 times more vectors than are actually necessary to obtain the desired coverage.
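To make the mechanics concrete, here is a minimal SystemVerilog sketch of the constrained random flow just described: a transaction class is randomized under constraints, driven, and checked against a single centralized reference model. The class, constraint and function names are purely illustrative, and the reference model and DUT are stand-in functions.

class bus_txn;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  // Constraints keep the randomly generated stimulus legal.
  constraint addr_range_c { addr inside {[8'h10 : 8'hEF]}; }
  constraint data_align_c { data[1:0] == 2'b00; }
endclass

module tb;
  // Stand-in for the centralized reference model: predicts the expected response.
  function automatic bit [31:0] reference_model(bus_txn t);
    return t.data ^ {24'h0, t.addr};   // placeholder behaviour
  endfunction

  // Stand-in for driving the DUT and sampling its response.
  function automatic bit [31:0] drive_dut(bus_txn t);
    return t.data ^ {24'h0, t.addr};   // a real DUT would live here
  endfunction

  initial begin
    bus_txn txn = new();
    repeat (1000) begin
      if (!txn.randomize()) $fatal(1, "randomize() failed");
      if (drive_dut(txn) !== reference_model(txn))
        $error("Mismatch for addr %0h", txn.addr);
    end
  end
endmodule

The point is not the checker itself; it is that the expected behaviour lives in one place (reference_model) rather than being scattered across every directed test.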
New languages have been developed in an attempt to make the description of this methodology easier. The early languages, such as Vera, were revamped into SystemVerilog, and that language has been extended to include assertions, coverage and other parts of the verification environment. We have seen the emergence of a plethora of verification methodologies, starting with eRM and VMM and ending with UVM. Each of these methodologies creates rules and guidelines for improving the construction of testbenches based on constrained random techniques. They ensure that testbench components can be interchanged and reused, and they reportedly decrease the time it takes to create the testbench, but engineers still have to generate and execute the thousands of testcases that will be produced, meaning that the end-to-end reduction in time and effort is probably quite minimal.
So, thanks to constrained random, we now have a centralized reference model, but unfortunately that is not enough. In my paper titled “How many models does it take,” first published in 2007, I came to the conclusion that the answer is at least 2 1/2 models and could be as many as 4. In addition to the implementation model and the reference model, we need a coverage model to indicate when we are done, and possibly an environment model to ensure that only valid testcases are generated. To add to the problems, we still cannot tell when the coverage model is complete except by falling back on the code coverage metrics that were used with directed testing.
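For readers who have not written one, the coverage model referred to above typically looks something like the following SystemVerilog sketch. The signals, bins and cross are hypothetical, and note that nothing in the covergroup itself can tell us whether these are the right bins, which is exactly the completeness problem just described.

module coverage_model_sketch (input logic clk);
  typedef enum logic [1:0] {DATA, CTRL, MGMT} pkt_kind_e;
  logic [10:0] pkt_len;
  pkt_kind_e   pkt_kind;

  // Hypothetical coverage model for a small packet interface. The bins are
  // only as good as the engineer who chose them; the covergroup cannot prove
  // that the coverage model itself is complete.
  covergroup pkt_cg @(posedge clk);
    cp_len : coverpoint pkt_len {
      bins small  = {[1:16]};
      bins medium = {[17:256]};
      bins jumbo  = {[257:1500]};
    }
    cp_kind    : coverpoint pkt_kind;      // one automatic bin per enum value
    len_x_kind : cross cp_len, cp_kind;    // the corner cases we think matter
  endgroup

  pkt_cg cg = new();
endmodule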
One of the points I made about constrained random techniques is that as designs get bigger and more complex, the power and effectiveness of the technique goes down. This is related to the sequential depth of the design, which makes it less and less likely that constrained random will come up with anything resembling a useful testcase. Think of the old adage: “Given enough monkeys and enough time, one of them will write Shakespeare.” Constrained random is the same and, as designs get larger, the chance of the monkeys writing even something intelligible becomes vanishingly small. Now, lest you think I am a naysayer, things are improving, and the path ahead is becoming clear.
Good attempts While constrained random coerced the industry into developing a reference model, that reference model is written in the same basic way as the design model. It is a procedural model, often written at a higher level of abstraction in languages such as C, C++ or SystemC. Verification relies on two models being developed independently and then compared. It doesn’t matter whether the comparison is done using simulation, which samples the space of possible stimulus patterns, or using formal verification, which spans the entire state space. However, I believe there are additional benefits when the two models are written in different ways. An example of a different way can be seen with assertions. An assertion defines a cause-and-effect relation, and there may be a number of possible paths that implement that relation. The languages used to define assertions are based on declarative programming concepts rather than procedural ones.
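As a minimal illustration of that declarative style, the following SVA sketch states a cause-and-effect relation for an assumed request/grant handshake (the signal names and the four-cycle bound are hypothetical); it says nothing about how the arbiter is implemented, only what must hold.

module arb_props (input logic clk, rst_n, req, gnt);
  // Declarative cause and effect: every request must be granted
  // within 1 to 4 cycles, however the arbiter is implemented.
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  a_req_gets_gnt : assert property (p_req_gets_gnt)
    else $error("req was not granted within 4 cycles");

  // The same property can double as a functional coverage target.
  c_req_gets_gnt : cover property (p_req_gets_gnt);
endmodule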
Assertions have not taken off in the way that the industry expected. One of the problems, in my opinion, is the complexity of the languages, such as the Property Specification Language (PSL) and SystemVerilog Assertions (SVA). Companies have tried to lower this adoption barrier by providing aids such as graphical representations.
The adoption of formal verification is definitely advancing, and sales of such tools are rising faster than those of other verification techniques. Formal not only finds bugs that simulation-based verification misses, but can do so in a more efficient manner. Additionally, when a counterexample for a proof is needed, the tools can provide the optimal stimulus to demonstrate the problem. Using techniques such as design abstraction and the ability to focus the tools on specific problem areas, formal tools can now handle designs much larger than the block level with which they have traditionally been associated.
One thing that both formal and the usage of assertions have in common is that stimulus is not the focus. While stimulus can activate a bug, that is only the beginning of the problem. Unless the bug is manifested in some kind of failure that is detected, it continues to lie dormant. Constrained random doesn’t understand this concept, but formal and assertions do. Formal verification doesn’t care about stimulus at all; it cares about finding differences that are manifested on the observable outputs of a design and, when differences are found, it generates the stimulus necessary to demonstrate the problem. This helps get to the root cause of a bug much more quickly.
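To make the distinction concrete, here is a small hypothetical SystemVerilog fragment: stimulus with op set to 2'b11 activates the bug, the wrong value only propagates to an observable output when en is high, and it is only detected if something is actually checking that output.

module alu_slice (
  input  logic [1:0] op,
  input  logic [7:0] a, b,
  input  logic       en,
  output logic [7:0] result
);
  logic [7:0] raw;
  always_comb begin
    unique case (op)
      2'b00: raw = a + b;
      2'b01: raw = a - b;
      2'b10: raw = a & b;
      2'b11: raw = a & b;   // BUG: should be a | b (activated only when op==2'b11)
    endcase
    result = en ? raw : '0; // without en, the wrong value never propagates
  end
endmodule

Random stimulus that happens to hit op==2'b11 with en low activates the bug but never propagates it, and even a propagated error goes unnoticed unless a checker or reference model is comparing result.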
However, there are going to be few designs that are verified using formal verification alone. We have to tackle the problems created by constrained random verification, and it would be preferable if whatever replaces it inherently understands that verification requires stimulation, propagation and detection.
Sounds similar to what we have done with VHDL's OSVVM methodology (see http://osvvm.org/). OSVVM is an intelligent testbench methodology that focuses on functional coverage modeling. It also allows the mixing of coverage-driven randomization (we built randomization directly into the coverage model to do this), constrained random, directed, algorithmic, and file-based test methods, and it can be adopted as needed without throwing out your current VHDL testbenches.
OSVVM consists of an open-source set of VHDL packages that use protected types (VHDL-2002) and is supported by current VHDL simulation tools that support them (I have used it on both Mentor's and Aldec's simulators).
Does Breker support VHDL?
"Constrained Random" (CR) is just a nice way of saying we are going to test the corner cases we can think of and see if we can get code coverage on the rest with some random stuff.
Most logic designs should be verifiable with formal methods (and assertions are essential to that); CR is a fallback. The benefits of CR would be more pronounced if the underlying DUT simulation actually included all the effects of manufacturing, like silicon variability and analog effects (thermal, power management, crosstalk, etc.). The SystemVerilog committees have studiously avoided adding support for that, so the benefits of CR are being somewhat oversold.
I still believe a great opportunity was missed with SystemVerilog. It's basically been a huge distraction; it's yet another slightly better procedural language when we already had two competent OO languages in C++ & Java.
What's needed is a radically different approach. We need to put our efforts into declarative requirements languages which are divorced (orthogonal, if you prefer) from the procedural implementation, and develop tool suites that parse the requirements to generate the procedural tests.
In a sense that's what PSL is about; however, all the current formal languages suffer from a major, unresolved shortcoming: we don't currently have a way to know whether we have enough 'rules' to fully describe a behavior. Put another way, we don't know if the boundary we've described for the operating envelope is continuous, or if it has holes.
That's where we ought to have put our efforts instead of allowing ourselves to be distracted by small productivity gains when we really needed a quantum leap.
Assertion checking has had a problem with tool support and methodology maturity. Especially given the cryptic assertion syntax, good support for checking the assertions themselves is needed.
In general, more effort is being thrown into verification at RTL to make up for weaknesses earlier in the design flow.
you had me at:
"Enough of the sideshows – it’s time for some real advancement in functional verification!"
and sealed the deal with:
"It is ... totally bizarre when it is an inherently bad technology that gets adopted, along with requiring a new language, coverage model and methodology. But that is indeed what happened with constrained random test pattern generation."
I'm sure you'll have people chiming in on this, but my additional take is that the quality of the code we write is what's driving the need for increasingly sophisticated/elaborate tools (i.e. intelligent testbench tools don't make much sense when your design and testbench code is ill-equipped and/or garbage). "Trying harder" isn't going to help (I've received that advice before), so we need new ways to write code. A technique we can start using is test-driven development: catch the bugs as they are written instead of creating a bunch of bugs and then finding/fixing them later. Write 10-20 lines of code at a time, test them to make sure they're right, then move on. For software folks TDD is old hat. In hardware, TDD would be a necessary innovation.
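To sketch what that might look like in hardware, here is a hypothetical SystemVerilog example of the write-a-little, test-a-little style: a handful of RTL lines followed immediately by a directed self-check, before any more code is written.

// Step 1: the few lines of RTL just written: a saturating 4-bit counter.
module sat_counter (
  input  logic       clk, rst_n, inc,
  output logic [3:0] count
);
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)                    count <= '0;
    else if (inc && count != 4'hF) count <= count + 1;
endmodule

// Step 2: the test written immediately afterwards, before any more RTL.
module sat_counter_tb;
  logic clk = 0, rst_n = 0, inc = 0;
  logic [3:0] count;

  sat_counter dut (.*);
  always #5 clk = ~clk;

  initial begin
    @(negedge clk) rst_n = 1;
    assert (count == 0) else $error("reset failed");

    inc = 1;
    repeat (20) @(negedge clk);   // more than enough cycles to saturate
    assert (count == 4'hF) else $error("counter did not saturate");

    $display("sat_counter: all checks passed");
    $finish;
  end
endmodule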