Functional verification has become the congestion point for many designs and the bane of many companies trying to sell their IP. While it may appear as though there has been a plethora of new products over the past decade, I have often said that many of those so-called advancements were superficial and that they were merely delaying the inevitable. Now it is time for me to stop saying what is wrong and to start talking about a better direction for the future, one that solves many of the fundamental problems. In this paper, I will review some of my past comments, define what is missing from existing verification environments and, finally, discuss a way in which many of those problems can be solved or, at the very least, improved.
Constrained random test pattern generation was hailed as a great advance over the previous mainstay of functional verification - directed testing. You will not find me disagreeing with that statement in general, but not for the reasons you might expect. The big advance was that it crystallized the need for, and the advantage of, a second reference model of the system. With directed testing, the second model was distributed among the test cases in the form of the expected results, and most tests had to be updated whenever a design or specification change was made. With a centralized reference model, only one model is affected. I will also admit that there is some value in randomization. Being able to explore cases that were not originally considered can bring about some surprises and find bugs that would have been missed by directed tests. Constrained random was also touted as a way to trade off the use of highly skilled people for automation, but it turns out that this comes at a high price. The size of server farms and the number of simulator licenses per engineer have grown out of control. Mentor Graphics, one of the principal providers of simulation products based on constrained random, published a study showing that constrained random techniques may produce between 10 and 100 times more vectors than are actually necessary to obtain the desired coverage.
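To make that redundancy concrete, here is a minimal sketch in Python of a constrained random generator driving a small functional coverage model until every bin is hit. The transaction fields, size classes and coverage bins are hypothetical, invented purely for illustration, not taken from any particular tool or methodology:

```python
import random

# Hypothetical coverage model: we want to observe every (opcode, size-class) pair.
OPCODES = ["READ", "WRITE", "FLUSH", "NOP"]
SIZE_CLASSES = ["small", "medium", "large"]
GOAL = {(op, sz) for op in OPCODES for sz in SIZE_CLASSES}  # 12 coverage bins

def random_transaction(rng):
    """One constrained-random transaction: fields are random but constrained
    to legal values (the role of the 'environment model')."""
    op = rng.choice(OPCODES)
    size = rng.randint(1, 1024)
    sz = "small" if size <= 64 else "medium" if size <= 512 else "large"
    return op, sz

rng = random.Random(42)
hit, draws = set(), 0
while hit != GOAL:          # keep generating until coverage closes
    hit.add(random_transaction(rng))
    draws += 1

# A directed approach needs exactly 12 vectors to hit 12 bins;
# random generation typically needs many times that.
print(f"bins: {len(GOAL)}, random draws needed to close coverage: {draws}")
```

Running this with different seeds shows the redundancy the study describes: closing the last few rarely-hit bins is what inflates the vector count far beyond the number of bins.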
New languages have been developed in an attempt to make the description of this methodology easier. The early languages, such as Vera, were revamped into SystemVerilog, and that language has been extended to include assertions, coverage and other parts of the verification environment. We have seen the emergence of a plethora of verification methodologies, starting with eRM and VMM and ending with UVM. Each of these methodologies creates rules and guidelines for improving the construction of testbenches based on constrained random techniques. They ensure that testbench components can be interchanged and reused, and reportedly decrease the time it takes to create the testbench, but engineers still have to generate and execute the thousands of testcases that result, meaning that the end-to-end reduction in time and effort is probably quite minimal.
So, thanks to constrained random, we now have a centralized reference model, but unfortunately that is not enough. In my paper titled “How many models does it take,” first published in 2007, I came to the conclusion that it takes at least 2 1/2 models and could take as many as 4. In addition to the implementation model and the reference model, we need a coverage model to indicate when we are done, and possibly an environment model to ensure that only valid testcases are generated. To add to the problems, we still cannot tell when the coverage model is complete except by falling back on the code coverage metrics that were used with directed testing.
One of the points I made about constrained random techniques is that as designs get bigger and more complex, the power and effectiveness of the technique goes down. This is related to the sequential depth of the design, which makes it less and less likely that constrained random will come up with anything resembling a useful testcase. Think of the old adage - “Given enough monkeys and enough time, one of them will write Shakespeare.” Constrained random is the same and, as designs get larger, the chances of the monkeys writing even something intelligible become remote. Now, lest you think I am a naysayer, things are improving, and the path ahead is becoming clear.
While constrained random coerced the industry into developing a reference model, that reference model is written in the same basic way as the design model. It is a procedural model, often written at a higher level of abstraction in languages such as C, C++ or SystemC. Verification relies on two models being developed independently and then compared. It doesn’t matter whether the comparison is done using simulation, with a sampling of possible stimulus patterns, or using formal verification, which spans the entire state space. However, I believe there are additional benefits when the two models are written in different ways. An example of a different way can be seen with assertions. An assertion defines a cause and an effect, without prescribing the many possible paths that could implement that relation. The languages used to define assertions are based on declarative programming concepts rather than procedural ones.
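To show the declarative flavor, here is a sketch in Python of a simple handshake property checked over a recorded trace. The signal names and the three-cycle window are hypothetical; the property states only the cause-and-effect relation (in SVA it would read roughly `req |-> ##[1:3] gnt`), saying nothing about how the design produces the grant:

```python
# A declarative property over a trace: every 'req' must be followed by a
# 'gnt' within `window` cycles. The trace is a list of per-cycle dicts.
def check_req_gnt(trace, window=3):
    """Return the first cycle at which the property fails, or None."""
    for t, cycle in enumerate(trace):
        if cycle.get("req"):
            if not any(c.get("gnt") for c in trace[t + 1 : t + 1 + window]):
                return t  # counterexample: the relation is violated here
    return None

good = [{"req": 1}, {}, {"gnt": 1}, {}, {"req": 1}, {"gnt": 1}]
bad  = [{"req": 1}, {}, {}, {}, {}]  # gnt never arrives

print(check_req_gnt(good))  # None: no violation
print(check_req_gnt(bad))   # 0: violation at cycle 0
```

The checker validates every path through the trace against one declarative statement, which is exactly the separation of "what must hold" from "how it is implemented" that procedural reference models lack.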
Assertions have not taken off in the way the industry expected. One of the problems, in my opinion, is the complexity of the languages, such as the Property Specification Language (PSL) and SystemVerilog Assertions (SVA). Companies have tried to lower this adoption barrier by providing aids such as graphical representations.
The adoption of formal verification is definitely advancing, and sales of such tools are rising faster than those of other verification techniques. Formal not only finds bugs that simulation-based verification misses, but can do so in a more efficient manner. Additionally, when a counterexample to a proof is needed, the tools can provide the optimal stimulus to demonstrate the problem. Using techniques such as design abstraction, and by focusing the tools on specific problem areas, formal tools can now handle designs much larger than the block level with which they have traditionally been associated.
One thing that both formal and the use of assertions have in common is that stimulus is not the focus. While stimulus can activate a bug, that is only the beginning of the problem: unless the bug is manifested in some kind of failure that is detected, it continues to lie dormant. Constrained random doesn’t understand this concept, but formal and assertions do. Formal verification doesn’t care about stimulus at all; it cares about finding differences that are manifested on the observable outputs of a design and, when differences are found, it generates the stimulus necessary to demonstrate the problem. This helps get to the root cause of a bug much quicker.
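The stimulation/propagation/detection distinction can be sketched in Python with a toy design containing an injected fault. The 8-bit adder and its carry bug are entirely hypothetical, chosen only to show how an activated bug stays invisible until it propagates to an output something actually checks:

```python
# Toy DUT: 8-bit adder with an injected bug in the carry logic.
def buggy_add(a, b):
    s = (a + b) & 0xFF
    carry = 1 if (a + b) > 0xFF else 0
    if a == 0x80 and b == 0x80:
        carry = 0          # injected bug: activated only by this one input
    return s, carry

def dut_output(a, b, observe_carry):
    """Model of what the testbench actually samples from the DUT."""
    s, c = buggy_add(a, b)
    # If the environment never samples carry, the corrupted value never
    # reaches a checked output, so no checker can detect the bug.
    return (s, c) if observe_carry else (s,)

# Stimulation: the failing input is applied ...
activated = dut_output(0x80, 0x80, observe_carry=False)
print(activated)   # (0,) -- indistinguishable from correct behavior
# ... detection happens only once the faulty signal is observed:
detected = dut_output(0x80, 0x80, observe_carry=True)
print(detected)    # (0, 0) -- a checker expecting (0, 1) flags the bug
```

Constrained random scores the first run as a pass even though the bug was activated; a formality-style analysis would instead start from the observable difference on the carry output and work back to the stimulus that exposes it.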
However, few designs are going to be verified using formal verification alone. We have to tackle the problems created by constrained random verification, and it would be preferable if whatever replaces it inherently understood that verification requires stimulation, propagation and detection.