I once overheard a verification manager say, "What did I do to deserve this living hell?" Well, that is the point. It's often what we do or don't do in terms of a methodology that ultimately pulverizes our ivory tower view of the perfect process and converts it into a towering inferno.
Now, before I lead you to verification salvation with my words of wisdom, I want to first encourage you to attend this year's Design Automation Conference, where a team of verification experts (or in some cases, evangelists) will debate this very issue in a panel Wednesday, June 15 entitled "Is methodology the highway out of verification hell?"
If you have ever attended a DAC panel session, you have probably been frustrated by that one panelist who seems to act just like the Energizer Bunny (that is, keeps going, and going, and going). Hence, to ensure I get my point across, I've decided to take the proactive measure of writing this article, which touches on a few important points that I might not have sufficient time to discuss during my presentation.
To begin, let's examine how the constraints on a design project have changed over the past ten years. Until recently, many design projects spanned multiple years (for example, processor chips and high-end ASICs). For these high-performance systems, the requirements specification generally remained static throughout the project, with only minor changes or corrections along the way.
What's even more intriguing is recognizing how yesterday's best practices (optimized for high-end, high-performance design) have influenced the way we do design in general today. In fact, if you examine many of today's processes, tools, and methodologies, you will find that they were initially developed to support leading-edge designs, often by internal CAD teams.
Fast forward ten years, and we see a very different world. Today, rapidly changing market demands, combined with fierce competition, are placing unprecedented constraints on the design process. In fact, where the process of specification was fairly static in the past, today the process has become extremely dynamic.
I've seen many cases where, to be competitive, a last-minute decision is made to add a new feature only days before tapeout. There are also cases where partially implemented features must be surgically removed from the design in order to meet the tapeout date. Hence, the ability of engineering teams to quickly validate changing requirements and feature sets, often in the middle of the development process, while still hitting tight schedules with high quality, is key to success in today's world.
Now, this is partially where the problem arises. As I mentioned, many of today's methodologies were designed to support yesterday's high-performance design constraints. Keep in mind that a three- to six-month design cycle was not a constraint for previous-generation designs. And unfortunately, many of the methodologies in use today are not optimized for short design cycles with rapidly changing requirements.
Yesterday's best practices
I can always quickly assess the verification maturity or sophistication of an engineering team by examining the processes and methodologies they have put in place. For example, teams I place lowest on the totem pole of my capability maturity model predominantly depend on Verilog and VHDL testbenches to generate directed tests (note that these methodologies were developed decades ago).
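To make this concrete, here is a minimal sketch of what such a directed test typically looks like. The 8-bit adder DUT, its port names, and the hand-picked vectors are purely illustrative assumptions on my part, not any particular team's code:

```systemverilog
// Hypothetical DUT, included only so the sketch is self-contained.
module adder (input [7:0] a, b, output [8:0] sum);
  assign sum = a + b;
endmodule

// Traditional directed test: every stimulus/response pair is written by hand.
module directed_tb;
  reg  [7:0] a, b;
  wire [8:0] sum;

  adder dut (.a(a), .b(b), .sum(sum));

  task check(input [7:0] ta, input [7:0] tb, input [8:0] expected);
    begin
      a = ta; b = tb;
      #10;  // allow combinational logic to settle
      if (sum !== expected)
        $display("FAIL: %0d + %0d = %0d (expected %0d)", ta, tb, sum, expected);
    end
  endtask

  initial begin
    // Each interesting case must be thought of, and coded, by the engineer.
    check(8'd0,   8'd0,   9'd0);
    check(8'd255, 8'd1,   9'd256);  // carry out
    check(8'd128, 8'd128, 9'd256);
    $finish;
  end
endmodule
```

The obvious limitation is that only the cases the engineer thinks to write ever get exercised.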
Moving up the totem pole, you will find organizations adopting random stimulus generation, possibly combined with simple coverage techniques. Continuing up the totem pole, you will find the use of assertions and coverage-driven verification (CDV), generally leveraging sophisticated functional coverage models.
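For contrast, here is an illustrative SystemVerilog sketch of what a constrained-random, coverage-driven environment might look like. The FIFO signal names, the transaction class, the constraint, and the coverage bins are all hypothetical, and the DUT hookup is omitted for brevity:

```systemverilog
// Constrained-random stimulus, a functional coverage model, and an assertion.
class fifo_txn;
  rand bit [7:0] data;
  rand bit       wr, rd;
  constraint no_idle { wr || rd; }  // bias stimulus away from idle cycles
endclass

module cdv_tb;
  bit clk;
  bit wr_en, rd_en, full, empty;    // 'full'/'empty' would be driven by the DUT
  bit [7:0] wr_data;

  // Functional coverage model: which operations occurred, in which FIFO
  // states, and (via the cross) in which combinations of the two.
  covergroup fifo_cov @(posedge clk);
    op         : coverpoint {wr_en, rd_en};
    fifo_state : coverpoint {full, empty};
    op_x_state : cross op, fifo_state;
  endgroup
  fifo_cov cov = new();

  // Assertion: a write must never be accepted while the FIFO is full.
  assert property (@(posedge clk) !(wr_en && full))
    else $error("write attempted while FIFO is full");

  always #5 clk = ~clk;

  initial begin
    fifo_txn t = new();
    repeat (1000) begin
      @(negedge clk);
      void'(t.randomize());         // constrained-random stimulus
      wr_en   = t.wr;
      rd_en   = t.rd;
      wr_data = t.data;
    end
    $finish;
  end
endmodule
```

The key difference is that the simulator generates the stimulus, while the coverage model, rather than the engineer's imagination, measures which behaviors have actually been exercised.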
Note that many of these so-called more advanced techniques have been around since the mid-to-late '90s, and they were common practice on the high-performance design teams I worked with during that period. Yet even with the best of these approaches, there is still an inherent flaw that each of these progressively more advanced methodologies attempts to patch: the attempt to validate all possible behaviors of a design, or what can be called achieving 100% actual coverage, using dynamic simulation.
I would argue that industry statistics on design success rates continue to shed light on the inadequacies and limitations of traditional verification methodologies. Truth be told, in many other disciplines (such as manufacturing), similar success or failure rates would be unacceptable.
Now don't get me wrong. Simulation has been an important component of functional verification for decades, and it will continue to be. Still, it is important to recognize the inherent limitations of simulation. In fact, there is a common myth among engineers that the process of simulation scales. In reality, it does not.
Note that many of today's design blocks are approaching the size and complexity of the chips of years past for which these methodologies were originally developed. To dig deeper into this myth, let's examine the inherent limitations of simulation, which can be broken down into three components: a late start to verification, stimulus generation, and simulation runtime.