The next generation of verification tools may be less about simulation or formal analysis, and more about managing and merging data sets, scheduling resources and mining data sets. If that list sounds familiar, it could be because what we are really seeing is the gradual convergence of verification with enterprise computing.
Perhaps the world of SoC verification gives the appearance of gradual and organized development. But the real story, as it emerges from an interview last week with Mentor Graphics DVT Division general manager John Lenyo, looks more like the modern view of evolution: sporadic change along many different axes, driven not so much by a master plan as by the crushing and impersonal weight of ever-growing complexity. Yet verification teams can, and must, manage that evolution.
Lenyo suggested a number of areas in which he saw shifts in SoC verification practice. Change is influencing the way design teams formulate design requirements. Shifts are also rippling through the process of verifying the design against those requirements and isolating bugs. And there is slow but irresistible change in the way teams manage data and assess the state of the verification process.
"We’re finding that in real life people aren’t very good at creating, documenting, and communicating requirements," Lenyo observed. Part of the problem, he said, is that design teams are geographically dispersed and not synchronized in time. But another factor is simply that turning knowledge of what you want to design into traceable, verifiable requirements is intellectually very difficult.
This can be especially true if the verification plan calls for assertions. "There is growing interest in assertions," Lenyo said. But getting from requirements—typically in English prose from many different sources—to assertions in a system-level language is still a mainly manual, highly skilled, and artful task.
Some recently announced tools can help, and in specific situations can even infer assertions from SystemC code. "For instance, at clock crossings we can automatically generate assertions about how the signal crosses the clock boundary," Lenyo explained.
"And we are gradually extending auto-generation to other functional elements of the design, such as formally examining unreachable or stuck states, or analyzing propagation of X-states." For more general functional assertions, "designers often know where the risks are," Lenyo said. So the task is to build assertions around the things the designers know might go wrong.
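The kind of clock-domain-crossing analysis Lenyo describes can be pictured as a pass over the netlist that emits an assertion for every unguarded crossing. The following is a minimal sketch; the signal model and the generated assertion text are assumptions for illustration, not any vendor's actual tool flow or output format.

```python
# Illustrative sketch: auto-generating clock-domain-crossing (CDC)
# assertions from a simple netlist description. The Signal data model
# and the SVA template below are hypothetical.

from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    src_clock: str      # domain in which the signal is driven
    dst_clock: str      # domain in which it is sampled
    synchronized: bool  # True if a synchronizer already guards the path

def generate_cdc_assertions(signals):
    """Emit an assertion string for each unsynchronized clock crossing."""
    assertions = []
    for s in signals:
        if s.src_clock != s.dst_clock and not s.synchronized:
            # Flag the crossing and check the sampled value is never X.
            assertions.append(
                f"assert property (@(posedge {s.dst_clock}) "
                f"!$isunknown({s.name}));  "
                f"// unsynchronized crossing {s.src_clock} -> {s.dst_clock}"
            )
    return assertions
```

A real tool works from the elaborated design rather than a hand-built signal list, but the structure is the same: mechanical pattern-matching on the design generates assertions no one had to write by hand.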
Testing the design is the next challenge. In recent years the fashion in verification has moved from directed test vector sets to constrained random (CR) testing. But Lenyo pointed out that even with tight constraints, random testing frequently overtests some areas of the design. "We have found that a bolt-on tool that simply removes redundancies from CR tests, based on a graph analysis, can achieve a 10X reduction in test time without reducing coverage," he said.
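The principle behind that redundancy removal can be shown with a simple sketch: keep a test only if it contributes coverage points no already-kept test has hit. The coverage model below (each test mapped to a set of covered bins) is a stand-in for the graph analysis Lenyo describes, not the actual algorithm.

```python
# Illustrative sketch of redundancy removal from a constrained-random
# test set: greedily keep tests that add new coverage, drop the rest.

def prune_redundant_tests(tests):
    """tests: list of (name, set_of_covered_bins) pairs.
    Returns (kept_test_names, total_covered_bins); total coverage
    is preserved while fully subsumed tests are discarded."""
    covered = set()
    kept = []
    # Consider the highest-coverage tests first so that small tests
    # whose bins are already covered get dropped.
    for name, bins in sorted(tests, key=lambda t: -len(t[1])):
        if not bins <= covered:     # test hits at least one new bin
            kept.append(name)
            covered |= bins
    return kept, covered
```

Greedy set-cover heuristics like this are not optimal, but they are cheap enough to bolt onto an existing CR flow, which matches the "bolt-on tool" framing in the quote.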
There are other efforts to improve speed as well. One is exploiting the synergy between multicore computing and the growth in SoC complexity. As SoCs get larger, Lenyo pointed out, they tend to become naturally partitionable into modules that work independently enough that their interactions can be modeled at the transaction level instead of the switch level. This makes it feasible to simulate different modules on different CPU cores. But testbench developers need to be in on the game. Lenyo warned that a test mode that created high traffic between modules could undermine the partitioning scheme and cause simulation times to explode.
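Lenyo's warning about traffic-heavy test modes suggests a simple sanity check a testbench developer could run: measure what fraction of transactions cross partition boundaries, since every crossing forces synchronization between cores. The transaction-log format and threshold here are hypothetical, for illustration only.

```python
# Sketch: estimating whether a test scenario will defeat a per-module
# partitioning scheme. A high cross-partition ratio predicts poor
# multicore simulation scaling.

def cross_partition_ratio(transactions, partition_of):
    """transactions: iterable of (src_module, dst_module) pairs.
    partition_of: dict mapping module name -> core index.
    Returns the fraction of transactions that cross cores."""
    total = crossing = 0
    for src, dst in transactions:
        total += 1
        if partition_of[src] != partition_of[dst]:
            crossing += 1
    return crossing / total if total else 0.0
```

If the ratio climbs toward 1.0 for a given test mode, the partitioning buys little, and simulation time can explode exactly as Lenyo warns.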
Beyond multicore lie only a few alternatives to reduce verification time: static analyses such as formal verification, hardware acceleration, and moving to higher levels of abstraction. All are growing in use.
But however the verification team attacks the speed problem, another issue looms: managing the deluge of data. Ironically, as verification teams work harder to achieve coverage goals, the cacophony of output files and coverage metrics from the array of tools makes it increasingly difficult to understand the actual state of the effort. The solution, according to Lenyo, is not so much in the tools as in the process.
Fundamentals like using assertions, CR, and coverage metrics are big steps. The Open Verification Methodology (OVM) can improve both the level of abstraction and the degree of reuse in a verification flow. But productivity comes as much from how you apply the tools as from which tools you employ.
"We are getting asked to do assessments of customer processes," Lenyo said. Research suggests that if a verification team starts out with a set of tools and builds a verification process around them, they will usually increase their costs by 6 to 9 percent. But if they start by designing a process, and then populate it with tools, they can save up to 30 percent. This approach is unfamiliar to many verification engineers, but it may be necessary for moving forward.
An organized verification process makes explicit the tasks that normally hide behind notations like "… and then Susan spent all night going through the data." Once the tasks are explicit, they are subject to automation. So the next generation of tools may be less about simulation or formal analysis, and more about managing and merging data sets; scheduling tool, compute, and storage resources; and mining data sets to reach conclusions about the state of the project. If that list sounds familiar, it could be because what we are really seeing is the gradual convergence of verification with enterprise computing.