Perhaps the world of SoC verification gives the appearance of gradual and organized development. But the real story, as it emerges from an interview last week with Mentor Graphics DVT Division general manager John Lenyo, looks more like the modern view of evolution: sporadic change along many different axes, driven not so much by a master plan as by the crushing and impersonal weight of ever-growing complexity. Yet verification teams can, and must, manage that evolution.
Lenyo suggested a number of areas in which he saw shifts in SoC verification practice. Change is influencing the way design teams formulate design requirements. Shifts are also rippling through the process of verifying the design against those requirements and isolating bugs. And there is slow but irresistible change in the way teams manage data and assess the state of the verification process.
"We’re finding that in real life people aren’t very good at creating, documenting, and communicating requirements," Lenyo observed. Part of the problem, he said, is that design teams are geographically dispersed and not synchronized in time. But another factor is simply that turning knowledge of what you want to design into traceable, verifiable requirements is intellectually very difficult.
This can be especially true if the verification plan calls for assertions. "There is growing interest in assertions," Lenyo said. But getting from requirements—typically in English prose from many different sources—to assertions in a system-level language is still a mainly manual, highly skilled, and artful task.
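To make the gap concrete, take an invented requirement such as "grant must follow a request within four clock cycles unless the block is in reset." A verification engineer might hand-translate it into a SystemVerilog assertion along these lines (the module and signal names are purely illustrative, not drawn from any particular design):

    // Hypothetical requirement: "grant must follow req within 4 clock
    // cycles, unless the block is in reset."
    module req_grant_check(input logic clk, rst_n, req, grant);
      property p_req_grant;
        @(posedge clk) disable iff (!rst_n)
          req |-> ##[1:4] grant;   // grant within 1 to 4 cycles of req
      endproperty
      a_req_grant: assert property (p_req_grant)
        else $error("grant did not follow req within 4 cycles");
    endmodule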
Some recently announced tools can help, and in specific situations can even infer assertions directly from SystemC code. "For instance, at clock crossings we can automatically generate assertions about how the signal crosses the clock boundary," Lenyo explained.
"And we are gradually extending auto-generation to other functional elements of the design, such as formally examining unreachable or stuck states, or analyzing propagation of X-states." For more general functional assertions, "designers often know where the risks are," Lenyo said. So the task is to build assertions around the things the designers know might go wrong.
Testing the design is the next challenge. In recent years the fashion in verification has moved from directed test vector sets to constrained random (CR) testing. But Lenyo pointed out that even with tight constraints, random testing frequently overtests some areas of the design. "We have found that a bolt-on tool that simply removes redundancies from CR tests, based on a graph analysis, can achieve a 10X reduction in test time without reducing coverage," he said.
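A minimal constrained-random sketch shows where that redundancy comes from. All names here are invented; the point is that even with a constraint narrowing the legal space, repeated randomize() calls keep revisiting scenarios that add no new coverage, which is exactly what a graph-based pruning step would strip out:

    module cr_demo;
      class bus_txn;
        rand bit [3:0]  cmd;
        rand bit [31:0] addr;
        constraint c_legal {
          cmd inside {[0:5]};                  // only legal opcodes
          addr[1:0] == 2'b00;                  // word-aligned addresses
          cmd == 4'd5 -> addr < 32'h0000_1000; // config ops stay in a low region
        }
      endclass

      initial begin
        bus_txn t = new();
        repeat (1000) begin
          void'(t.randomize());
          // drive t onto the DUT interface here; many of these 1000 runs
          // will exercise scenarios already covered by earlier ones
        end
      end
    endmodule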
There are other efforts to improve speed as well. One is exploiting the synergy between multicore computing and the growth in SoC complexity. As SoCs get larger, Lenyo pointed out, they tend to become naturally partitionable into modules that work independently enough that their interactions can be modeled at the transaction level instead of the switch level. This makes it feasible to simulate different modules on different CPU cores. But testbench developers need to be in on the game. Lenyo warned that a test mode that created high traffic between modules could undermine the partitioning scheme and cause simulation times to explode.
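A sketch of the partition-friendly style he describes, with all names invented for illustration: two blocks exchange transaction objects through a mailbox rather than toggling pins every cycle, so each side's activity can be simulated largely independently and assigned to its own core:

    module tlm_link_demo;
      class pkt_txn;
        bit [31:0] payload;
        bit [7:0]  dest_id;
      endclass

      mailbox #(pkt_txn) link = new();

      // "Block A" side: produces whole transactions, not per-cycle pin values
      initial begin
        pkt_txn p = new();
        p.payload = $urandom();
        p.dest_id = 8'h01;
        link.put(p);
      end

      // "Block B" side: consumes transactions at its own pace
      initial begin
        pkt_txn p;
        link.get(p);
        // score or forward the packet here
      end
    endmodule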
Beyond multicore lie only a few alternatives to reduce verification time: static analyses such as formal verification, hardware acceleration, and moving to higher levels of abstraction. All are growing in use.
But however the verification team attacks the speed problem, another issue looms: managing the deluge of data. Ironically, as verification teams work harder to achieve coverage goals, the cacophony of output files and coverage metrics from the array of tools makes it increasingly difficult to understand the actual state of the effort. The solution, according to Lenyo, is not so much in the tools as in the process.
Fundamentals like using assertions, CR, and coverage metrics are big steps. The Open Verification Methodology (OVM) can improve both the level of abstraction and the degree of reuse in a verification flow. But productivity comes as much from how you apply the tools as from which tools you employ.
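For readers who have not met OVM, a bare-bones skeleton (illustrative only; a real environment adds agents, sequences, and a scoreboard) hints at where the reuse comes from: the same environment class is created through the factory by any number of tests, each overriding only what it needs.

    import ovm_pkg::*;
    `include "ovm_macros.svh"

    class soc_env extends ovm_env;
      `ovm_component_utils(soc_env)
      function new(string name, ovm_component parent);
        super.new(name, parent);
      endfunction
      // agents, scoreboard, and coverage collectors would be built here
    endclass

    class smoke_test extends ovm_test;
      `ovm_component_utils(smoke_test)
      soc_env env;
      function new(string name, ovm_component parent);
        super.new(name, parent);
      endfunction
      virtual function void build();
        super.build();
        env = soc_env::type_id::create("env", this);  // factory-based reuse
      endfunction
    endclass

In a top-level testbench, calling run_test("smoke_test") would then select and build the whole hierarchy by name, so swapping tests never requires touching the environment.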
"We are getting asked to do assessments of customer processes," Lenyo said. Research suggests if a verification team starts out with a set of tools and builds a verification process around it, they will usually increase their costs by 6 to 9 percent. But if they start by designing a process, and then populate it with tools, they can save up to 30 percent. This approach is unfamiliar to many verification engineers, but it may be necessary to moving forward.
An organized verification process makes explicit the tasks that normally hide behind notations like "… and then Susan spent all night going through the data." Once the tasks are explicit, they are subject to automation. So the next generation of tools may be less about simulation or formal analysis, and more about managing and merging data sets; scheduling tool, compute, and storage resources; and mining data sets to reach conclusions about the state of the project. If that list sounds familiar, it could be because what we are really seeing is the gradual convergence of verification with enterprise computing.
eewiz: I agree that cloud computing needs to be applied to EDA design methodologies. A few brave EDA vendors have dipped their toes in cloud computing waters, including Cadence. Just need to address the security and business model issues. Can my design stay mine and can my company make money in the clouds?
I feel the EDA tool usage pattern is best suited for a cloud computing model. This will help to speed up verification time & reduce the initial investment & EDA tool licensing cost for the companies. Any thoughts on this?
Enterprise computing and collaborative tools are indeed becoming essential in verification. @Ron: I might add that reduction in verification time can be achieved with reuse of pre-verified blocks (as a mandated discipline and not as an occasional approach) and building re-configurability (and de-configurability) of interfaces by design. These also have the added advantage of forcing some thinking in formalizing the specs and making decisions on fit for the target application and for the design intent.
A fundamental problem with design is that it's not in English. It is in machine code. The errors come in the translation and the inability to identify all the corner cases in English. English is inexact, takes too much time and is very tedious. All requirements should be in C-like forms to reduce ambiguity and equivocation. A tool that takes C-code and translates it into verification assertions is helpful, but the way people approach their C-code requirements needs to be structured to fit the interpreter. That’s the rub. Engineers need to know the interpreter’s peccadilloes and incorporate them into their code, not the other way around. An interpreter is only as good as the engineer that uses it. External auditing of the acceptability of C-language statements relative to the interpreter could be helpful, but on leading edge design, the audit team has designed the interpreter for problems they know about, not ones that are being presently created. Maybe stating the obvious, but leading edge designers are often left to the task of debugging the tools purchased to help them and training the auditors in the process. Brings into question, who should be paying whom?