It’s that time of year again, as Design Automation Conference (DAC) begins in a few short days in Fog City, otherwise known as San Francisco. Gary Smith EDA has weighed in once again with a comprehensive list of what to see at this year’s conference as well as his Wallcharts that categorize different types of EDA tools.
Verification tools are well represented on both lists and fall into categories that include design debug and transaction-based emulation. A Wallchart category labeled “Electronic System Level (ESL) Verification” lists a sub-category called “Intelligent Testbench” that begs for a definition.
EDA DesignLine Editor Brian Bailey defines the intelligent testbench as a language, tool or methodology that allows complete verification of the system-level design without performing more verification than necessary. That's a definition broad enough to cover most of the tools lumped into this category.
Lifting the fog
The usual concept of the intelligent testbench is some sort of layer on top of a traditional testbench following the Universal Verification Methodology (UVM) or a similar constrained-random approach. As Bailey has noted, constrained-random stimulus fails when it comes to efficiency, while directed testing fails when it comes to completeness. The typical intelligent testbench tool tries to remedy this by directing constrained-random stimulus in a way that converges more quickly to the desired coverage results. Most of the EDA tools grouped in Gary Smith EDA's category work this way.
Even with this “intelligence,” existing methodologies do not scale well to the system level, especially for complex system-on-chip (SoC) designs with embedded processors. No testbench-based methodology can do an adequate job of full-SoC verification since there’s no link between the testbench and the software running on the embedded processors. Even UVM, while popular and effective for verifying blocks and subsystems, does not provide that connection.
Verification teams try to tackle full-SoC verification with two additional steps beyond the intelligent testbench. The first is hand-written, directed C test cases running on the SoC's embedded processors. These exercise only basic SoC functionality, since it's almost impossible to manually write code for all the types of parallelism and resource conflicts that occur in a complex chip. The second is running production code on the processors in simulation or emulation. While an important step, production software tends to be well behaved and does not stress corner cases. Moreover, this step can happen only once the software is complete, usually after tapeout.
The missing step is automatically generated self-verifying C test cases running on embedded processors in simulation linked to a minimal testbench for coordination with the SoC’s inputs and outputs. These multi-threaded test cases are not well-behaved; they are generated specifically to produce conflicts for SoC resources and hit corner cases. They can cycle through power modes and exercise application-level scenarios, simulating efficiently since they run on “bare metal” with no operating system required.
This approach is not an intelligent testbench. Of course, it has intelligence, but verification is driven and controlled by internal embedded processors rather than an external testbench. It does not fit into Gary Smith EDA’s “ESL Co-Verification” category either, which involves running production software on an ESL or register transfer level (RTL) model of the SoC. Although self-verifying tests can run on ESL architectural models as well as RTL code, they don’t qualify for the “ESL Test & Verification” category.
What’s left to do is to create a new subcategory called “SoC Verification” to highlight the special verification challenges posed by SoCs with embedded processors. Tools in this category, such as Breker’s TrekSoC, take a fundamentally different approach to verification by automating self-verifying embedded C test generation, intimately linking hardware and software together. They offer visualization of full-SoC system operation and provide coverage metrics at the level of system scenarios.
Sunny days ahead
With no fog in the forecast, DAC starts bright and early Monday, June 4, at the Moscone Center and offers the opportunity to see a host of new verification tools. SoC Verification will be demonstrated at the Breker booth (#2501). Brian Bailey, who moonlights as a verification expert when not editing EDA DesignLine, will be on hand to offer his perspective on SoC verification at various times during the week.
And, if you happen to see Gary Smith at DAC, give him your vote for a new category of verification tools: SoC Verification. Verifying that the SoC design works as intended is a big deal and the industry needs to clear the fog and rise to the challenge, the system-level challenge that is.
About the author
Adnan Hamid is co-founder and CEO of Breker Verification Systems. Prior to starting Breker in 2003, he worked at AMD as department manager of the System Logic Division. Previously, he served as a member of the consulting staff at AMD and Cadence Design Systems. Hamid graduated from Princeton University with Bachelor of Science degrees in Electrical Engineering and Computer Science and holds an MBA from the McCombs School of Business at The University of Texas.