Let's look at a variety of "Design for Verification" techniques aimed at reducing bugs and making designs easier to verify.
After years of innovation in the verification of increasingly complex designs, should we now turn our attention to the design process itself? Since starting in verification in the early 1990s, I have witnessed the introduction of code and functional coverage, constrained random, assertions, numerous metrics, formal verification, various languages and methodologies, and, more recently, Requirements Driven Verification. However, we have never considered in detail how to reduce the number of bugs introduced into designs in the first place, or how to make those bugs easier to find.
Assertion-based verification is well established, and assertions are potentially our highest return on investment (ROI) in verification: they add checks that help detect failures and reduce the time needed to root-cause them. However, we can increase that ROI even further through assertion-driven design. As the industry has moved to specialized verification techniques owned by specialist verification engineers, a "design bring-up gap" has opened: designers write RTL but have no simulation environment of their own. Many designers close the gap with a private test bench that is eventually thrown away.
However, if designers wrote assertions during RTL development, formal tools could enable them to generate the simulations they need. RTL developers also make many assumptions that are captured in comments at best or forgotten at worst; by writing properties, they can capture those assumptions as executable specifications that can eventually become formal assumptions or assertions. This creates a training requirement, but too many training courses focus on the assertion language rather than on the assertions that add the most value.
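To illustrate the idea, here is a behavioral sketch in Python (not SVA, which is where such checks would live in real RTL) of a hypothetical two-master arbiter. The designer's assumptions, which might otherwise survive only as comments, are captured as executable checks that fire on every arbitration decision:

```python
class Arbiter:
    """Toy fixed-priority arbiter for two masters (hypothetical example).

    The design assumptions -- "no grant without a request" and "at most
    one grant at a time" -- are executable checks, not just comments.
    In RTL these would be written as assertion properties.
    """

    def __init__(self):
        self.grant = [False, False]

    def arbitrate(self, request):
        # Fixed-priority arbitration: master 0 wins ties.
        self.grant = [request[0], request[1] and not request[0]]
        # Assumption 1: a grant is never issued without a matching request.
        for i, g in enumerate(self.grant):
            assert not g or request[i], f"grant[{i}] without request[{i}]"
        # Assumption 2: mutual exclusion between the two grants.
        assert not (self.grant[0] and self.grant[1]), "multiple grants"
        return self.grant
```

For example, `Arbiter().arbitrate([True, True])` returns `[True, False]`, and any future change that broke either assumption would fail immediately rather than propagating silently.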
The problem with corner cases is that they are, by definition, hard to hit. Queues and FIFOs have to be full, counters have to roll over, buses have to be busy, external events must occur simultaneously, etc. So can we make those conditions easier to hit? The external conditions (e.g., busy buses and simultaneous events) are controlled by the test bench, but internal conditions could be made easier to hit by reducing the size of a queue, FIFO, or counter. This needs to be considered up front (for example, through parameters), as changing the design size might otherwise break other aspects of the design and cause false bugs. Some argue that such "design mutations" are not valid for coverage closure but are happy to use them for soak testing.
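The effect of such a mutation can be shown with a minimal sketch (a Python model of a FIFO under random traffic, with hypothetical push/pop probabilities, rather than real RTL). Because depth is a parameter, shrinking it makes the "FIFO full" corner case reachable in far fewer cycles of the same random stimulus:

```python
import random

class Fifo:
    """Toy FIFO whose depth is a parameter, so it can be 'mutated' for test."""

    def __init__(self, depth):
        self.depth = depth
        self.data = []

    def push(self, item):
        if len(self.data) >= self.depth:
            return False           # full: the corner case we want to exercise
        self.data.append(item)
        return True

    def pop(self):
        return self.data.pop(0) if self.data else None

def cycles_to_first_full(depth, seed=0):
    """Drive random traffic (60% push / 40% pop) until the FIFO first fills.

    Returns the cycle number of the first rejected push, or None if the
    full condition was never hit within the cycle budget.
    """
    rng = random.Random(seed)      # seeded, so the stimulus is reproducible
    fifo = Fifo(depth)
    for cycle in range(1, 1_000_000):
        if rng.random() < 0.6:
            if not fifo.push(cycle):
                return cycle       # hit the full condition
        else:
            fifo.pop()
    return None
```

With the same seed, `cycles_to_first_full(4)` hits the full condition strictly earlier than `cycles_to_first_full(256)`, since the two runs behave identically until occupancy first reaches the smaller depth.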
Some other DfV techniques are more obvious but still not widely followed. Research consistently shows that the cheapest way to find bugs is through code reviews, yet uptake remains stubbornly low. This may be due to the image of engineers sitting in a room poring over code printouts. However, automatic lint checkers can easily check for compliance with coding rules and generate code-quality measures such as complexity metrics, leaving reviewers free to focus on higher-level considerations such as algorithmic or architectural implementation decisions.
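As a crude sketch of one such automated quality measure (real lint tools parse the source properly; this keyword count is only an illustration), a McCabe-style complexity estimate for a Verilog snippet can be approximated by counting its decision points:

```python
import re

# Decision points in (System)Verilog source: branching keywords plus the
# conditional operator. Word boundaries keep "if" from matching inside
# identifiers; "casex"/"casez" are matched as their own alternatives.
DECISION_RE = re.compile(r"\b(if|case|casex|casez|for|while)\b|\?")

def complexity_estimate(rtl_source):
    """Rough cyclomatic-complexity estimate: 1 + number of decision points.

    Only a sketch -- it ignores comments, strings, and generate constructs,
    all of which a real lint tool would handle via a proper parser.
    """
    return 1 + len(DECISION_RE.findall(rtl_source))
```

For example, `complexity_estimate("if (a) y = b ? c : d;")` returns 3 (one `if`, one `?`, plus the baseline of 1), while straight-line code such as `assign y = a;` scores 1. A lint flow could flag blocks above a threshold for mandatory human review.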
As SoCs have moved to multiple cores with complex, shared memory systems, the scope for architectural design errors has increased. However, there is a lack of languages for capturing architectures in a way that can be analysed for issues such as quality of service, starvation, livelock, and deadlock.
Trying to find such issues through RTL simulation can be virtually impossible. For example, a livelock may occur only when processes follow a particular repeated pattern of resource requests, which is hard to hit in a constrained random scenario. Formal verification might be able to detect it, provided the design is not too large and a full set of constraints can be written.
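Analysing an abstract architectural model directly can be far cheaper than hunting for such conditions in RTL. As a minimal sketch (the agent names and the wait-for-graph model are hypothetical, and real deadlock analysis must also account for protocol-level detail), a classic check is cycle detection in a wait-for graph that maps each agent to the agents it is waiting on:

```python
def find_deadlock_cycle(wait_for):
    """Return one cycle of mutually waiting agents, or None if acyclic.

    wait_for: dict mapping each agent to a list of agents it waits on.
    A cycle in this graph means the agents on it can never make progress:
    a potential deadlock in the architecture.
    """
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / done
    colour = {node: WHITE for node in wait_for}
    stack = []

    def dfs(node):
        colour[node] = GREY
        stack.append(node)
        for nxt in wait_for.get(node, []):
            state = colour.get(nxt, WHITE)
            if state == GREY:             # back edge: cycle found
                return stack[stack.index(nxt):] + [nxt]
            if state == WHITE:
                found = dfs(nxt)
                if found:
                    return found
        stack.pop()
        colour[node] = BLACK
        return None

    for node in list(wait_for):
        if colour[node] == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None
```

On a model such as `{"core0": ["bus"], "bus": ["mem"], "mem": ["core0"], "core1": ["bus"]}` this reports the `core0 -> bus -> mem -> core0` cycle in milliseconds, whereas stimulating the equivalent resource contention in RTL simulation could take enormous effort.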
So even though the separation of design and verification teams has improved design quality, has it introduced some inefficiency? And as design complexity increases, will verification teams be able to cope without better collaboration with designers? Techniques such as the assertion-driven design and design mutation described above are good examples of such collaboration, and improved code-review processes will catch bugs more easily and earlier, when they are cheaper to fix. However, we also need ways to capture architectural designs for improved analysis.
Bryan Dickman, director of design assurance at ARM, will consider Design for Verification in his talk "Can we train our designers to avoid bugs?" at Verification Futures in the UK (or by remote access) on Thursday, Feb. 5.
— Dr. Mike Bartley, founder and CEO of TVS, has a PhD in mathematics from Bristol University, an MSc in software engineering, an MBA from the Open University, and more than 20 years of experience in software testing and hardware verification. He has built and managed state-of-the-art test and verification teams in a number of companies, including STMicroelectronics, Infineon, and Elixent/Panasonic, which still use the methodologies he established. Since founding TVS, he has consulted on multiple verification projects for respected organizations and has had more than 20 articles published on the subject of verification and outsourcing. Contact him at firstname.lastname@example.org.