In Part 1 of this panel discussion on verification, the experts talked about the state of verification and the progress that has been made.
In Part 2 the subject broadened to the growing number of areas in which verification must be performed, including power and software. In this third part the discussion turns to how far formal methods can go in addressing the challenges facing SoC design.
Taking part in this discussion are: Paul Martin, senior manager of debug, trace, and performance modeling at ARM; Rajeev Ranjan, CTO, and Oz Levia, vice president of marketing and business development at Jasper Design Automation; Harry Foster, chief verification scientist at Mentor Graphics; and Viresh Paruthi, senior technical staff member at IBM.
EE Times: How do we tackle things like performance and power?
Rajeev Ranjan: If you talk about power, I think there are numerous problems at the hardware level. We talk about coverage closure being an issue today, even without power-aware design. If you have power-aware design, you're talking about a simulation coverage space covering a multitude of power domains. Last year a customer presented at our users group. He had a design with 20 clock-gating domains. They couldn't even think about using brute force methods to exhaust all of the corner cases. Using the formal approach, you can make the design power-aware and do exhaustive formal power-aware verification of the functionality. That's the only way I see the solution.
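The scale problem Ranjan describes is easy to quantify: with N independently gated power domains, the functional coverage space is multiplied by 2^N power-state combinations. A minimal Python sketch (the domain count comes from the customer example in the discussion; everything else is illustrative) shows why 20 domains already rules out brute-force enumeration:

```python
from itertools import product

NUM_DOMAINS = 20  # clock-gating domains from the customer example

# Each domain is independently on (True) or off (False), so the
# power-state space alone has 2**NUM_DOMAINS corner cases -- and
# each one multiplies the existing functional coverage space.
corner_cases = sum(1 for _ in product((False, True), repeat=NUM_DOMAINS))
print(corner_cases)  # 1048576, i.e. 2**20

# Even one second of simulation per power configuration -- ignoring
# the functional stimulus needed within each configuration -- costs
# roughly 12 days of wall-clock time.
print(f"{corner_cases / 86400:.1f} days")  # 12.1 days
```

A formal power-aware proof, by contrast, reasons about all of these configurations symbolically rather than one at a time, which is the point Ranjan is making.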
Harry Foster: I agree. When you can do formal you should do it, but going back to the OS aspect and the software aspect...
Paul Martin: There are two distinct issues. There is a system validation problem and an integration problem. At a lower level, those are basic integration tests, and you need to verify that the power logic is correct.
Ranjan: I agree. Not just that the power logic is correct but that in the process of introducing low-power implementations you have not corrupted the original functionality. So, functionality keeps coming back to you. You could do power optimizations and performance optimizations; you could make your fabric coherent because of performance; but you still need to verify the functionality.
Martin: That is still two distinct domains. There's an integration-level problem and a system-level problem, which is: How do we validate that we delivered on the system-level specification?
Oz Levia: Every problem like this in the past was solved by introducing two things: One, limiting the degrees of freedom; and the other, creating methodologies and techniques around the degrees of freedom that were left.
Foster: Or raising abstraction.
Levia: Right, if you look at it, a standard cell is not the most efficient way to design, but it reduces degrees of freedom and enables a methodology that supports three or five billion transistors on a chip. People are going to have to agree to give up some degrees of freedom and agree on methodologies -- be it formal verification, virtual platforms, executable specifications, the standardization to some degree on fabrics, or the standardization on programming techniques -- such that the people who write applications, who are making the communications fabrics, and who are verifying these things, are speaking the same language so that it can be verified.
Martin: Also ensuring that all of those pieces are reliable before you bring them into the system. It's about prevention rather than cure. A lot of faults are found in final system validation, where debugging them is very labor intensive. It's about making sure all the pieces are fault-free before they go into final assembly.
Foster: Yes, it's too costly. At a system level it's painful. If you can find something with formal at the low level...
Martin: Strategies for bug avoidance make sure designers developing individual blocks come in fault-free.
Viresh Paruthi: If you look at data from our projects, we have driven innovations and methodologies so that the lower-level blocks are very clean, and we are seeing more of our problems at the integration level. We need to continue those practices, start at the bottom, and make our way up. Formal is a great aid. Pretty much everyone uses it extensively these days. But we need to keep brainstorming on the next level of challenges, which is the system level, where we need to think about increasing the capacity of the virtual prototyping backbone, and we have to stress it in different ways.
Ranjan: Absolutely. We are extending the capacity of formal tools, how they are applied at the system level, and their ability to reason about functionality. Take, for example, security. Formal technology is getting to the point where you can apply it to ensure that no rogue agent can access or corrupt data.
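The security property Ranjan mentions can be thought of as an unreachability proof over the design's data-flow: show that no sequence of legal transfers lets an untrusted agent reach protected storage. A minimal sketch of the idea, assuming a toy fabric graph (the agent names, firewall edge, and graph abstraction are all hypothetical; real tools prove this over the RTL state space, not a hand-drawn graph):

```python
from collections import deque

# Hypothetical data-flow graph of an SoC fabric: edges are the
# transfers still legal after access-control logic is applied.
fabric = {
    "rogue_dma": ["peripheral_bus"],
    "cpu_secure": ["peripheral_bus", "secure_ram"],
    "peripheral_bus": ["uart", "timer"],  # firewall blocks secure_ram
    "uart": [], "timer": [], "secure_ram": [],
}

def can_reach(graph, src, dst):
    """Breadth-first search: is dst reachable from src?"""
    seen, work = {src}, deque([src])
    while work:
        node = work.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    return False

# The "no rogue access" property is an exhaustive unreachability
# claim -- the kind of statement formal tools can prove outright.
assert not can_reach(fabric, "rogue_dma", "secure_ram")
assert can_reach(fabric, "cpu_secure", "secure_ram")
```

Simulation can only sample paths through such a space; a formal proof covers every path at once, which is why this class of property suits formal methods.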
Paruthi: Absolutely, it's abstraction. Extending the approach to protocol verification, or security path verification...
Levia: It's not unimaginable that in the very near future, or the medium term, it will be possible to apply formal to software as it applies to SoCs. Software is just a bunch of steps; it has state space; it is more general, more complex. Not to be too self-serving, but Jasper has taken the static formal methodology up from the block level, into the subsystem level, and into the system level. We now do full chip connectivity and functional verification.
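The full-chip connectivity verification Levia mentions reduces, in essence, to checking a specification of intended point-to-point connections against what the design actually implements. A toy sketch of that reduction (the signal names and tables are invented; real tools prove each connection as a formal property on the RTL rather than comparing lookup tables):

```python
# Intended connections: source pin -> destination pin.
spec = {
    "soc.uart0.irq": "soc.gic.spi[4]",
    "soc.timer.irq": "soc.gic.spi[5]",
}

# Connections extracted from the implementation.
netlist = {
    "soc.uart0.irq": "soc.gic.spi[4]",
    "soc.timer.irq": "soc.gic.spi[6]",  # mis-wired interrupt
}

# Report every pin whose actual destination differs from the spec.
mismatches = {
    pin: (intended, netlist.get(pin))
    for pin, intended in spec.items()
    if netlist.get(pin) != intended
}
print(mismatches)
# {'soc.timer.irq': ('soc.gic.spi[5]', 'soc.gic.spi[6]')}
```

The value of doing this formally is that each connection check holds under all operating modes and multiplexer settings, not just the ones a simulation happened to exercise.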
* * *
Readers: How much of the verification task do you think can be attacked with static formal methods?