In this column, Bernard Murphy starts with the "what" and "why" of verification and only then considers the "how."
I have been wrestling with a way to categorize the various types of SoC verification -- not by the underlying tool or methodology used (there is no shortage of web hits on those topics), but rather by the underlying objectives.
I thought perhaps the title of this blog was so obvious it must come up on a search, but nada. What I'm aiming at is to start with the "what" and "why" and only then consider the "how." So here's my take. I'm looking forward to an education from other members of the EETimes community with strong opinions.
What: Comprehensively verifying the functionality of the IP under all possible circumstances (or at least getting decent coverage).
Why: So you don't need to test for correct IP behavior at the SoC/integration level (at least in principle).
How: Constrained-random simulations complemented by static and formal analysis.
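As a toy illustration of the constrained-random idea (in Python rather than a real testbench language), the sketch below randomizes a made-up bus transaction under simple constraints and tracks which functional-coverage bins have been hit. Every name, field, and constraint here is invented for illustration, not taken from any real testbench.

```python
import random

class BusTxn:
    """Hypothetical bus transaction, randomized under simple constraints."""
    def __init__(self, rng):
        self.kind = rng.choice(["READ", "WRITE"])
        # Constraint: addresses stay inside a 4 KB peripheral window, word-aligned.
        self.addr = rng.randrange(0x1000, 0x2000, 4)
        # Constraint: burst length is 1, 4, 8, or 16 beats only.
        self.burst = rng.choice([1, 4, 8, 16])

def collect_coverage(num_txns, seed=1):
    """Generate transactions and record which (kind, burst) bins were hit."""
    rng = random.Random(seed)
    bins = set()
    for _ in range(num_txns):
        t = BusTxn(rng)
        bins.add((t.kind, t.burst))
    return bins  # 8 possible bins: 2 kinds x 4 burst lengths
```

The point of the coverage set is the same as in a real flow: random stimulus is only defensible if you can show which corners it actually reached.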
What: Quick tests to check the basic sanity of the design and perhaps some components of the testbench (e.g., clock frequencies).
Why: The big verification tasks are time- and resource-consuming. You don't want to launch these only to find (hours later) that you wasted all that time because a clock wasn't hooked up properly.
How: Linting, custom scripts, and specialized simulations.
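The flavor of such a pre-regression sanity script can be sketched as follows. The netlist representation and the "every clock pin must be driven" rule are illustrative assumptions, not a real tool's API.

```python
def find_undriven_clocks(netlist):
    """Return instances whose 'clk' pin is tied to nothing (None or '')."""
    return [inst for inst, pins in netlist.items() if not pins.get("clk")]

# Toy netlist: instance name -> pin connections (invented for illustration).
netlist = {
    "u_cpu":  {"clk": "sys_clk", "rst_n": "sys_rst_n"},
    "u_dma":  {"clk": "sys_clk", "rst_n": "sys_rst_n"},
    "u_uart": {"clk": None,      "rst_n": "sys_rst_n"},  # forgotten hookup
}
```

A check this cheap, run before every regression launch, is exactly the kind of guard the "Why" above argues for.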
What: Verifying correct interoperation of a related group of components (e.g., the CPU, a peripheral, a DMA controller, and on-chip or off-chip memory), all communicating through the bus fabric. Everything else is stubbed out.
Why: You can't possibly verify every conceivable behavior of the whole design, so you divide and conquer. A minimum requirement is that each group of related components like this, taken one group at a time, must operate correctly in normal functional mode (no clock gating, security modes, etc.) when no other traffic is active.
How: Directed simulations.
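A directed test is a single scripted scenario with explicit checks, in contrast to the random stimulus above. A minimal sketch, with the DMA reduced to a toy Python model and all names invented:

```python
def dma_copy(mem, src, dst, length):
    """Toy DMA model: copy `length` words from src to dst."""
    mem[dst:dst + length] = mem[src:src + length]

def directed_dma_test():
    """Directed scenario: CPU programs a DMA copy, then checks memory."""
    mem = list(range(16)) + [0] * 16   # source data, then an empty region
    dma_copy(mem, src=0, dst=16, length=16)
    # Directed check: destination must match source exactly.
    return mem[16:32] == mem[0:16]
```

The scenario and the pass/fail criterion are both fixed up front; that determinism is what makes directed tests good at pinning down one group-level behavior at a time.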
What: Verifying that the SoC has been hooked up the way it was supposed to be hooked up.
Why: Again, a divide-and-conquer strategy. Aim for some level of completeness in this checking, especially around rarely active signals and a potentially huge number of control states, both of which are difficult to cover in simulation. First, are all connections correct, and do configuration controls for these connections function correctly? Second, does integration logic (e.g., clock gating control, control and status registers) function correctly, independent of underlying IP functionality?
How: Manual review, static and formal checking, some directed simulation to check a few control state options working with functional behavior.
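The connectivity-checking part can be pictured as a set comparison between intended and actual connections. A hedged sketch, where both tables are invented for illustration (a real flow would extract the second from the netlist):

```python
def check_connectivity(spec, extracted):
    """Return (missing, unexpected) connections relative to the intent spec."""
    missing = sorted(set(spec) - set(extracted))
    unexpected = sorted(set(extracted) - set(spec))
    return missing, unexpected

# Intent: (source pin, destination pin) pairs the integrator meant to wire up.
intended = {("u_dma.irq", "u_intc.in3"), ("u_uart.irq", "u_intc.in4")}
# What the design actually contains -- note the mis-wired UART interrupt.
actual = {("u_dma.irq", "u_intc.in3"), ("u_uart.irq", "u_intc.in5")}
```

Formal connectivity tools do essentially this, plus proving the connection holds under the right configuration-register settings.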
What: Verifying implementation concerns: synchronization (clock-domain crossing) correctness, consistency of the power intent (UPF) with the RTL, DFT hookup, and multi-cycle path constraint correctness.
Why: None of these are well covered by simulation approaches and normally require specialized applications to check. This may surprise you for DFT, but test groups frequently complement simulation with specialized checks, simply because size and complexity (think about BIST algorithms) defeat any attempt at reasonable simulation coverage.
How: Combination of static and formal checking.
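One such specialized check, in spirit: statically flagging clock-domain crossings that lack a synchronizer. The data structures below are assumptions standing in for a real design database.

```python
def unsynchronized_crossings(signals, synchronized):
    """Each signal maps to (source_domain, dest_domain); a crossing is any
    signal whose domains differ. Report crossings not in the synchronized set."""
    return sorted(
        name for name, (src, dst) in signals.items()
        if src != dst and name not in synchronized
    )

# Toy design database (invented): signal -> (source clock, destination clock).
signals = {
    "req":  ("clk_a", "clk_b"),   # crossing, has a synchronizer
    "ack":  ("clk_b", "clk_a"),   # crossing, missing a synchronizer
    "data": ("clk_a", "clk_a"),   # same domain, not a crossing
}
has_sync = {"req"}
```

This is a structural check, not a simulation: it covers every crossing in the design regardless of whether stimulus ever toggles it, which is exactly why such concerns are handed to static tools.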
What: Verifying software running on the full SoC.
Why: By dividing up verification into complementary approaches, you have defensible justification for having the best coverage you could achieve, but it's still very low absolute coverage. You still have to prove that representative software running on the platform will work as expected.
How: A combination of emulation, FPGA prototyping, and TLM modeling, running not the final application software but verification-directed software tests.
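A verification-directed software test is typically a short, self-checking routine rather than application code. As a hedged sketch, here is a walking-ones register test run against a trivial stand-in for a TLM model (the RegisterModel class is hypothetical):

```python
class RegisterModel:
    """Trivial stand-in for a TLM model of a memory-mapped register block."""
    def __init__(self):
        self._regs = {}
    def write(self, addr, value):
        self._regs[addr] = value & 0xFFFFFFFF
    def read(self, addr):
        return self._regs.get(addr, 0)

def walking_ones_test(model, addr):
    """Write each single-bit pattern and read it back; True if all match."""
    for bit in range(32):
        pattern = 1 << bit
        model.write(addr, pattern)
        if model.read(addr) != pattern:
            return False
    return True
```

On an emulator or FPGA prototype, the same test would run as bare-metal code on the real CPU; the structure (stimulus, readback, self-check) is what distinguishes it from application software.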
So -- over to you -- what did I miss (or misrepresent)?