This is the second of a two-part article. You can find Part 1 here.
In part 1 of this article, we learned how today's SoC designs place specific requirements on a scalable coverage-driven verification solution. In this second and final installment, we'll take a deeper look at how these powerful verification solutions can be constructed, and what other issues you may encounter while designing such a solution. Let's begin with some architectural topics that must be kept in mind when getting started.
Multi-engine verification approach
As we mentioned in part 1, when performing SoC verification you need to ensure that coverage results and verification plans from block-level verification can be used all the way up to complete SoC verification. Different verification engines are typically used at different levels of the design: formal techniques at block level, simulation-based techniques at cluster level, and hardware-assisted verification at the full SoC level. This implies that a strong verification methodology must be able to seamlessly take coverage data from all of these engines and allow users to analyze it as a whole. Drawing from multiple sources requires that coverage data from the various engines be interoperable, and that the engines themselves produce the data with consistent semantics. A well-planned methodology must also allow these engines to interact with the coverage data through the same API.
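To make this concrete, here is a minimal Python sketch of what a shared, engine-neutral recording interface might look like. The class and method names (CoverageStore, record_bin_hit, and the engine adapters) are hypothetical and not taken from any particular tool or standard.

```python
# Hypothetical sketch: one recording API shared by formal, simulation,
# and hardware-assisted engines. All names here are illustrative.

class CoverageStore:
    """Engine-neutral store; every engine writes through the same calls."""

    def __init__(self):
        self.hits = {}  # (scope_path, bin_name) -> hit count

    def record_bin_hit(self, scope_path, bin_name, count=1):
        key = (scope_path, bin_name)
        self.hits[key] = self.hits.get(key, 0) + count


class SimulationAdapter:
    """Adapter a simulator could use to forward its functional coverage."""
    def __init__(self, store):
        self.store = store

    def on_covergroup_sample(self, scope_path, bin_name):
        self.store.record_bin_hit(scope_path, bin_name)


class FormalAdapter:
    """Adapter for a formal engine: a proven cover property counts as covered."""
    def __init__(self, store):
        self.store = store

    def on_cover_property_proven(self, scope_path, property_name):
        self.store.record_bin_hit(scope_path, property_name)


# Two different engines contribute to the same store through the same API.
store = CoverageStore()
SimulationAdapter(store).on_covergroup_sample("soc.cpu0.lsu", "burst_write")
FormalAdapter(store).on_cover_property_proven("soc.cpu0.lsu", "fifo_full_reached")
```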
Since today's SoCs are usually built from many types of IP coming from different sources and written in different languages, the coverage capabilities must handle this multi-language world. This means that the coverage data must be reduced to a canonical, language-neutral form so that it can be seamlessly merged and analyzed across language boundaries. It also means that enough hooks should be kept in the coverage data so that language fidelity is retained when the data is presented back to the user (i.e., the user should be able to see the data in the appropriate language, depending on the IP from which it originated).
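As an illustration of such a canonical form, the sketch below shows one way a language-neutral coverage record could carry hooks back to its source language. The field names are assumptions made for this example, not a real schema.

```python
# Hypothetical sketch of a canonical, language-neutral coverage record.
# The field names are illustrative, not taken from any standard schema.

from dataclasses import dataclass

@dataclass
class CoverItem:
    scope_path: str        # canonical instance path, e.g. "soc.usb.ctrl"
    kind: str              # canonical type: "bin", "toggle", "assertion", ...
    name: str              # canonical item name used for merging
    hits: int              # accumulated hit count
    source_language: str   # fidelity hook: "SystemVerilog", "e", "VHDL", ...
    source_name: str       # the item's name as written in the original source

# The same functional point recorded from two different languages still
# merges on (scope_path, kind, name), while the source_* fields let a
# viewer display it back in SystemVerilog or e terms as appropriate.
sv_item = CoverItem("soc.usb.ctrl", "bin", "pkt_len_max", 3,
                    "SystemVerilog", "cg_pkt::cp_len::bin_max")
e_item  = CoverItem("soc.usb.ctrl", "bin", "pkt_len_max", 1,
                    "e", "pkt_cover.len==MAX")
```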
Merging of coverage results is also a very important analysis feature of a good coverage-based approach. Language agnosticism places even more stringent requirements on merging of data. Not only should the database be able to handle merging across various coverage abstraction types, but it should also be able to handle merging of coverage data across language boundaries. The canonical form of coverage in the database becomes a key feature in supporting this requirement.
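A minimal sketch of such a merge, assuming the canonical keyed form from the previous example, might look like this; the merge_coverage function and its union-and-sum semantics are illustrative only.

```python
# Hypothetical sketch: merging two coverage databases on their canonical keys.
# A database here is just a dict {(scope_path, kind, name): hit_count}.

def merge_coverage(db_a, db_b):
    """Union of all items; hit counts for matching canonical keys are summed."""
    merged = dict(db_a)
    for key, hits in db_b.items():
        merged[key] = merged.get(key, 0) + hits
    return merged

block_level = {("soc.ddr.phy", "bin", "cas_latency_3"): 12,
               ("soc.ddr.phy", "assertion", "no_overlap"): 1}
soc_level   = {("soc.ddr.phy", "bin", "cas_latency_3"): 2,
               ("soc.ddr.phy", "bin", "cas_latency_5"): 7}

# Items unique to either run survive; shared items accumulate their counts.
print(merge_coverage(block_level, soc_level))
```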
Scalable analysis capabilities
In SoCs the volume of coverage data scales very rapidly, which places specific requirements on the coverage database and analysis capabilities. The coverage database needs to scale in both memory and performance as the data size increases; storing coverage data for millions of regression runs should be possible. At the same time, analysis capabilities such as merging, crossing, and ignoring specific combinations should all remain efficient at this data size.
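For example, crossing coverpoints while excluding ignored combinations can be expressed as a simple set operation over the canonical data. The sketch below is purely illustrative; the cross_coverage helper and its bins are invented for this example.

```python
# Hypothetical sketch: crossing two coverpoints and excluding ignored bins.
from itertools import product

def cross_coverage(bins_a, bins_b, hits, ignore=()):
    """Score a cross of two coverpoints.

    bins_a / bins_b: the bins defined for each coverpoint
    hits:            set of (bin_a, bin_b) pairs actually observed
    ignore:          combinations that should not count toward the goal
    """
    legal = [c for c in product(bins_a, bins_b) if c not in set(ignore)]
    covered = [c for c in legal if c in hits]
    return len(covered), len(legal)

burst   = ["single", "incr4", "incr8"]
size    = ["byte", "half", "word"]
seen    = {("incr4", "word"), ("single", "byte")}
ignored = [("incr8", "byte")]  # a combination the design never issues

covered, total = cross_coverage(burst, size, seen, ignored)
print(f"cross coverage: {covered}/{total}")   # 2/8
```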
Structured databases should be used for the underlying architecture to service all these requirements. Typical SoCs run very deep in hierarchy, and block-to-system-level verification means that coverage data is gathered across hierarchy boundaries, even when it is gathered using different verification environments. Consequently, the database and analysis engine must support merging and scope resolution across hierarchy boundaries. This approach is best served if the solution has a central analysis engine that can be used across hierarchies, languages, and the various forms of coverage.
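One way to picture scope resolution across hierarchy boundaries is to remap a block-level environment's root scope onto its instance path inside the SoC before merging. The remap_scopes helper and the paths below are hypothetical.

```python
# Hypothetical sketch: resolving a block-level scope onto its SoC instance
# path before merging, so block and chip runs land in the same hierarchy.

def remap_scopes(block_db, block_root, soc_instance_path):
    """Rewrite keys rooted at block_root so they sit under soc_instance_path."""
    remapped = {}
    for (scope, kind, name), hits in block_db.items():
        if scope == block_root or scope.startswith(block_root + "."):
            scope = soc_instance_path + scope[len(block_root):]
        remapped[(scope, kind, name)] = hits
    return remapped

# Coverage collected in a standalone DDR-controller testbench ...
block_db = {("ddr_ctrl.sched", "bin", "bank_conflict"): 4}
# ... is remapped to the controller's instance inside the SoC before merging.
print(remap_scopes(block_db, "ddr_ctrl", "soc.mem_ss.ddr_ctrl0"))
# {('soc.mem_ss.ddr_ctrl0.sched', 'bin', 'bank_conflict'): 4}
```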
As an IP travels across group and even company boundaries, its verification environment often travels with the design IP. This can result in the complete SoC being verified using multiple verification engines from different vendors, which makes it essential that the coverage data be interoperable across vendors. This situation implies that:
- The semantics of the coverage data and coverage metrics are standard and consistently understood.
- The coverage data is accessed through a well-defined API, which also helps multiple engines access the coverage data consistently (a sketch of such a read API follows this list).
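As a sketch of what such a well-defined read API might look like, the example below puts a common reader interface in front of two imagined vendor-specific storage formats, so that analysis code only ever sees canonical items. Every class name and file format here is an assumption made for illustration.

```python
# Hypothetical sketch: one read API in front of vendor-specific databases.
# The reader classes and file formats here are invented for illustration.

import json

class CoverageReader:
    """Common read interface; each vendor backend returns canonical items."""
    def read_items(self):
        raise NotImplementedError


class VendorAJsonReader(CoverageReader):
    """Imagined backend for a vendor that stores coverage as JSON records."""
    def __init__(self, path):
        self.path = path

    def read_items(self):
        with open(self.path) as f:
            for rec in json.load(f):
                yield (rec["scope"], rec["kind"], rec["name"]), rec["hits"]


class VendorBCsvReader(CoverageReader):
    """Imagined backend for a vendor that stores coverage as CSV lines."""
    def __init__(self, path):
        self.path = path

    def read_items(self):
        with open(self.path) as f:
            for line in f:
                scope, kind, name, hits = line.strip().split(",")
                yield (scope, kind, name), int(hits)


def load_all(readers):
    """Analysis code sees only canonical items, regardless of the vendor."""
    db = {}
    for reader in readers:
        for key, hits in reader.read_items():
            db[key] = db.get(key, 0) + hits
    return db
```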
Runtime analysis and metric measurement
Most SoC regression runs are time consuming, and a complete SoC regression can take many days. As a result, a good verification methodology featuring coverage should be able to score all metrics of the coverage solution -- coverage measures, progress against the verification plan, failure analysis data, etc. -- in real time. This allows users to perform their analysis without having to wait for the regression runs to finish. The database should also have the appropriate locking and conflict-resolution capabilities to enable such real-time access.
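A minimal sketch of that idea, using a simple in-process lock, is shown below; a real coverage database would use far more sophisticated locking and conflict resolution, and the LiveCoverageStore class is invented for this example.

```python
# Hypothetical sketch: regressions stream coverage increments into a shared
# store while analysis reads a live score; a lock arbitrates the access.

import threading

class LiveCoverageStore:
    def __init__(self, total_bins):
        self._lock = threading.Lock()
        self._hits = set()
        self._total = total_bins

    def record(self, bin_id):
        """Called by still-running regression jobs as bins are hit."""
        with self._lock:
            self._hits.add(bin_id)

    def score(self):
        """Called by the analysis view at any time, mid-regression."""
        with self._lock:
            return len(self._hits) / self._total

store = LiveCoverageStore(total_bins=1000)
store.record("soc.pcie.lane0::ts1_seen")
print(f"coverage so far: {store.score():.1%}")   # usable before runs finish
```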
Based on the above requirements, Figure 1 illustrates one possible high-level architecture of a well-planned coverage-driven solution. As mentioned above, many of these capabilities require a scalable coverage database with well-defined read and write APIs.
Figure 1. A possible architecture for a multi-language, multi-site verification system.
Now that we have talked about the requirements of a good scalable coverage-driven approach and a possible architecture that can meet them, let us look at the problem from a methodology viewpoint. Once the solution is implemented, it still has to be augmented with a good coverage-driven methodology, since design and verification requirements evolve faster than the tools can keep up. Let's take a quick look at some of the questions a good scalable coverage-based methodology will answer:
- Prioritize your requirements — An SoC verification project will always raise many requirements, so it is critical to prioritize them. The user should clearly define which coverage points and goals matter most, and at which stages of the design cycle failure analysis takes priority. This enables the user to prioritize the collection and analysis of the huge amount of coverage data a typical SoC verification task will generate.
- Have a clear idea of when verification is complete — Completion metrics should be defined clearly, whether in terms of coverage goals, failure rates, verification plan completion, etc. The main point is that the verification completion criteria should be very clearly defined (a minimal sketch of such a check follows this list).
- How do you know if you have enough coverage? — This is always a tricky question, and the only reasonable answer comes from a structured verification plan. Coverage goals should be defined from a verification plan, and completeness of coverage should be defined in terms of whether the verification plan goals are reasonably met or not.
- Ultimate success — The eventual goal of this type of solution is to enable the users to do more verification efficiently. Hence, if the user is migrating to this methodology for the first time, success criteria should also be defined clearly. These could be in terms of number of bugs found per unit time, the total time to reach verification closure, number of resources invested, etc. This will enable users to determine the value and incorporate it further into their verification flows.
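Following up on the completion-metrics point above, here is a minimal sketch of how closure criteria could be encoded as data and checked automatically; the thresholds and field names are assumptions for this example only.

```python
# Hypothetical sketch: making verification-closure criteria explicit so they
# can be checked automatically after (or during) every regression.

def verification_complete(metrics, goals):
    """All criteria must hold; returns (done, list of unmet criteria)."""
    unmet = []
    if metrics["functional_coverage"] < goals["functional_coverage"]:
        unmet.append("functional coverage below goal")
    if metrics["plan_sections_closed"] < goals["plan_sections_closed"]:
        unmet.append("verification plan not fully closed")
    if metrics["failure_rate"] > goals["max_failure_rate"]:
        unmet.append("failure rate still above threshold")
    return (not unmet), unmet

goals   = {"functional_coverage": 0.95, "plan_sections_closed": 1.0,
           "max_failure_rate": 0.01}
metrics = {"functional_coverage": 0.91, "plan_sections_closed": 0.88,
           "failure_rate": 0.03}

done, reasons = verification_complete(metrics, goals)
print(done, reasons)   # False, with the criteria that still block closure
```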
Because of its complexity and overall management burden, SoC verification places considerable requirements on any solution or methodology, and the time is right to investigate newer automated solutions that scale. Today's SoC verification needs can no longer be met with traditional directed verification techniques, which offer limited verification planning and fail to provide usable verification metrics. By properly architecting and implementing a scalable coverage-driven solution, we can overcome many of these multi-site and multi-project issues and take on the next generation of SoC development by leveraging good project planning and valuable metrics.
About the Author:
Apurva Kalia is Vice President of R&D for Incisive Simulation Products at Cadence. He has served on many standards and industry bodies. He works out of the Cadence office in Noida, India.