Commercially available HVLs such as VERA, Specman and SystemVerilog have focused on capturing verification intent as constraints for constrained-random generators. In this approach, the constraint solver randomly sets stimulus values for each input port such that the constraints are satisfied. The challenge with this approach is that a significant amount of control code must be written to create the right set of constraints for each scenario in order to generate test cases of interest. The verification engineer is responsible for ensuring that all cases of interest have been covered, and there is no effective way to ensure that the cross-product of the required verification intent has been captured. In this example, an HBA is examined that uses the TOE, iWARP, and iSCSI protocols to implement a 1G/10G HBA.
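The constrained-random flow described above can be illustrated with a minimal Python sketch. The port names, ranges, and constraints below are hypothetical and purely for illustration; real HVL solvers use far more sophisticated constraint propagation than the simple rejection sampling shown here.

```python
import random

def constrained_random(constraints, ranges, max_tries=10000):
    """Draw a random value for each port, retrying until every
    user-supplied constraint holds (naive rejection sampling)."""
    for _ in range(max_tries):
        stimulus = {port: random.randint(lo, hi)
                    for port, (lo, hi) in ranges.items()}
        if all(c(stimulus) for c in constraints):
            return stimulus
    raise RuntimeError("no assignment satisfied the constraints")

# Hypothetical input ports and constraints for illustration.
ranges = {"pkt_len": (0, 1500), "burst": (1, 8)}
constraints = [
    lambda s: s["pkt_len"] % 4 == 0,                            # word-aligned
    lambda s: s["burst"] * 64 <= s["pkt_len"] or s["pkt_len"] == 0,
]
stim = constrained_random(constraints, ranges)
```

The control code the article refers to is everything beyond this core loop: per-scenario constraint sets, weighting, and sequencing, which is where the bulk of the 190,000 lines went.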
The design contained 15 million gates of unique logic (not including memory), 9 CPUs (Tensilica), a PCI Express interface (host side), a memory controller and Ethernet MACs (wire side).
1. Traditional HVL Environment for Verifying HBA.
In the verification of this design, a traditional HVL environment was used (see Figure 1) and it included third party Design and Verification IP (BFMs for PCIE and 1G/10G MAC) and a DDR Memory model.
This basic testbench was constructed quickly (a couple of weeks), but to complete the verification environment, layers of control code were needed to direct the generation, checking and coverage details for the design (Figure 2). This effort took six calendar months (two man-years) and produced 190,000 lines of code. Only then was the verification team of twelve engineers able to begin the verification process.
To provide a more comprehensive verification effort on this design, the team eventually wrote another 165,000 lines of directed-test code to hit deep state cases as well as corner and error cases that were missed by the constrained-random generator in the verification environment. In all, about 350,000 lines of code were written by the internal verification team to verify this design.
2. An Example of a Layered Testbench.
Breker Verification started creating the Coverage Model Graphs of the design using Trek at the same time as the traditional HVL team. Within two weeks, one Breker engineer was able to incrementally construct a portion of the Coverage Model Graph and began finding bugs in the design. In all, about 17,000 lines of graph code were incrementally written over five months to verify the design to the same level as the HVL process.
Constructing the Coverage Model Graph involved understanding the design, but was very straightforward to implement. The design functionality was recursively decomposed into lower-level functions until each sub-function could be written in graph form (see Appendix). Once a portion of the graph had been created, Trek would compile and read in the graph and traverse the sub-functions to create scenarios that were submitted to the testbench (see Figure 3).
Since all information related to the functionality of the design was included in the graph, Trek was able to define expected outputs or results for each scenario generated. After simulation, the results were fed back into Trek and were compared against the expected outcomes. The coverage information was annotated onto a visual "Hot-Spot" rendering of the graph. Since Trek generates self-checking scenarios, the need to architect checkers and scoreboards for the design was eliminated. This significantly reduced the effort needed to create the testbench environment.
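The self-checking flow described above can be sketched as follows. The `simulate` function and the scenario tuples are hypothetical stand-ins: in the real flow the expected result comes from the graph and the actual result comes back from simulation, whereas here a toy model plays both roles.

```python
def check_scenarios(scenarios, simulate):
    """Run each (name, stimulus, expected) scenario and collect
    mismatches between expected and actual results."""
    failures = []
    for name, stimulus, expected in scenarios:
        actual = simulate(stimulus)
        if actual != expected:
            failures.append((name, expected, actual))
    return failures

# Toy "design" that doubles its input; expectations derived from the model.
scenarios = [("s0", 3, 6), ("s1", 10, 20)]
failures = check_scenarios(scenarios, lambda x: 2 * x)
```

Because every scenario carries its own expected outcome, no separate scoreboard or checker architecture is needed, which is the reduction in testbench effort the article describes.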
3. Architectural View of the Trek Environment.
The value of Trek is that it can generate deep state scenarios from the Coverage Model and guarantee coverage closure. Trek connects to the low-level testbench at the BFM level or at the wire level using a C-API. When the BFM or the simulator requires a new scenario, a call is made to Trek in much the same way a directed test case is submitted to the testbench. In this example, Trek with one engineer found over half of all the bugs in this design, while the verification team of twelve engineers found the remainder.
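The scenario-on-demand handshake described above can be sketched in a few lines. This is a hypothetical Python analogue of the C-API pattern, not Trek's actual interface: the testbench pulls the next pre-generated scenario whenever the BFM is ready for one.

```python
class ScenarioServer:
    """Hands out generated scenarios one at a time, the way a
    directed test would be submitted to the testbench.
    Hypothetical sketch of the C-API handshake, in Python for brevity."""

    def __init__(self, scenarios):
        self._it = iter(scenarios)

    def next_scenario(self):
        # None signals that no more scenarios remain.
        return next(self._it, None)

server = ScenarioServer(["scn_a", "scn_b"])
first = server.next_scenario()
```

The testbench side simply loops, requesting a scenario, applying it through the BFM, and stopping when the server is exhausted.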
Appendix: IP Header Format
To generate an IP packet, the format can be decomposed into the following fields.
Example IP Packet:
7E FF 03 00 21 45 00 00 CB 04 63 00 35 DA 00 01 01 00 00 01 0B 81 7E
A graph of this packet information (see Figure 4) is constructed using three types of nodes: Sequence nodes (blue boxes), which execute the code contained within the node and then execute all of their children in order (top to bottom); Select nodes (purple diamonds), which execute the code contained within the node and then execute exactly one of their children; and Leaf nodes (green ovals), which execute only the code contained within the node. Below is a graph of the IP Packet.
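The three node types above can be captured in a minimal Python sketch. The node class, traversal function, and the tiny two-node packet fragment are hypothetical illustrations of the Sequence/Select/Leaf semantics, not Breker's graph language.

```python
import random

class Node:
    """A graph node: kind is 'sequence', 'select', or 'leaf'."""
    def __init__(self, kind, action=None, children=()):
        self.kind, self.action, self.children = kind, action, children

def traverse(node, out):
    """Walk the graph, appending each executed node's action to `out`.
    Sequence: run all children in order; Select: run one child;
    Leaf: run only the node's own action."""
    if node.action:
        out.append(node.action)
    if node.kind == "sequence":
        for child in node.children:
            traverse(child, out)
    elif node.kind == "select":
        traverse(random.choice(node.children), out)
    return out

# Hypothetical fragment of an IP-packet graph.
addr = Node("select", "addr", [Node("leaf", "small"), Node("leaf", "large")])
pkt = Node("sequence", "ip_pkt", [addr, Node("leaf", "payload")])
trace = traverse(pkt, [])
```

Each full traversal from the root yields one concrete scenario; the Select nodes are where the generator's choices, and hence the coverage cross-product, live.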
4. Graph of the Information Packet.
If IP_PKT_Addr is examined, we see that the address is two bytes (FF 03 in the example). The graph defines addresses that range from small (00 00 to 77 00) to mid-range (77 01 to AA 00) to large (AA 01 to FF FF), so scenarios can be generated for IP Packets with widely varying addresses from a small graph of possibilities.
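The range-then-value selection for IP_PKT_Addr can be sketched as a two-step draw, using the three ranges given above. The dictionary layout and function name are illustrative only.

```python
import random

# Two-byte address ranges from the graph (inclusive bounds).
ADDR_RANGES = {
    "small": (0x0000, 0x7700),
    "mid":   (0x7701, 0xAA00),
    "large": (0xAA01, 0xFFFF),
}

def pick_addr():
    """Select one range node, then draw a concrete
    two-byte address from within that range."""
    bucket = random.choice(list(ADDR_RANGES))
    lo, hi = ADDR_RANGES[bucket]
    addr = random.randint(lo, hi)
    return bucket, addr.to_bytes(2, "big")

bucket, addr = pick_addr()
```

This mirrors how a Select node compactly encodes an entire family of concrete stimuli: three range children cover all 65,536 possible addresses.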
About the Author:
Adnan Hamid is the Chief Executive Officer & Founder of Breker. Prior to Breker, he managed AMD's System Logic Division. He received BS degrees in Electrical Engineering and Computer Science from Princeton University, and an MBA from the University of Texas at Austin.