If you are reading this, then you are obviously interested in reusing verification code. Well, who wouldn't be interested in saving time, increasing productivity and hitting aggressive time-to-market targets? However, effective reuse does not come easily.
Writing reusable verification code takes extra effort and requires engineers to follow some basic guidelines. In many respects, this is no different than writing reusable software modules or maintainable RTL hardware modules. Even with these more established languages, efficient reuse can be somewhat elusive.
It should be noted that Verisity Ltd. has created a process called the e Reuse Methodology (eRM) to help standardize reuse techniques for commercial e verification components (eVCs). While also applicable to other e code, this process represents a major step that formalizes everything from directory structure, filenames and documentation, to using new Specman features in order to facilitate improved component coexistence and cooperation. While successful verification groups will certainly use this process in the future, this article can help groups who have legacy projects and code or are temporarily tied to existing tool versions to get started reusing and creating reusable code today.
Please note that some of the views expressed in this article are philosophical in nature. They are not necessarily meant to be followed literally in every case; rather, the intent is to encourage some thoughtful, upfront consideration of reuse. Increasing productivity will take a commitment from top management and project leadership, along with a willingness from individual designers and verification engineers not only to provide reusable code but also to accept, trust and utilize reusable code from other sources.
1.0 Verification Reuse Introduction
This discussion assumes that everyone can agree that reuse is good and noble behavior. In fact, most engineers utilize some sort of reuse all the time without even realizing it. Everything from reusing particular code fragments, to entire algorithms and procedures or processes, to using existing document formats as a template for new documents are all examples of reuse. Even the simple low-tech example of reaching into your recycle bin for a piece of scrap paper when trying to diagram a new idea is a good example of reuse. These examples show that reuse is a basic engineering tendency that saves time and increases productivity every day.
When specifically applying reuse concepts to Specman e verification code, it is helpful to consider the different types of possible reuse. Each of these reuse types has slightly different issues, or assigns different priorities to the same issue. This means that any particular reuse technique or concept may have a different importance depending on how you anticipate the code will be reused. Since future reuse may not always be entirely predictable, it is recommended to incorporate as many of the subsequently listed reuse techniques as possible.
Verification code reuse migration can take many paths. Some examples (in order of increased level of integration complexity) are:
- From basic module level to chip system level within the same project.
- From 1st generation project to 2nd generation project with modified functionality.
- From project X to project Y, then to project Z, each with possible individual variations as required.
Within these migration paths there may be different reuse strategies. Some examples of these strategies, in order of sophistication:
- Use original code as a template only, and then create a new independent copy of files.
- Modify existing code to handle new cases and functionality. Modifications are not backward compatible as methods may have new parameters or fields/structures removed, added or changed.
- Modify existing code to handle new cases and functionality. Modifications are specifically backward compatible so that old cases are handled in an identical manner in the default case with no changes required.
Any verification reuse migration and path combination is possible. Use of source control, project leadership and schedule, and anticipation of reuse potential will all affect the development and change strategy possibilities. In general, the more reusable the module needs to be, the more effort and consideration needs to be invested. Some tradeoffs may be required to balance the verification effort and investment against the anticipated reuse potential.
2.0 Verification Code Reuse Development
Typically, a number of verification components can be designed and created for each of the Device Under Test (DUT) modules. To optimize reuse, the verification components should be aligned logically with common buses and functional blocks of I/O ports. Some of the more common verification components are as follows.
- Bus Functional Model (BFM) driver component. This component will proactively drive and receive interface signals via the signal interface, primarily based on e stimulus commands. This component is sometimes referred to as a master BFM.
- BFM receiver component. This component will reactively receive and drive interface signals based primarily on other interface signals. Any e commands for this type of component are generally limited to configuration and/or behavior mode settings only. This component is sometimes referred to as a slave BFM.
- Protocol checker component. A passive component that looks for and reports interface protocol violations. These components are typically created for the more common and complex interfaces, and frequently report errors by referring to the specific specification violation. These components must never drive any signals or influence any test actions.
- Data extractor component. A passive component that detects whenever particular interface accesses are complete, and then grabs and saves data values for verification use by other checkers in the system. Typically these components contain all of the complexity of how and when to grab the values, and then save the important access values in a data list (or scoreboard) for verification by other components.
- DUT checker component. A component that verifies the actual functionality exhibited. A checker component could range from a simple 1-method checker to a multiple-file hierarchical checker that verifies multiple facets and functionalities of the DUT. Checkers can verify at least two different aspects of DUT operations. Did the correct operation happen? Did it happen in the correct time window?
Sometimes even verifying correct operation requires a cycle-accurate checker. In this case the checker must know the exact state of the DUT to know if the operation is correct or not. This gets more difficult if the exact state can change via an independent operation. In this case the checker must predict the exact internal state of the DUT in order to accurately predict the exact external operation.
This checker may use one or more data extractors to trigger checks when accesses are detected. It may also include cycle-accurate modeling and DUT register mirrors in order to predict and verify the exact responses for every clock cycle. When required, this type of checker may also probe into the DUT module itself, for either internal checks or to help maintain mirror status.
- DUT monitor component. A component that keeps track of all defined monitor points during the course of verification testing. This component uses the specific Specman cover syntax and can be used to tell exactly how much functional coverage has taken place over the course of the simulation test suite.
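As a rough sketch of how such a monitor hangs together (all type, unit and field names here are hypothetical, and the trigger event would come from a data extractor):

```e
<'
// Hypothetical monitor sketch: an extractor emits xfer_done when an
// access completes, and the cover group records what was exercised.
type xfer_kind_t : [READ, WRITE, BURST_READ, BURST_WRITE];

unit xbus_monitor_u {
    event xfer_done;                 -- emitted when an access completes
    cur_kind : xfer_kind_t;          -- filled in by the extractor
    cur_len  : uint;

    cover xfer_done is {
        item kind : xfer_kind_t = cur_kind;
        item len  : uint = cur_len using ranges = {
            range([1..1], "single");
            range([2..16], "burst");
        };
        cross kind, len;
    };
};
'>
```

After a test suite runs, Specman's coverage reports show exactly which kind/length combinations were hit, giving the functional coverage picture described above.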
Each of these components may exist separately or be combined depending on the complexity of the DUT. However in general, it is usually better to keep these components separate, so that future verification can be completed utilizing these components as building blocks. In general, any of the following factors will increase the utility of keeping the components separate: greater complexity of the checking algorithms, a more commonly-used bus or interface method or a great likelihood that the modules will be reused in future projects.
Keep in mind that BFM component types will generally be discarded when moving from individual module testing to higher-level system and chip testing. They will be replaced with actual RTL modules as the system is built up. Of course, exterior pin BFMs will remain. In most verification environments, the protocol and DUT checkers and extractors will be desired at all levels of verification. It is very common to maintain the same checking ability at both module and chip level testing. Therefore the verification structure should be designed so that some components can be easily instantiated and reused at the various levels while others can be easily omitted.
Initially, monitor component reuse is also desirable at the various verification levels. While this may work in some cases, often it is discovered that fully populating a monitor at the chip level is a lot more difficult than at the module level. This is because greater control and flexibility of the module ports can be obtained at the simpler level. While the same monitor component at the system or chip level may provide some insight into chip level coverage, it may also show how difficult it will be to exercise all of the corner cases a module was designed for, within the confines of a system level test. It may be better to design specific chip level monitors, consistent with reasonable expectations of system usage and data traffic or to adjust the expected coverage goals when at the chip level.
Reusing verification code from module level to the chip level is now the expected norm. This level of reuse is often the simplest way to get started and provides the most immediate payback. If modules are reused (or expected to be reused) in newer generation products or in completely different projects, reuse can often still be realized. In these situations, both RTL modules and the corresponding verification components may need some amount of functional modifications first, but reuse in these situations can provide a substantial savings in both time and resources.
3.0 Verification Code Reuse Change Strategy
Once any reusable or potentially reusable verification code has been created, on-going maintenance issues need to be dealt with. Is the new verification user expected to just grab a copy of the existing files and use them with whatever modification is necessary?
Should the engineer use only the base verification components and then extend structures and methods to develop the different functionality or code structure required while leaving the original base code intact? There are also numerous solution possibilities that fall somewhere in between these examples. Things to consider -- is it acceptable to have lots of independent code and files that are similar but not identical to each other? How easy would it be to implement a change to some baseline behavior so that all verification code gets the change? Do some projects want to be specifically isolated from other changes that may be currently taking place?
If the verification code is tied to some RTL module, then the answer to this question should be the same for both the RTL code and the verification code. Presumably any changes made to RTL functionality will require analogous verification changes and that these changes must be maintained as a matching set. Projects using any kind of source control tool will generally check files out and in with the appropriate changes. A final label or tag will then be used to identify all of the changes that go with this set. Any project wishing to reuse the module can simply pick the most appropriate version set or simply take the latest version of the module and verification code set.
The previous example seems intuitive and straightforward. But what if the verification code set uses some basic verification components also used by other modules? What if these verification components need to follow a separate specification that describes a common interface or bus? These types of situations call for a more sophisticated reuse strategy. In these cases, it is especially important to structure the components so that users can utilize the components as is, or can easily extend methods and structures as needed.
If any portion of the interface is optional, the component should be designed to handle all possible options of the interface to minimize unique project effort when reusing the components in specific configurations. This will help ensure a consistent and hopefully error-free implementation by the component expert instead of relying on individual project personnel to make these types of changes.
When actually adding new functionality, it is generally a good idea to make all changes to the components fully backward compatible. This will allow users to upgrade to newer versions of the verification components without having to change existing tests and testbench configurations. It is reasonable to require users to add new control code to configure and use any new features, but they should not have to change anything merely to maintain the status quo. This strategy will allow users to update to newer versions of the components more often, so they can take advantage of any model corrections or robustness enhancements and use the optional new features only when needed.
4.0 Specific Verification Code Reuse Techniques
The previous sections describe general reuse concepts and strategies. In this section, specific reuse examples and issues are discussed in the following list. Please refer to the accompanying PDF file to view code examples associated with each numbered point below.
Port path techniques
1. Handle all hardware connections with ASCII strings and '(string)' dereferences instead of hard-coding HDL signal names in the verification components. Also use the Specman built-in hdl_path() construct to handle HDL signal paths. This implies using units instead of structs for components, and allows for multiple instantiations and maximum connection flexibility.
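A minimal sketch of this technique follows; the unit, field and path names (my_bfm_u, sig_clk, "top.u_dut" and so on) are hypothetical:

```e
<'
// The HDL paths are plain strings constrained at instantiation time,
// so the same unit can be bound to any DUT instance.
unit my_bfm_u {
    sig_clk  : string;   -- e.g. "clk", relative to this unit's hdl_path()
    sig_data : string;   -- e.g. "data_in"

    event clk is rise('(sig_clk)') @sim;   -- '(string)' dereference

    drive(val : uint) @clk is {
        '(sig_data)' = val;
    };
};

extend sys {
    bfm : my_bfm_u is instance;
    keep bfm.hdl_path() == "top.u_dut";    -- HDL prefix for this instance
    keep bfm.sig_clk   == "clk";
    keep bfm.sig_data  == "data_in";
};
'>
```

A second instance with a different hdl_path() constraint binds the identical code to another DUT instance, with no source changes.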
2. Try to keep all connection path constraints together in one place in one file, even if this conflicts with an aspect-oriented programming style. This not only helps reusability, but also fosters improved maintainability and debuggability.
3. Try to connect to module port names instead of the names of interconnecting signals. Port names remain constant at the different verification levels. Remember that internal node names may change when verifying at the gate level.
4. Try to have the component perform some simple checking of all required connections right at the beginning of the simulation. This may include driving initial values and sampling all signals once. This prevents simple connection errors from going unreported until many test cycles have run.
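One possible shape for such a startup check (all names hypothetical): each required path is exercised once at time zero, so a bad path string fails immediately rather than thousands of cycles into the test.

```e
<'
unit conn_check_bfm_u {
    sig_clk  : string;
    sig_data : string;
    event clk is rise('(sig_clk)') @sim;

    check_connections() @clk is {
        '(sig_data)' = 0;              -- drive a known initial value
        wait cycle;
        check that '(sig_data)' == 0 else
            dut_error("connection check failed for path: ", sig_data);
    };

    run() is also {
        start check_connections();
    };
};
'>
```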
Component configuration techniques
5. Some interfaces or common buses have optional signals. A particular implementation may or may not include the signal and corresponding function. Design the components to accept the possibility of having these signals connected or not (not connected means the ASCII string is empty and has no assignment). If possible, the component should also self-configure and modify its functionality to match the implied configuration, based on which optional signals are connected.
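A sketch of this self-configuration idea (names hypothetical): an empty path string means "optional signal not connected", and the component adapts on its own.

```e
<'
unit opt_sig_bfm_u {
    sig_clk    : string;
    sig_parity : string;                     -- leave as "" when unused
    has_parity : bool;
    keep has_parity == (sig_parity != "");   -- self-configure from hookup

    event clk is rise('(sig_clk)') @sim;

    drive_parity(val : bit) @clk is {
        if has_parity then {
            '(sig_parity)' = val;            -- only touched when connected
        };
    };
};
'>
```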
6. Some interfaces or common buses have configurable features that cannot be ascertained from connection options. Use a constrained configurable field to control this type of selection. If the functionality difference is great, consider using conditional extensions based on this field value to modify any TCM sequence. Note that TCM stands for Time-Consuming Method, a basic building block in Specman that is analogous to a VHDL process.
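As a sketch (hypothetical names throughout): the mode cannot be deduced from the hookup, so it is a constrained field, and a when subtype conditionally extends the driving TCM.

```e
<'
type bus_mode_t : [SINGLE, BURST];

unit mode_driver_u {
    mode    : bus_mode_t;            -- constrain per project or per test
    sig_clk : string;
    event clk is rise('(sig_clk)') @sim;

    drive_xfer() @clk is {
        wait cycle;                  -- address phase, common to all modes
    };

    when BURST mode_driver_u {
        drive_xfer() @clk is also {
            wait [3] * cycle;        -- extra data beats in burst mode
        };
    };
};
'>
```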
7. Whenever possible or practical, allow checkers to enable/disable individual checks or aspects of the checks they make. This will allow easy reuse when these particular checks do not make sense for specific projects. This will also allow specific tests, or portions of specific tests, to temporarily disable checks when purposely causing errors.
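A minimal sketch, with hypothetical names: one boolean per individual check, on by default via soft constraints, so a project or an error-injection test can switch any check off without touching the checker code.

```e
<'
unit parity_checker_u {
    check_parity_en  : bool;
    check_timeout_en : bool;
    keep soft check_parity_en  == TRUE;    -- all checks on by default
    keep soft check_timeout_en == TRUE;

    check_parity(exp : bit, act : bit) is {
        if check_parity_en then {
            check that act == exp else dut_error("parity mismatch");
        };
    };
};
'>
```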
Component architecture techniques
8. Allow/expect multiple instances of all e verification components -- provide unit id, string id, or even a type id to allow a user to make each instance unique during instantiation. Remember that a type id is required for conditional extensions and that these extensions are not dynamic throughout the test.
9. Take special care when rewriting long TCMs with an "is only" construct, especially when most of the code is the same and only minor changes are needed. In this situation, any later updates to the original TCM will be missed by the modified copy and will have to be identified and merged manually. It is better to restructure the original TCM with additional leaf methods that encapsulate the exact functionality that needs to change for the new situation.
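The leaf-method approach might look like this (all names hypothetical): the long TCM delegates its variable step to a small method, so a new project overrides only the leaf and still inherits later fixes to the main flow.

```e
<'
unit pkt_driver_u {
    sig_clk  : string;
    sig_data : string;
    event clk is rise('(sig_clk)') @sim;

    build_header() : uint is {
        result = 0xA5;                    -- default header encoding
    };

    drive_pkt() @clk is {
        '(sig_data)' = build_header();    -- only this step varies by project
        wait cycle;
        '(sig_data)' = 0;
    };
};

-- In the new project's files: override only the leaf, not the flow.
extend pkt_driver_u {
    build_header() : uint is only {
        result = 0x5A;                    -- project-specific header
    };
};
'>
```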
10. BFM driver and receiver components should always be separated from extractors, checkers and monitors so they can be used at the module level, but excluded at the subsystem or chip level when various RTL modules are interconnected.
11. Keep data extraction functionality separate from DUT checker functionality. It is very common to initially design these two functions at the same time in the same unit. Keep checker and monitor functionality in separate files as well. It is not uncommon for monitors to have dependencies on the checker, and for the checker to have dependencies on the extractors, but not the other way around. This rule is especially important when multiple checkers (or multiple TCMs of the same checker) use the same data extractor, or when the project requires that checkers always be used but monitors be selectable.
12. Consider cycle-accurate checkers carefully. Sometimes it is necessary for the checker to maintain a cycle-accurate mirror register or internal state in order to accurately predict proper operation. While it is acceptable to monitor internal signals to help keep track of required states (and to possibly verify them as well), a checker should never force external or internal signal values. Leave the forcing of signals to the BFM components. Remember that cycle accurate checkers will have a very tight linkage to the RTL module, which implies that any change in timing due to a RTL change will require a corresponding e code change.
13. Consider reset handling early. There are some techniques that try to handle resets by killing threads and methods when resets are asserted, and then starting them when reset is de-asserted -- but not all resets are created equal. Sometimes a module will have a global and a local reset, or a requirement that certain operations must be stopped cleanly in case of a reset condition. These cases must be handled uniquely. Also consider that some signals can act like a reset but really are not (such as special emulation or test modes) and require special handling.
14. BFMs (stimulus drivers) should be able to create and drive random stimulus as well as specific directed stimulus. The directed stimulus should also allow the test writer to purposely cause errors and faults in order to verify DUT handling. However, the default constraints on the random stimulus should only produce valid data accesses. This will also allow some BFMs to drive specific cycles while other instances drive random cycles in the same simulation.
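One way to sketch this with soft constraints (struct and field names hypothetical): random generation is legal by default, while a directed test can still force an illegal access on purpose.

```e
<'
type access_kind_t : [READ, WRITE];

struct bus_access_s {
    kind         : access_kind_t;
    addr         : uint(bits:16);
    inject_error : bool;
    keep soft inject_error == FALSE;        -- default: legal traffic only
    keep soft addr in [0x0000..0x7FFF];     -- default: mapped region
};

extend sys {
    run() is also {
        var a : bus_access_s;
        -- a directed error test simply overrides the soft defaults:
        gen a keeping { .inject_error == TRUE; .addr == 0x9000 };
    };
};
'>
```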
15. Consider creating a separate TCM in any BFM that drives RTL signals. This TCM can be used as a synchronizer to drive signals at a specified delay time that is not coincident with a clock edge. The BFM logic can drive the internal variables as needed throughout the code, and then the synchronizer TCM can sample these internal variables and drive the signal ports in one convenient place.
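A sketch of such a synchronizer (hypothetical names): BFM logic updates plain internal variables anywhere in the code; one TCM samples them and drives the ports at a fixed delay after each clock edge.

```e
<'
unit sync_bfm_u {
    sig_clk   : string;
    sig_data  : string;
    sig_valid : string;
    !next_data  : uint;            -- shadow values, set by the BFM logic
    !next_valid : bit;

    event clk is rise('(sig_clk)') @sim;

    synchronizer() @clk is {
        while TRUE {
            wait cycle;
            wait delay(2);         -- fixed delay past the clock edge
            '(sig_valid)' = next_valid;
            '(sig_data)'  = next_data;
        };
    };

    run() is also {
        start synchronizer();
    };
};
'>
```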
16. When modifying/supporting features, careful consideration needs to be given to portions that may be added or subtracted in different designs. For instance, if a bus checker handles four types of accesses, consider the possibility that someday a fifth will be added. User-defined types and case statements on those types normally produce an effective, self-documenting and easy-to-modify coding style.
17. Keep all project specific code and functions out of the core code to be reused. Instead use a common but separate file for all constraints and configurable code fragments for unique project modifications. This aspect-oriented approach allows for easy identification and maintenance of all modifications without affecting the standard base code.
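A sketch of this separation (all names hypothetical): the core file defines the component with soft defaults, and a separate per-project file holds every deviation as extensions, so the core is reused byte-for-byte. Later-loaded soft constraints take precedence over earlier ones, which is what makes this work.

```e
<'
-- core file (reused unmodified across projects):
unit timer_checker_u {
    timeout_cycles : uint;
    keep soft timeout_cycles == 16;       -- library default
};

-- proj_x_config.e (the only file that differs per project):
extend timer_checker_u {
    keep soft timeout_cycles == 64;       -- Project X uses a slower bus
};
'>
```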
Error handling/reporting techniques
18. Report protocol errors as clearly as possible, either by citing the exact numbered specification rule id or by creating your own numbered rule set.
19. Reserve reporting DUT errors for actual erroneous DUT behavior. Consider using a DUT warning for accesses that are technically illegal but handled correctly by the DUT module. For instance, a logged warning would then indicate bad stimulus, which may or may not have been created on purpose.
20. Use Specman's built-in error reporting feature whenever possible. This feature will report any user message as well as the exact file and line of code, plus it allows you to define and customize the handling of different classes of errors. For example, you could define one class of errors that are fatal, a second class that are DUT errors, and a third class that are test sequence warnings. You can also configure how the simulator handles each class: fatals stop the test immediately, errors are counted but do not stop the test, and warnings are counted separately without stopping the test. The test passing criterion could then be that the error count must equal zero.
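A small sketch of this style (checker name and message tags hypothetical): dut_error() automatically reports the source file and line, and set_check() can then map message patterns to severities per test or per project.

```e
<'
unit resp_checker_u {
    check_response(exp : uint, act : uint) is {
        check that act == exp else
            dut_error("DUT_ERR: response mismatch, expected ", exp,
                      " actual ", act);
    };
};

extend sys {
    setup() is also {
        -- demote anything tagged as a sequence warning; count it only
        set_check("*SEQ_WARN*", WARNING);
    };
};
'>
```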
21. Avoid having long multi-cycle expect statements that report a single generic error whenever a complicated sequence does not occur as expected. These make it difficult for non-experts to ascertain the exact cause/step of the error.
Testbench architecture techniques
22. Sometimes it may be advantageous to build an e testbench structure that parallels the corresponding RTL structure. Each level of the e hierarchy assigns its own RTL instance name to its HDL path, and Specman appends these names to form the full signal path. This makes entire subsystem reuse (both RTL and e components) very easy to handle.
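A sketch of the parallel hierarchy (all names hypothetical): each e unit's hdl_path() is just its RTL instance name, and Specman concatenates them down the tree, so relocating a whole subsystem means changing one path at the top.

```e
<'
unit uart_env_u {
    -- signal paths here resolve relative to this unit's full hdl_path()
};

unit chip_env_u {
    uart : uart_env_u is instance;
    keep uart.hdl_path() == "u_uart";         -- RTL instance name only
};

extend sys {
    chip : chip_env_u is instance;
    keep chip.hdl_path() == "tb_top.u_chip";
};
-- uart_env_u's signals now resolve under "tb_top.u_chip.u_uart"
'>
```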
23. When trying to verify parameterizable RTL, use Specman to read in generics or parameters and then self-configure the e testbenches and tests to the appropriate configurations. This may involve the components or testbenches actually "building" signal string names themselves based on the detected configuration.
Import file techniques
24. When importing e files, always try to use a consistent pathname convention. There are at least two different conventions that could be used and the decision on which to use should be based on your intended project file directory structure. If reusable modules will be kept in some standard dir or library (preferred), then it would be better to always use full pathname syntax for all import statements. If reusable modules are kept (as snapshot copies or links) in each individual project directory, then it would be better to use relative pathname syntax for all import statements. Either way, a consistent convention will ensure reusability without import statement modification.
25. Try to have each reusable e file import all required files itself instead of relying on some previous file to import all files as a group. This will help illustrate actual file dependencies as well as allow Specman to handle any cyclic dependencies. It will also help each file to stand alone in the case of a partial-reuse situation.
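The two import conventions above might look like this (all file and directory names hypothetical); each file imports exactly what it depends on, using one consistent convention, so it loads stand-alone:

```e
<'
-- Library convention: full paths from the standard library root.
import vlib/xbus/xbus_types;
import vlib/xbus/xbus_driver;

-- Snapshot-copy convention would instead use relative paths, e.g.:
--   import ../xbus/xbus_driver;
'>
```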
5.0 Conclusions and Summary
As verification reuse opportunities continue to grow, and schedules continue to shrink, more and more engineers will need both to create reuse-friendly code and to reuse code from others. Inevitably, it will become the normal and expected way to verify all ASIC designs. Just as the tools and languages have evolved to allow this to happen, specific reuse techniques will also evolve to make it happen more efficiently.
This discussion, and others on the same subject, can provide a solid foundation -- but the real improvements (and productivity gains) will come from well-designed verification code and components that have reusability techniques designed in from the start.
Peter Spyra works as a verification consulting engineer for Integre Technologies in Rochester, N.Y. His recent project work includes multi-processor SOC verification for a major foundry.