It has been said that during the rough-and-tumble days of 1950s Chicago politics, a ward boss asked an Adlai Stevenson volunteer who had sent him to help in the campaign; when the answer was "nobody," the boss replied: "We don't want nobody nobody sent." It is much the same with reuse: nobody wants reuse that nobody reuses. If you have to question the benefit of reuse, then you should not spend resources making anything reusable. Conversely, no convincing is necessary when reuse is a practice woven into the design process, so that what is built gets used with cool efficiency.
Many dogmatic arguments have been made about the advisability of planning for reuse when project schedules can suffer from added design delays or an increased documentation burden. The originator of a reusable element or methodology usually dreads the subsequent support demands from future users of that element. To be sure, many misguided reuse attempts failed to achieve the intended benefits of reuse: modularity, efficiency, repeatability, productivity, interoperability, consistency and development cost savings. The desirability of reuse can change significantly depending on the time horizon a return on investment (ROI) analysis chooses and on the reuse scenarios that marketing envisions. Reuse of IP in a different context can turn one perfectly functioning chip into an inoperable one, which has made many cautious about whole-heartedly embracing reuse. It is undeniable, though, that a methodology that includes reuse as common practice garners many benefits in quality and productivity even when no reuse is intended or planned. This is because the reuse discipline sharpens awareness of interoperability and interdependencies while serving as an excellent communication vehicle among temporally or geographically distant authors and beneficiaries of reuse. Streamlining, breaking the problem into manageable pieces and reducing the complexity of interactions are benefits that come to mind when reuse intent and practice are at work.
The reuse imperative (where the developer and implementer are two separate entities) has potentially two separate goals, depending on whether the point of view is that of the reuse content developer or of the reuse content consumer. The developer wants to implement the most universal and versatile solution to maximize dissemination and sales, while the consumer wants the solution that provides the most differentiation, with unique features not easily reproducible, and may be willing to give up reuse to maintain that hard-to-reproduce value-add. One way to reconcile these two conflicting necessities is to develop adaptive, reconfigurable approaches that allow a single development with multiple, user-modifiable uses. A memory compiler is a perfect example of design once, retarget many times, without compromising differentiation. With this in mind, creating environments that lend themselves to reuse and retargeting can provide the optimal solution for reuse and value maintenance while helping productivity. To complete the picture, one has to be aware of whether reuse is applied pre-silicon or intended for post-silicon reconfiguring. Field-programmable gate arrays (FPGAs) and tunable/register- or software-programmable options are examples of post-silicon reuse.
Planning for reuse
Reuse can be envisioned at different stages of the design process and is not always a specific deliverable (as in, say, a reusable IP block); it can also be a methodology (interoperable flows or sharable resources) or a technique for reproducible or regenerative work that can be easily retargeted or morphed into different incarnations with consistent and repeatable outcomes. One can even stretch the notion of reuse to include time: resources freed up by reuse can be applied to other work.
From a practical perspective, reuse can be enabled in the following ways:
- Consistent infrastructure
- Repeatable and interoperable flows
- System design reuse
- Logic design reuse
- Physical design reuse
- IP design guidelines for interoperability and re-configurability
One can also envision degrees of reuse, graded by the amount of change one has to apply to an element or a practice. Without sounding too much like an oxymoron, one can then speak of flexible reuse or quasi-reuse.
Consistent infrastructure fosters reuse discipline. It can consist of:
- Configuration management that tracks and organizes work environments.
- Design kits that allow parameterizable symbol elements and built-in tool wrappers.
- Defined and enforced directory structures with automatic project creation and derivative design spawning.
- Tool tracking and reporting, and job submission and prioritization.
Repeatable and interoperable flows
The basic rule of interoperability states that it should be possible to import the output of any deliverable into an equivalent environment or tool, and to export that deliverable anywhere it can be used to continue the design process, making the flow immune to closed and captive systems. Standard input and output formats are to be expected. Open standards (OpenAccess [OA]), interoperable flows (standard design constraints) and file formats (GDS) are perfect examples, and one should ask for no less if flexibility and reusability are sought. Let us label this the "input-output" reuse imperative.
System design reuse
At the system level, and from a reuse perspective, the first task is to design the electronic system level (ESL) framework that allows architecture and software development, hardware verification and software performance analysis. This can be accomplished through the use of Transaction Level Modeling (TLM). The TLM 2.0 standard allows SystemC model interoperability and reuse at the transaction level.
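The core idea behind transaction-level modeling is that an initiator exchanges whole transactions with a target instead of toggling pins cycle by cycle. The sketch below illustrates that concept in Python; the names (Transaction, b_transport) echo TLM 2.0 conventions, but this is a conceptual sketch, not the SystemC TLM 2.0 API.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    """A whole read/write request, the unit of exchange in TLM."""
    command: str          # "READ" or "WRITE"
    address: int
    data: int = 0
    response: str = "PENDING"

class MemoryTarget:
    """Target model: services transactions at the word level, no pin detail."""
    def __init__(self):
        self.mem = {}

    def b_transport(self, txn: Transaction) -> None:
        if txn.command == "WRITE":
            self.mem[txn.address] = txn.data
        elif txn.command == "READ":
            txn.data = self.mem.get(txn.address, 0)
        txn.response = "OK"

class Initiator:
    """Initiator model: issues transactions through a socket-like binding."""
    def __init__(self, target: MemoryTarget):
        self.target = target

    def write(self, address: int, data: int) -> None:
        self.target.b_transport(Transaction("WRITE", address, data))

    def read(self, address: int) -> int:
        txn = Transaction("READ", address)
        self.target.b_transport(txn)
        return txn.data

# Software-visible behavior can be exercised with no pin-level detail,
# which is what makes such models reusable for architecture exploration
# and early software development.
cpu = Initiator(MemoryTarget())
cpu.write(0x1000, 0xCAFE)
assert cpu.read(0x1000) == 0xCAFE
```

Because the interface is defined at the transaction level, the memory target here could later be swapped for a cycle-accurate or RTL model without changing the initiator's code, which is the interoperability TLM 2.0 standardizes.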
The second task is to look at partitioning the various building blocks of the platform and deciding on the configuration with the most modularity, in order to simplify interfaces and stand up to the verification challenges with the fewest interactions, complex behavioral dependencies, and instances of unnecessary handshaking.
When embarking on designing IP for reuse, it helps to look at reusable IP as an aggregate of sub-modules that can themselves be candidates for reuse. Identifying the right level of granularity of the reuse element can help in partitioning modules so that various teams can share the task of developing and verifying sub-modules. In Figure 1, if three teams are developing IP1-2-3, each team can focus on one sub-module (say, the IP1 team develops Block A, the IP2 team develops Block C and the IP3 team develops Block D). For this to work, a consistent design environment must exist and specifications must cover the use scenarios in all current and future uses.
Figure 1. Block reuse inside a reusable IP.
To assess the quality of a particular IP, one can use the GSA's risk assessment tool for hard IP or the VSI's QIP metric for soft IP (now legacy work donated by the defunct organization). In considering IP use, the first order of business is deciding whether the IP can be procured from a third-party source or needs to be internally developed. Tools such as QCore can report the level of compliance with an internal standard of interoperability, the fit for integration, and the degree of completeness of the deliverables. A good way to do initial explorations followed by detailed assessments is Chip Estimate's InCyte tool, which provides an analysis mechanism for discovering what is available in which foundry and technology node, with what size/area/performance metrics. The tool also helps in cataloging and tracking internally developed IP.
If the intent is to internally develop the IP and target it for reuse, the next challenge is to define the requirements in a portable and systematic way that allows for easy retargeting, even at an RTL abstraction level. Considering that specifications need to be as formal and as crisp as possible, the system specification needs a common language that can define the functionality and requirements of operation. An ESL-based specification, such as the SPIRIT Consortium's IP-XACT 1.4, can help in transaction-level modeling and verification methodologies by allowing bus interface definitions both at the transaction-level interface and at the RTL-level description. This allows for a seamless ESL-to-RTL handoff and enables top-down, correct-by-construction design and behavioral synthesis bridging the ESL and RTL realms. An IP portfolio can thus be defined, described and transported along with generators and all of the accompanying IP views, from RTL to SystemC or SystemVerilog TLMs to device drivers.
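Because IP-XACT describes components in machine-readable XML, integration tooling can consume the same description that documents the IP. The sketch below parses a deliberately simplified, hypothetical component description; real IP-XACT 1.4 files use the SPIRIT namespaces and a much richer schema, so the element names here are illustrative only.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily abridged IP-XACT-style component description.
# Real IP-XACT uses spirit: namespaces and many more required elements.
component_xml = """
<component>
  <name>uart_lite</name>
  <busInterfaces>
    <busInterface><name>s_apb</name><busType>APB</busType></busInterface>
    <busInterface><name>irq</name><busType>interrupt</busType></busInterface>
  </busInterfaces>
</component>
"""

root = ET.fromstring(component_xml)
ip_name = root.findtext("name")
# Collect (interface name, bus type) pairs for integration checks.
interfaces = [(b.findtext("name"), b.findtext("busType"))
              for b in root.iter("busInterface")]

print(ip_name)      # uart_lite
print(interfaces)   # [('s_apb', 'APB'), ('irq', 'interrupt')]
```

A catalog tool can walk many such descriptions to match bus types between an IP's interfaces and the platform's interconnect, which is exactly the kind of automated integration the standard enables.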
Logic design reuse
In verification, the use of a consistent methodology of assertions and functional coverage is highly desirable to help promote code and design reuse. Assertions are statements about the design's intended behavior, and functional coverage provides an excellent measure of verification thoroughness. Together they help debug failing simulations more quickly, and they facilitate communication between the RTL design and verification teams.
Coding recommendations for verification
The following practices are recommended in verification:
- Assertions should describe error/illegal conditions that one never expects to occur
- Assertions must be defined and described in the micro-architecture document (per block)
- Assertions should be defined only for:
  - Fatal error conditions
  - Assumptions that are violated on interfaces
  - States or conditions that should never be reached
- Assertions should not be defined for error conditions that may occur in real life, such as:
  - Reception of a packet with a CRC error
  - Reception of a malformed packet
- One needs to be able to control the embedded assertions within the RTL:
  - Enable/disable ALL assertions (for better simulation performance)
  - Enable/disable assertions on a per-block basis (e.g., BLOCK_A, BLOCK_B, etc.)
  - Enable/disable console displays for fired assertions (to reduce clutter)
  - Control the termination of the simulation after an assertion fires:
    - Exit on firing of any one assertion
    - Exit on firing of any one assertion N times
- Put all other assertions inline with the RTL (this is not a mandatory requirement, but it helps document the code and the designer's intent):
  - Best captured when the RTL is being coded
  - Accelerates debug of failed assertions: the file name and line number are displayed on the console, and the RTL and assertion can be viewed simultaneously
  - Helps reviewers of the code realize where to add more assertions
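In SystemVerilog flows these controls are typically implemented with `ifdef guards, simulator plusargs, or $assertcontrol. As a language-neutral illustration, the sketch below models the same control surface (global and per-block enables, console-display throttling, and exit-after-N firings) in Python; all class and method names are hypothetical.

```python
import sys

class AssertionController:
    """Models the assertion control knobs listed above (illustrative only).

    Real RTL flows implement these with `ifdef guards, simulator plusargs,
    or SystemVerilog $assertcontrol; nothing here is a real simulator API."""

    def __init__(self, exit_after_n: int = 1, display: bool = True):
        self.global_enable = True     # enable/disable ALL assertions
        self.block_enable = {}        # per-block enables, e.g. {"BLOCK_A": False}
        self.display = display        # console messages for fired assertions
        self.exit_after_n = exit_after_n
        self.fired = 0

    def check(self, condition: bool, block: str, msg: str,
              filename: str = "?", line: int = 0) -> bool:
        """Evaluate one assertion; returns True if it fired."""
        if not self.global_enable or not self.block_enable.get(block, True):
            return False              # disabled: skipped for simulation speed
        if condition:
            return False              # assertion holds
        self.fired += 1
        if self.display:              # file/line shown to accelerate debug
            print(f"ASSERT FAILED [{block}] {filename}:{line}: {msg}")
        if self.fired >= self.exit_after_n:
            sys.exit(1)               # terminate the "simulation"
        return True

# Usage: disable one block, let assertions in other blocks still fire.
ctrl = AssertionController(exit_after_n=10, display=False)
ctrl.block_enable["BLOCK_B"] = False
ctrl.check(False, "BLOCK_B", "fifo overflow")            # skipped: disabled
fired = ctrl.check(False, "BLOCK_A", "state unreachable", "fsm.v", 42)
assert fired and ctrl.fired == 1
```

The point of centralizing these knobs is that a regression can run fast (assertions off), a debug session can run loud (displays on, exit on first firing), and both use the same RTL.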
The design coding should also be consistent. One should have naming rules and conventions in modules, ports and signals. Rules should exist for reset handling, hierarchy depth and block size limits.
The guidelines below summarize some recommended design coding rules.
- Completely list all files associated with a design in a single file, say .F, placed under the design directory (at the same level as RTL, shell, gate and so on). Design_top should match the top of the design hierarchy, which often is identical to the name of the design directory.
- Use cpp-style (C Preprocessor) conditionals (#if, #ifdef) to adapt filelists for the needs of different tools. This includes specifying gate level netlists (#ifdef GATE), simulation (#ifdef SIM), linting (#ifdef LINT), synthesis (#ifdef SYN), and formal verification (#ifdef FORMAL). By default, when no cpp variables have been defined, the resulting filelist should list the RTL files. (The typical user need only be concerned with GATE. The other defines are normally included automatically through reference to library filelists.)
- When using library cells (e.g. standard cells, RAMs, IP with behavioral models) design filelists should reference project-common library filelists. These pre-defined filelists ensure project-wide consistency and automatically address specific tool requirements through cpp conditionals. Although the location of these links may vary from project to project, they are typically located in something like frontend/common/lib_links.
- When a design includes library cells or other designs as sub-modules, never directly list the contents of their filelists in .F. Always sub-reference their .F files using the "-F" directive. The -F directive reads the contents of another filelist, but also prepends the path of that filelist to the contents.
- Always use relative paths within .F to make the design relocatable.
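The filelist rules above can be sketched in one small resolver. The subset of cpp handled here and the example filenames are illustrative; a production flow would invoke the real C preprocessor rather than reimplement it.

```python
import os

def resolve_filelist(path: str, defines: set[str]) -> list[str]:
    """Recursively expand a .F filelist (illustrative sketch only).

    Handles two conventions described above:
      * cpp-style #ifdef/#else/#endif conditionals (a tiny subset of cpp), and
      * the -F directive, which reads another filelist and prepends that
        filelist's directory to its entries, keeping designs relocatable.
    """
    base = os.path.dirname(path)
    files, keep = [], [True]
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("//"):
                continue                       # blank lines and comments
            if line.startswith("#ifdef"):
                keep.append(keep[-1] and line.split()[1] in defines)
            elif line.startswith("#else"):
                keep[-1] = not keep[-1] and keep[-2]
            elif line.startswith("#endif"):
                keep.pop()
            elif not keep[-1]:
                continue                       # inside a disabled region
            elif line.startswith("-F"):
                files.extend(resolve_filelist(
                    os.path.join(base, line.split()[1]), defines))
            else:
                files.append(os.path.join(base, line))
    return files
```

Called with defines=set(), such a resolver yields the default RTL view; called with {"GATE"}, it yields the gate-netlist view, with every path made relative to the filelist that mentioned it, so the design tree can be moved or copied without edits.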