Many design project teams and research and development groups are working to significantly reduce system verification runtimes and to perform system verification earlier in the design process. One solution successfully being used to address both issues at once is the creation of higher-abstraction models of important system components, typically written in standard C/C++ or in class-library extensions of C++ such as SystemC.
Once a library of models is available, a home-grown or commercial C or mixed simulation environment can be used to execute the C-based system design. Then, high-level hardware-software co-verification can be performed if an interface mechanism exists to run embedded host code on a processor core model in conjunction with the external C modeling environment. This, of course, also requires the availability of a model of the embedded processor that can run code and be instantiated in the design.
With a mixed C and HDL simulation environment, the high-level models can facilitate faster system verification runs at many stages in the design process and for many purposes. This allows C models to become useful before everything in the design is modeled in C. Engineers are getting quite creative in how they use this kind of environment. This article describes some of the ways that C models are used, and discusses some of the concerns that need to be considered by any group wishing to adopt this kind of methodology.
System-level design exploration
A typical application of high-level model simulation is the verification of system performance against the targeted throughput speed, which might be dependent on, for example, memory configuration, bus topology, or hardware-software partitioning decisions. These high-level models can contain enough timing detail to give the designer a good estimate of the cycle counts for system processes split between hardware and software.
The system may use a DSP or a custom co-processor in conjunction with a microprocessor to analyze or transform data streams. The major design blocks -- for example, processors and memory -- can be represented by cycle-approximate models and basic memory models (simple arrays) with all data transfer done at the transaction level.
If assessing throughput is the main goal of a particular simulation, then the models at this stage do not need to correctly perform the required processing. They can simply pass back data of the correct type and consume the right number of cycles. This can be refined to a more accurate model later in the design process.
Figure 1 -- Example of a simple system-level design view
Once the main architecture decisions have been made, this same high-speed simulation environment can be used as a firmware development platform.
Accelerating downstream verification runs
Once a library of high-level models exists, these models can be reused at the detailed simulation level to selectively replace the HDL design description within a system simulation, reducing the load on the logic simulator. This can be done using a tool such as Mentor Graphics' Seamless with C-Bridge, which allows C-models to be connected to the processor bus and mapped into the processor's address space. Seamless will route an access to those address ranges to the C-Bridge model instead of to the logic simulator.
The C-models are used in place of the HDL for design blocks that are not the focus of the current verification run. They can also be used in place of components that have already been proven on previous projects. These C elements can include abstract or partial functionality, modeling only the functions necessary to maintain the overall system integrity.
For example, simple behavioral models that supply test data to the internal circuitry, perhaps from a file, can replace well-tested I/O port interface logic. Assuming the HDL design database is constructed to facilitate this, pre-defined circuit configurations can be created for the different verification tasks. This stripped down database can also be released to software developers to provide a faster simulation environment for firmware development, or to verify the booting of an RTOS.
Common challenges in C-based design
A growing number of embedded system developers are either implementing or investigating these methodology changes. A shift of this significance should not be expected to be trouble-free. Some of the typical problems to be faced by these pioneers are:
- Modularity of the model creation process. In other words, can the hardware models be built piece by piece, possibly by different groups? Not all of the blocks need to be modeled this way if targeting a mixed C and HDL environment. Also, being able to phase in the availability of selected hardware elements could limit the initial overhead and lower the risks involved.
- Modularity of the compile and link process. There's evidence that this can be a significantly time-consuming process in some environments, especially within a tight verify->modify->re-verify loop.
- Resources and expertise for model creation.
- Maintenance of the simulation environment. The impact of this depends on whether an in-house or commercial environment is used. The normal make-versus-buy decision must be made here. The availability of quality commercial solutions is critical.
- Maintenance of the models as the design progresses. Keeping the system-level models in synch with the detailed RTL is important if they are to be reused at the detailed verification level and if they are to be used by software development teams. This also helps with reuse between product generations.
- Availability of models from third-party IP suppliers.
- Interfacing models from different suppliers (both internal and external), at different abstraction levels (for example, signal versus transaction), or in different languages.
- Accuracy concerns. Those used to working at the RTL level and below are understandably wary of making major design decisions based upon less detailed hardware models. This also comes into play when reusing the abstract models in a mixed environment.
- Links to actual implementation. If the usefulness of the abstract models begins and ends with the high-level system simulations, then the ROI of the modeling work starts to look less attractive for some applications.
Using pre-defined interfaces (such as an efficient API to connect to the processor buses) and a signaling methodology for inter-block communication (in other words, something more robust than using global variables) can address the modularity concerns. Care must be taken if using a simple method, such as global variables, especially if multiple instances of a single model are included.
A better method is to create a channel-based interface, such as the one used by the SystemC language. In this case, the communication channel is created independently of the functional models, so that individual design blocks do not require knowledge of each other's interfaces. This is somewhat analogous to the separation between library cells and nets in schematic design (for those who can still remember how that worked).
The channel is defined as an object containing a data type and the methods defined to access that data. For example, it could be as simple as a 32-bit word with read and write functions.
Figure 2 -- Channel-based connection modeling
As for compile and link times, a tool or environment that requires every model to be linked into one executable before simulating can be cumbersome. Dynamically linking each model individually provides much quicker turnaround for individual model changes, usually reducing this step to minutes instead of hours.
The resource issue is a primary concern to many development managers and to over-taxed hardware engineers. There are learning curves involved, from both a technical and an organizational point of view, in moving towards this type of system design methodology. The technical learning curve requires engineers to become more familiar with the C language and abstract modeling techniques.
The organizational learning curve requires modifying project structures to accommodate new tasks and re-order some traditional ones. It is also possible that a different project team structure may be necessary to avoid having resources pulled back into traditional roles by schedule pressures.
Additional project tasks include the modeling work and system level simulations. Schedule changes reflect more parallelism in software and hardware development as well as the initial system exploration stage. Figure 3 shows an example of how design teams may change the project structure.
Figure 3 -- Expected project structure differences
Project dependencies and task timeframes vary, depending on application and project team structure, but the overall intent -- increasing the opportunity for concurrency while reducing verification runtimes -- is the key point. In addition, the possibly extended system design time may yield a better, more competitive product if the system exploration allows for less pessimistic design choices.
The good news is that the initial modeling burden involved with the shift to abstract system modeling and simulation is significantly reduced in subsequent projects. The amount of commonality from one product generation to the next allows for the reuse of much of the model code.
Some companies are looking to external resources to do the modeling job, and there are a number of options for that avenue also -- from the design services groups of the EDA players in this space to independent modeling shops. In summary, the resource issue remains a primary reason for many to stay with the traditional "from spec straight to HDL" design process, so there will undoubtedly be the usual combination of early-adopters, early and late majority, and laggards.
If a design contains a high percentage of external IP, then engineers are more dependent on the IP supplier to start delivering the abstract C models. Some of the leading vendors in that space are already moving in that direction.
Hopefully that trend will continue. Interfacing between all of the potentially diverse model styles is then going to be the major task faced by the end-user. Using an environment that is flexible in terms of C-language flavor and interface level can alleviate the bulk of the pain here.
The question of whether seasoned hardware engineers trust a C model to be a good representation of the design intent has come up often in the author's discussions with design teams, especially when that model is externally sourced. Alternatively, the concern might be that the C and HDL descriptions do not match, or will diverge over time, especially if the two simulation environments are very different.
Having an environment that allows the C-models to be run in the same simulation tool with the same testbench certainly helps alleviate these fears and bridge the gap between abstract system modeling and real design implementation. Whatever solution is chosen, some thought needs to go into a verification strategy that ensures the consistency of the different abstraction models, just as is done today between the RTL and gate level.
In some companies, C-based hardware modeling has been used at the system level for a while, typically within in-house, custom environments. With the arrival of new tools, languages, and methodologies, the potential benefits of these models are being extended, and the barriers to implementation are being reduced -- making abstract C modeling more interesting to a broader set of design groups.
Mike Andrews is a Technical Marketing Engineer for the SoC Verification Division at Mentor Graphics, specializing in 'C' based design. He received a Bachelor of Science Degree in Electronics and Applied Physics from the University of Durham in the UK in 1989. Since then, Mike has worked in the related areas of ASIC design, cell modeling and ASIC/SoC EDA tools and has contributed to a number of modeling related standards groups.