Development managers and architects for today's system-on-a-chip (SoC) designs can choose from a wide range of development tools and methodologies to deliver results in the hardware/software co-development space. Existing tools do a poor job of letting hardware and software expertise mix. New languages and new methods can improve this, but their introduction is expensive, and if they involve coding in a new language, a development manager has to gamble on whether the language will become a standard or turn out to be another dud.
As a tool vendor focused on hardware/software co-development, Tenison EDA has distilled many years of practical hardware/software interworking experience into an approach that can be summarized as:
- Do not force takeup of new languages and class libraries, or even new methodologies for HDL coding. Working with mixed languages is a fact of life, as is IP import and reuse in existing languages. Change will come, but slowly. For today, the important thing is to help the most commonly encountered languages, C and Verilog for example, be productive together, rather than trying to replace them with a single panacea.
- Do not force takeup of a new hardware design flow. Choices in this area are difficult enough!
- Current all-in-one hardware/software development tools tend to work well at the beginning of the project during architectural modeling, but then fizzle out until real silicon arrives, and joint bringup and test can occur. It would be far better to get good interworking during the entire development process by providing open, flexible tools, since SoC teams prefer to use "best-in-class" tools from multiple suppliers instead of tools from a single vendor.
Figure 1: This chart represents the current hardware/software co-development problem (The Software Modeling Continuity Gap). Delivering a real SoC requires solutions in many problem areas, but delivery of working, effective software is easily forgotten, or left until too late.
A modern SoC development overcomes many technological hurdles using a multi-disciplinary team. Very often, management attention is focused on tapeout and working silicon, but in practice a return on investment is only provided when both hardware and software are working effectively together in the field. This means that early software involvement, from the design and architecture phase all the way through to final field trial, can lead to a real reduction in time to market.
Figure 1 summarizes this situation. Silicon verification, layout, and timing closure, along with working silicon prototypes, are huge challenges. However, very often these tasks are viewed by the core-development team as the endgame, rather than simply an intermediate step.
Software modeling and prototyping are essential in order to overcome this problem. Without software modeling, delays in tapeout and prototype availability can lead directly to delays in the product. With good software modeling, the development of software and hardware can happen in parallel so that when the prototypes arrive, the software is ready.
In addition, software modeling is important for performance design. Without it, the resulting silicon could be too slow (in terms of delivered throughput) or too fast (in other words, more expensive than it needs to be). In general, such decisions should be made during the architectural phase, before any RTL coding. In practice it may not be possible to be completely accurate in the architectural phase, for instance if the architectural modeling is not cycle accurate or relies on other forms of performance estimation. In such cases, the actual delivered throughput of the system might only be determined when silicon prototypes are available.
Figure 2: Software is part of the SoC development process. Designers need a system-modeling strategy that permits continuous software team involvement in the overall SoC project.
To overcome this gap in software activity between architectural work and driver work, a software modeling strategy must be employed that lets software activity continue on from the architectural work, so that system development can proceed before final hardware is available (Figure 2). The exact strategy chosen may depend on the detailed requirements of the SoC itself, or on other aspects of the SoC project.
We believe that for many systems the best strategy for this system modeling is to do the work in C, drawing on architectural models and also on RTL-derived models. The advantages of this approach are:
- Because the ultimate output of this work is driver software, the results are likely to be owned and used by the software team rather than the hardware team.
- Using C means the software team is already familiar with the tools, languages, and concepts involved.
- The freedom to use RTL-derived models means the system model can incorporate third-party IP, legacy IP, or other elements of the system that, for whatever reason, are not represented in the architectural models.
- System test by software engineers requires good overall system performance, if necessary at the expense of HDL accuracy.
This final requirement can push the project in the direction of hardware emulation, or FPGA prototypes. These provide the ultimate in performance but with a high cost in terms of flexibility, early availability, debug visibility, and dollar cost per seat.
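To make the driver-against-a-model strategy concrete, here is a minimal C sketch. The device, its register map (STATUS, TXDATA), and all function names are invented for illustration; the point is only that driver logic touching hardware exclusively through register reads and writes can be developed and tested against a C model long before silicon, FPGA prototypes, or an emulator exist.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical C model of a simple memory-mapped transmit device.
 * All register names and layout are illustrative, not from any real IP. */
enum { REG_STATUS = 0, REG_TXDATA = 1 };
#define STATUS_TX_READY 0x1u

typedef struct {
    uint32_t regs[2];
    char     last_tx;   /* the byte the model most recently "transmitted" */
} dev_model;

/* Model-side register access: stands in for real bus transactions. */
static uint32_t dev_read(dev_model *d, int reg) { return d->regs[reg]; }

static void dev_write(dev_model *d, int reg, uint32_t val) {
    d->regs[reg] = val;
    if (reg == REG_TXDATA) {
        d->last_tx = (char)val;                 /* device consumes the byte */
        d->regs[REG_STATUS] |= STATUS_TX_READY; /* ready for the next one  */
    }
}

/* Driver-side code: the same logic could later target real hardware,
 * with dev_read/dev_write replaced by actual bus accesses. */
static int driver_send_byte(dev_model *d, char c) {
    if (!(dev_read(d, REG_STATUS) & STATUS_TX_READY))
        return -1;                              /* device busy */
    dev_write(d, REG_TXDATA, (uint32_t)c);
    return 0;
}
```

Because the driver routine only sees the device through register accesses, the model behind those accesses can later be swapped for an RTL-derived model, an FPGA prototype, or silicon without changing the driver logic.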
What Are The Alternatives?
There are many choices for software modeling in a SoC project, both for methodology and specific tool use. This section contrasts some of these approaches, and also points out cases where their use may be complementary.
- A 'C' model from the architectural phase is all you need
This works excellently if architecture and requirements do not change during the life of the project. This is often appropriate for a single-function chip, or a single silicon-IP block such as a microprocessor, but it does not scale well to larger integrated projects. This methodology can be burdensome when legacy and third party IP are involved.
- Build FPGA prototypes or buy a hardware emulator for use by the software group
Many current projects de-emphasize detailed modeling in the system-architecture phase, waiting until late in the RTL coding process to present complete FPGA prototypes to the software group. This approach can work well for small projects but does not scale to large SoCs. The result tends to be late and rigid, and it requires significant hardware resources. It can also be hard to debug or profile; obtaining signal traces, for example, is difficult. Finally, the limited number of seats (due to dollar cost) tends to restrict access to 'core' software group members only, which can place subtle limits on early-developed driver software.
- Use co-development environment products
Currently available products tend to work well if the whole chip development methodology and coding style is centered around that product, but do not work well in projects with a significant degree of legacy or third-party IP, or with tricky requirements such as multiple CPUs or complex bus structures.
- Use software simulator products in the software group
Very few software teams use this route to produce driver software. These tools tend to be too slow and too expensive per seat for larger teams, and they impose a steep learning curve on software engineers.
What About Large Organizations?
Follow-on projects with existing IP, parallel projects from different groups, and related projects on separate sites can make it hard to apply a single controlling methodology to all modeling. In such cases, more than one method is often in use, or transitions occur over time while carrying IP forward. In these cases, the option to create architectural models from implementations can be a valuable technique.
How Will These Practices Evolve?
In practice, most hardware design groups today allow a significant gap to exist between architectural modeling and the development of driver software. Larger chips and more complex software interactions will increase the opportunity cost that this incurs. Each of the previously described approaches will evolve over the next decade.
What Does The Future Hold?
How will hardware/software interaction evolve over the next decade? Perhaps we're talking about a problem that's going to disappear over time? There is much discussion, research, and product activity at the moment in improved tools for the architecture phase of co-development projects, with SystemC being widely discussed as a valuable integration language.
The exact flow from SystemC descriptions through to silicon and driver implementations, however, is still the subject of intense debate. There is little doubt, though, that this area will evolve considerably in the next few years. The silicon budget to allow more, faster, and more specialized microprocessors is becoming available. This allows new problem areas to be tackled on a single chip, provided that software and hardware can pull together. The silicon budget to allow more, faster, and wider on-chip memories will further enhance the throughput of microprocessors, reducing whole-solution costs, provided that memory budgets can be agreed upon accurately by hardware and software teams. Increasing NREs for submicron devices will increase pressure for multi-function devices, spreading chip NREs over multiple application areas, provided that software can make the device sufficiently flexible.
None of these trends reduces the pressure to get hardware and software teams working closely together. This close interworking should be carried through the entire system-development cycle. Building a culture of close interworking between hardware and software will make the difference between success and failure in SoC developments.
Founded in 2000, Tenison EDA
is a spin-out from the University of Cambridge Computer Laboratory, dedicated to becoming the leading supplier of co-development tool technology. The company's first product, VTOC, is a Verilog-to-C translator optimized for the hardware/software co-design issues of SoC design. VTOC has been under development for more than five years, including two years of rigorous beta testing.
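To illustrate the general Verilog-to-C technique behind RTL-derived models, here is a hypothetical sketch: a small Verilog counter and a hand-written C equivalent of the kind a translator might produce. Both the Verilog module and the C names are invented for illustration; this does not represent VTOC's actual output format or API.

```c
#include <assert.h>
#include <stdint.h>

/* Verilog source (for reference):
 *
 *   module counter(input clk, input rst, output reg [7:0] q);
 *     always @(posedge clk)
 *       if (rst) q <= 8'd0;
 *       else     q <= q + 8'd1;
 *   endmodule
 *
 * The general translation technique: registered state becomes a C struct,
 * and each clocked always block becomes a function evaluated once per
 * clock edge, so software can drive the model cycle by cycle. */
typedef struct {
    uint8_t q;   /* the 8-bit registered output */
} counter_state;

/* Evaluate one posedge of clk. */
static void counter_posedge(counter_state *s, int rst) {
    s->q = rst ? 0 : (uint8_t)(s->q + 1);
}
```

Software can then clock such a model from an ordinary C test loop, alongside hand-written architectural models, which is what allows legacy and third-party RTL to join a C-based system model.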