Given the critical importance of software as a differentiator in SoC-based products, doesn't the software team deserve the best tools available?
What's this article about?
This article scrutinizes the long-foretold ability of virtual platforms to be linked to emulation hardware in order to co-verify the software and hardware components of an SoC. We explore how this has evolved from a neat idea into Hybrid Emulation -- a practical solution for today's SoC hardware and software teams.
Since the advent of transaction-level modeling, it has been possible to create a virtual platform of a CPU sub-system, which trades off accuracy for speed in order to provide an early target on which to test software. Traditionally, the make-or-break factor for such virtual platforms was the availability of SystemC models for the various components; it would simply take too long to generate a trustworthy model for, say, a new co-processor, so the benefit of early software simulation was lost. The growth of SystemC model libraries for popular functions -- such as ARM's Fast Models -- has helped to fill those gaps, but what of the new functions unique to the new SoC? One proven solution is to implement such functions in an FPGA-based emulation platform such as Aldec's HES, and then link that into the virtual model via SCE-MI transaction-level interfaces.
This article will explore seven different use modes for such hybrid emulation platforms. As part of this, we'll use examples from Aldec Inc. to explain how the hybrid combination of virtual platform and FPGA hardware offers a "best-of-both-worlds" approach. We discuss certain practical constraints and also how success is largely a matter of time -- the timing accuracy required, the time taken to create the hybrid, and the time saved by doing so.
Newsflash: SoCs need working software
An old associate at a processor IP company (no, not ARM) once told me that if the only thing SoC teams did was to hook up the same kind of IP blocks around the same kind of bus as everybody else, then how would anybody have a differentiated product? It's a good question: how can we add value to our SoC if we all have pretty much the same platform? To paraphrase Bill Clinton, "It's the software, stupid!"
We're probably all tired of being told that software is the major headache in SoC development and verification. Yes, we know, but we still have to get it working because the SoC isn't much use without it. Given that software is already a major differentiator between successful SoC-based products, shouldn't project schedules and tool chains be more focused on the success of the software team?
As a long-time proponent of FPGA-based prototyping, I have seen for years that all that prototyping work (by hardware guys) has mostly been for the benefit of the software team. The prototype is replicated in order to create multiple physical targets upon which software, and its integration with the hardware, can be tested. While they do that, however, the software team will also stress the hardware in new ways, flushing out some of those remaining hardware bugs not previously found by verification. Everybody wins, but given the importance of software, doesn't that team deserve even more? How about giving them a platform that they can use on their own, without hardware, and much earlier in the project?
What do you mean, "software"?
Designers and commentators often use the term "software" freely, but without specifically defining what they mean (I even did it myself in my introduction, sorry). So, let's consider what we mean when we say "software."
An SoC, like most other embedded processing systems, will have different kinds of software, operating and interacting at different levels of a software stack and probably across multiple processors. A simplified picture of a typical software stack is shown in Figure 1. The user space, including applications, appears at the top of the stack and rests upon lower layers, which are increasingly concerned with the OS kernel and the hardware as we descend the stack.
Figure 1: The relative importance of speed and performance at different levels of a software stack.
Choose any level in the stack. If we wish to develop software at that level, then we will need a representation of all the levels below upon which to run it. Ideally, that might be the real lower level software running in real system hardware, but when that is not available then a model must be created instead. How accurate does that model need to be? Well, it should have just enough accuracy at a given level in order to maintain the deception that the software at that level is running in the real system; any accuracy greater than that will just be a waste of simulation time.
Software at the lowest levels of the stack is aware of the SoC hardware and masks any hardware dependencies from the higher-level software, thereby allowing greater portability and re-use of programs in the user space. At Aldec, the software at these lower levels is called "Hardware-dependent Software," which is a pretty self-explanatory name. A model on which we might test aspects of the hardware-dependent software, such as a BSP or BIOS, will need high accuracy and may include a cycle-accurate model of the relevant hardware itself. This may be in an RTL simulator, an emulator, or an FPGA-based prototype, depending on how fast we need to run.
Software at the highest levels of the stack, such as apps and other user-space programs, needs the least accuracy and can therefore run at the highest speeds. Welcome to the world of virtual platforms.