SoC verification overwhelms conventional logic simulation, yet the three widely used hardware alternatives for accelerating it bring reliability issues and uncomfortable trade-offs. The most pressing questions: who has access to the acceleration hardware and when, and how stable is the process?
Currently, the three hardware options generally used are FPGA prototyping, accelerated simulation using verification IP, and in-circuit emulation (ICE). These suit some scenarios but have significant downsides for teams working on a continually evolving multi-processor, multi-protocol, software-heavy SoC.
FPGA prototyping is fine for software spins on unmodified, pre-existing hardware, but it is not suited to hardware that is still under significant development. It offers no concurrency in the design process: FPGA boards aren't usable by the software team until the hardware is finished, and every time a problem surfaces, the recompile can take days rather than hours. Splitting a multimillion-gate SoC across multiple FPGAs is also asking for trouble. So even though FPGA prototyping looks like a cheap option, it is a long and risky process for an SoC with hardware debug still in progress, which is why it is now a static market while the emulation market is growing at 20% a year.
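To see why multi-FPGA partitioning is so painful, consider a back-of-envelope sketch in Python, with invented block names, gate counts, and capacities: a first-fit-decreasing bin-pack of SoC blocks into FPGAs, followed by a count of the nets cut between them. Every cut net consumes scarce FPGA I/O pins or must be time-multiplexed, which is exactly where multi-FPGA prototypes lose speed and gain fragility.

```python
# Illustrative sketch: greedily bin-pack SoC blocks into FPGAs by gate count,
# then count the inter-FPGA nets each cut creates. All names and figures are
# invented for illustration.
FPGA_CAPACITY = 5_000_000  # gates per FPGA (illustrative figure)

# (block, gate_count, connected_blocks)
blocks = [
    ("cpu_cluster", 4_200_000, {"noc", "ddr_ctrl"}),
    ("gpu",         3_800_000, {"noc"}),
    ("noc",         1_500_000, {"cpu_cluster", "gpu", "ddr_ctrl", "usb"}),
    ("ddr_ctrl",    1_200_000, {"cpu_cluster", "noc"}),
    ("usb",           600_000, {"noc"}),
]

partitions, assignment = [], {}
for name, gates, _ in sorted(blocks, key=lambda b: -b[1]):
    for i, part in enumerate(partitions):
        if part["gates"] + gates <= FPGA_CAPACITY:
            part["gates"] += gates
            assignment[name] = i
            break
    else:  # no existing FPGA has room: add another device
        partitions.append({"gates": gates})
        assignment[name] = len(partitions) - 1

# Each cut net appears once from each end, so halve the total.
cut_nets = sum(
    1
    for name, _, neighbours in blocks
    for n in neighbours
    if assignment[name] != assignment[n]
) // 2

print(f"FPGAs needed: {len(partitions)}, inter-FPGA connections: {cut_nets}")
```

Even in this toy case, every connection in the design ends up crossing an FPGA boundary; on a real SoC, those crossings translate into pin-multiplexing logic, lower clock rates, and another source of bring-up bugs.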
Accelerated simulation, with deep test sequences, code coverage, assertion-based verification, error handling, protocol checking, and so on, is suitable for hardware developers who need to verify that a newly developed ASIC or piece of fundamental IP is free of bugs. But to verify an SoC in which multiple existing hardware blocks must work together and be driven by software, they need a verification method accessible to the whole software team. Embedded processors force verification to become a software-inclusive process.
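As a flavor of the assertion-based checking that accelerated simulation leans on, here is a minimal protocol check written as a cocotb test in Python. The signal names (clk, valid, ready, data) and the valid/ready handshake rule are illustrative assumptions, not drawn from any particular design or methodology.

```python
# Sketch of an assertion-based protocol check, cocotb style. Signal names
# and the handshake rule are assumed for illustration.
import cocotb
from cocotb.triggers import RisingEdge, ReadOnly


@cocotb.test()
async def valid_ready_stability(dut):
    """Once valid is asserted without ready, valid must stay high and data
    must hold steady until the cycle in which ready accepts the transfer."""
    pending = None  # data value offered but not yet accepted, if any
    for _ in range(10_000):  # bounded observation window for this sketch
        await RisingEdge(dut.clk)
        await ReadOnly()  # sample settled post-edge values
        if pending is not None:
            assert dut.valid.value == 1, "valid dropped before ready"
            assert dut.data.value == pending, "data changed before ready"
        if dut.valid.value == 1 and dut.ready.value == 0:
            pending = dut.data.value  # transfer stalled: must hold next cycle
        else:
            pending = None  # accepted or idle: nothing to hold
```

Checks like this run well on an accelerated simulator, but they presuppose a hardware-centric flow; they do nothing for the software engineer who needs to boot an OS against the design.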
Until 2012, ICE was the only option for concurrent software-hardware verification of hardware that changes regularly before prototype silicon is available. ICE suits a relatively small project team based at a single site, with total personal control over the lab, working on a design that does not involve many different hardware target interfaces. But it has serious drawbacks in flexibility, reliability, and ROI, because models of the target peripheral or host must run on external hardware cabled to the emulator in the lab.
Cables and external hardware devices introduce complexity and potential sources of unreliability, and they also restrict emulation access; lab setup becomes the bottleneck. The only way for ICE to offer emulation access to multiple remote teams of hardware and software engineers on multiple projects simultaneously would be to replicate the same project setup on multiple emulators, which would be expensive, power hungry, and complex to manage, as the rough sizing below suggests.
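A back-of-envelope calculation in Python, with all figures invented for illustration, makes the point: because one cabled setup serves one team at a time, the number of duplicated setups scales directly with aggregate demand.

```python
# Illustrative capacity math: hours/day each team wants vs. what one
# cabled ICE setup can serve. All figures are invented.
teams = {"sw_team_us": 6, "sw_team_in": 8, "hw_team_eu": 5}  # hours/day
ice_hours_per_day = 10  # usable hours once setup/cabling overhead is paid

demand = sum(teams.values())
ice_setups_needed = -(-demand // ice_hours_per_day)  # ceiling division
print(f"Demand: {demand} h/day -> {ice_setups_needed} duplicated ICE setups")
```

Every additional setup duplicates not just the emulator but the target boards, cables, and the lab staff to keep them working.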
Fortunately, there is now a fourth option available that uses a "virtual lab" emulation environment. Virtual labs are better suited to the large-team, multi-disciplinary, rolling-integration challenges common to complex SoC development.
This new approach came about when teams working on SoC designs began asking whether the emulation lab could be made virtual, so that they could give flexible emulator access to all of their software engineers without lab cable spaghetti, and at a lower cost than replicating ICE setups like a RAID array. It was clear that the solution would need more of the verification process to occur in software.
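To make that idea concrete, here is a conceptual Python sketch of one such software-side model: a virtual UART host that exchanges transactions with the emulated design over a network link instead of a physical cable. The class, framing, and endpoint names are invented for illustration; real virtual-lab products define their own transactor APIs.

```python
# Conceptual sketch of the "virtual lab" idea: the target host is a software
# model talking to a transactor inside the emulated design over TCP, rather
# than a physical board on a lab cable. All names here are invented.
import socket


class VirtualUartHost:
    """Software stand-in for a cabled UART host: exchanges byte-level
    transactions with a transactor in the emulated design."""

    def __init__(self, emulator_host: str, port: int):
        self.link = socket.create_connection((emulator_host, port))

    def send(self, payload: bytes) -> None:
        # Length-prefixed framing stands in for whatever the real
        # transactor protocol defines.
        self.link.sendall(len(payload).to_bytes(2, "big") + payload)

    def receive(self) -> bytes:
        header = self.link.recv(2)
        return self.link.recv(int.from_bytes(header, "big"))


# Any engineer at any site can point this at a datacenter emulator,
# with no lab cables and no physical host board to schedule, e.g.:
# host = VirtualUartHost("emu-farm.example.com", 5000)
# host.send(b"AT\r\n")
```

Because the peripheral model is just software, it can be versioned, scripted, and replicated per user, which is exactly what a cabled target board cannot do.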
Transferring emulation from the lab to the datacenter delivers more productivity, flexibility, and reliability.