As technology advances into deep-submicron designs, verification becomes an increasingly crucial component of the design flow.
With our designs in the millions of gates, the greatest verification challenges we have faced include verifying a complex system-on-chip (SoC) with a variety of external interfaces; hardware/software co-verification bottlenecks; and achieving complete code coverage of the design. Chips grow more complex with every generation, and that makes verification more challenging with every new part.
Another verification challenge is the need to simulate and test all possible corner cases of a design, which can be difficult and time-consuming. As our designs grow in size and complexity, and as we pack more functionality into smaller packages, the number of tests we must run just to maintain the quality level of our previous designs increases exponentially. With our previous verification tools, it would not have been possible to run the requisite number of tests and still meet our time-to-market deadline.
Our latest chip, the MoNET S1000, is the first in a family of products for the Voice-over-IP market that is designed to integrate voice and packet processing onto a single device. It represents a good example of today's verification challenges.
For instance, consider the amount of verification that had to be completed on this chip, with more than 3 million gates, more than 500 kbytes of RAM and multiple clock domains. Our team needed to verify not only the internal blocks of our design but also the interaction of the external interfaces. We verified traffic through several interfaces, among them PCI, ATM, Ethernet, the TDM bus and JTAG. In addition, the software components had to be confirmed: the operating system and the debugging utilities all needed to be verified to provide the confidence we needed to tape out.
Looking at options
When we began designing the MoNET S1000, we wanted to look at a more comprehensive verification solution. We examined various solutions, including traditional emulation boxes, rapid prototyping systems, and even semicustom development systems. Time constraints on the MoNET S1000 development cycle, however, made many of the options unworkable.
We chose Axis Systems Inc. (Sunnyvale, Calif.) because the ease of bring-up and acceleration capability allowed us to incorporate its system earlier into our design verification process. The decision was based on three key points: early system development with debugging utilities; ease of setup and short learning curve; and the ability to perform simulation, acceleration and emulation seamlessly on one platform. Also, the Axis environment was very familiar to us, which shortened the learning curve for our engineers.
We used Axis' Xcite system for acceleration and the Xtreme system for acceleration and emulation. These tools were used throughout the verification cycle. In fact, they became a crucial factor during the final verification phase, when we verified pre-layout and post-route netlists of the SoC interfacing with external hardware. In parallel, full regression suites were continuously run and verified on the acceleration systems.
The four phases of our verification process were broken down as follows:
Simulation with a third-party tool;
Acceleration with Axis Xcite running regression tests;
Emulation with Axis Xtreme running tests with external interfaces;
In-circuit emulation with Axis Xtreme running data flow tests and software programs.
In the first phase, we put together a verification environment in which our entire system was simulated. The design was in RTL, and the rest of the environment was a mixture of RTL, behavioral code and even high-level language. Running complete system-level simulation on the traditional tools yielded less than five cycles per second, which would not allow us to test the chip thoroughly. At this point acceleration became critical, so we integrated the Axis tools into our verification process.
During the second phase, the Axis system accelerated our simulation performance and improved our productivity while the design was still in RTL, including some behavioral blocks. In less than two weeks, the chip was up and running in acceleration mode.
During the regression runs, we could hook up several testbenches to one accelerated database, which saved substantial compile time since only the tests were changing, not the design. Each verification engineer was developing his own test and compiling it for the accelerator. Transition from the software simulator to the accelerator was totally transparent for the designers, who were still writing code as before, with no language restrictions, and still debugging with 100 percent visibility. We were able to perform many more simulations because we were running so much faster.
When verifying a complex SoC, testing the communication between different interfaces is critical.
In the third phase, we migrated to emulation and developed different software application tests to verify the SoC routing data between different target devices. Our team recoded the entire testbench in RTL so that it could fit entirely in the accelerator or, later in the process, the emulator.
Parallel to this work, we developed a hardware environment to match our testbench. Unlike traditional verification tools, which would have made us wait until all our hardware interfaces were ready, we brought up our interfaces one by one by combining simulation, acceleration and emulation modes.
The chip has a PCI interface, an Ethernet port, an ATM port, JTAG and EJTAG ports, a TDM (H.100) port and a UART port. Instead of verifying them all at once, we started by removing from our testbench the bus functional model that generated stimuli for the PCI interface and connecting a real PCI bus to the design in its place.
We started with PCI because Axis provides a board that interfaces the emulated design to a real, full-speed PCI bus in a PC. In this mode, all the other interfaces were still exercised by the testbench, but the PCI interface was exercised by real data on a real PCI bus. Mixing simulation and emulation on a single platform made the interface easy to bring up: we were still coupled to a Verilog simulator, so we could dump waveforms for the design at any time and debug our hardware interfaces as if we were debugging a simulation. One by one, the interfaces in the testbench were replaced by real hardware interfaces.
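The swap described above can be sketched in a Verilog testbench skeleton. This is an illustrative sketch only; the module, instance and signal names (`tb_top`, `monet_s1000`, `pci_bfm`, the `REAL_PCI` define) are our assumptions, not RealChip's actual code, and the DUT and BFM modules are assumed to exist elsewhere.

```verilog
// Hypothetical top-level testbench: a compile-time switch selects between
// the PCI bus functional model and the same nets driven by a real PCI bus
// through the emulator's interface board.
module tb_top;
  wire [31:0] pci_ad;                       // multiplexed PCI address/data
  wire        pci_frame_n, pci_irdy_n, pci_trdy_n;

  // Design under test: the SoC, with its PCI port exposed.
  monet_s1000 dut (
    .pci_ad     (pci_ad),
    .pci_frame_n(pci_frame_n),
    .pci_irdy_n (pci_irdy_n),
    .pci_trdy_n (pci_trdy_n)
    /* Ethernet, ATM, TDM, UART ports still driven by their BFMs */
  );

`ifdef REAL_PCI
  // In-circuit mode: the emulator's PCI board drives these nets with
  // traffic from a real PCI bus in the host PC; no model is instantiated.
`else
  // Simulation/acceleration mode: a bus functional model generates stimuli.
  pci_bfm master (
    .ad     (pci_ad),
    .frame_n(pci_frame_n),
    .irdy_n (pci_irdy_n),
    .trdy_n (pci_trdy_n)
  );
`endif
endmodule
```

Because only the BFM instantiation changes, the rest of the testbench and the compiled design database stay the same across both modes.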
Shortly after PCI, the Ethernet and ATM interfaces followed. The unique capability of having testbench blocks and real interfaces active at the same time on one comprehensive platform, using the same consistent database, gave RealChip very high confidence in our emulation setup for full in-circuit emulation for software development, the fourth phase of our verification strategy.
Once we validated all the interfaces, the software team began development of their application-level software. We knew that the chip could communicate with every single interface, so now we needed to make sure all the interfaces could work in parallel, an accomplishment requiring full in-circuit emulation (ICE).
In emulation mode, the ease of use of the debug tools in isolating problems was essential. Not only were we running fast; we could also isolate a problem, dump a waveform and quickly pinpoint its source.
The Axis system also enabled hardware/software co-verification, since Xtreme can run in acceleration or in emulation modes. As soon as one group was done with its test, the other group could come in and reconfigure the system as either an emulator or an accelerator.
The following describes the debugging strategy our engineers used with the Xtreme tool. First, the software team generated tests in the form of binary objects (application code) that were preloaded into the design's SDRAM. After a couple of initialization cycles running in simulation mode, we switched context from simulation to emulation to run the software at speed.
Xtreme has a "testbench call-back" capability: a design running ICE can go back to the Verilog simulator and execute tasks supported only in a traditional simulation world, such as displaying messages on the screen or reading and writing files on the hard disk. Therefore, we could implement a sophisticated tracing mechanism that periodically sent messages back to the designer's screen. This enabled us to trace the execution of the application program while running ICE.
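A tracing task of the kind invoked by such a call-back can be sketched with ordinary Verilog system tasks. The module and signal names here are hypothetical, and this shows only the simulator-side task, not the Xtreme mechanism that triggers it from the emulated design.

```verilog
// Illustrative monitor: when the call-back re-enters the simulator, a task
// like report_cpu_write runs, using $display/$fdisplay, system tasks that
// have no hardware equivalent and so must execute on the simulator side.
module trace_monitor(input wire        clk,
                     input wire        cpu_wr,
                     input wire [31:0] cpu_data);
  integer logfile;
  initial logfile = $fopen("trace.log");

  task report_cpu_write;
    begin
      // Emits time-stamped messages of the kind shown in the log excerpt.
      $display ("At time %t CPU writes %h to SDRAM", $time, cpu_data);
      $fdisplay(logfile, "At time %t CPU writes %h to SDRAM",
                $time, cpu_data);
    end
  endtask

  always @(posedge clk)
    if (cpu_wr) report_cpu_write;
endmodule
```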
One of the most important considerations for hardware and software designers is to correlate events that take place in hardware with the software instructions. In a traditional debug environment, the designers must define which events are important to monitor, create complex triggering conditions, program internal or external logic analyzers and wait until the condition is met. With the tracing mechanism, we could display the simulation time at every testbench call-back. Such a log file would be similar to the following:
At time 123444354 ns CPU1 writes FFF1111 to SDRAM
At time 123444555 ns CPU2 reads from location 2545AA
At time 9993939933 ns Int_FLAG CPU3 = 1
At time 43211243123412 ns Enet I/F received 50 valid Packets
At this point, having correlated hardware events with time, we could quickly isolate a region of interest whenever a trace message appeared incorrect. In a traditional system, the only way to do this would be to rerun the whole emulation/simulation and stop it before that time was reached. Axis' VCD-on-demand feature in Xtreme let our designers interrupt the current run, dump a VCD file around a particular time in the past and continue the simulation without rerunning it from time zero. VCD-on-demand enabled our designers to isolate windows of interest in a matter of seconds.
For example, we could start very long software tests overnight, come back the next morning and post-process the trace file; isolate areas of concerns; dump a VCD file around these times; and debug and fix the problem with only one emulation run in a familiar "simulation" environment. We didn't even have to sit at the emulator to check if the error condition happened or not. The familiar simulation environment was a positive factor for us, especially since we did not have to train a whole team of engineers to learn how to use the Axis tools.
Since RTL fixes were still happening during this phase, the software team would find a bug in the design and bring the hardware engineers in to show them the problem. Because hardware engineers work in the language of waveforms, the VCD-on-demand feature gave them visibility into the whole design for the entire emulation run.
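For comparison, the closest plain-Verilog analogue to such a window dump is to rerun with dumping enabled only between two chosen times, which is exactly the rerun that VCD-on-demand avoids. A minimal sketch, with hypothetical times and a hypothetical top-level name `tb_top`:

```verilog
// Conventional windowed VCD dump: the whole run must be repeated, with
// recording switched on only around the suspect trace-message time.
module vcd_window;
  // Times bracketing the suspect event, in simulation time units (assumed).
  parameter WINDOW_START = 123444000;
  parameter WINDOW_END   = 123445000;

  initial begin
    $dumpfile("debug_window.vcd");
    $dumpvars(0, tb_top);   // register the whole testbench hierarchy
    $dumpoff;               // keep dumping disabled initially
    #WINDOW_START $dumpon;  // start recording at the window of interest
    #(WINDOW_END - WINDOW_START) $dumpoff;  // stop after the window
  end
endmodule
```

VCD-on-demand delivered the same kind of windowed waveform file, but selected after the fact from the run already in progress.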
Incorporating the acceleration and emulation capabilities into our verification flow has greatly improved our simulation times. We can run many more simulations than ever would have been possible with the traditional simulation methods. We have seen a performance boost that is orders of magnitude better than our previous simulations.
In terms of raw performance, we achieved up to about 65,000 cycles per second. This major improvement in verification performance saved us two months or more of critical design time on the MoNET S1000 project. In addition, the emulation capability allowed us to target actual hardware, which was a big confidence builder before we taped out; no software simulator has the performance to interface directly with actual hardware.
We followed a very aggressive verification schedule, completing our ICE setup in about a month: after the initial two-week acceleration bring-up, two weeks were needed to bring up all the hardware interfaces and another two weeks to implement an efficient methodology and fix the last emulation hiccups. This schedule was very fast compared with traditional tools. Because we performed emulation, and debugging, at the RTL level, we were able to start system emulation as soon as the RTL was stable, integrating emulation very early in the flow.
In about two months of verification, we validated our RTL, pre-layout gate-level netlists and post-layout netlists.
The Axis system allowed our engineers to move easily from their simulation environment to the acceleration and emulation environments. Having all these tools in one seamless environment also allowed us to use them early in the design process, a significant asset for RealChip in reaching the market faster.
We emulated the complete SoC in Xtreme, interacting with external interfaces and running a data flow software program.
This operation verified major portions of our IP and gave our team the confidence to tape out the design. Gaining this high confidence during development, before tapeout, is important to our design process. In our industry, if a chip comes back DOA, it costs more than just the time and resources to develop and respin another chip; often it means product delay, which may lead to loss of market share.
The ability to perform hardware/software co-verification enabled us to shorten our verification process, because our software engineers could start their verification process earlier instead of waiting for the prototype chip.