With the IP-SoC 2010 event next week in Grenoble, FPGAs are finally getting some real exposure in the IP world. But Dave Orecchio of GateRocket argues that FPGA designers need to be aware of the unique nuances of using IP in these programmable platforms, and must put in place tools and methodologies to overcome the obstacles to successful IP use.
One of the largest confabs in the still-maturing semiconductor IP industry happens next week. The IP-SoC 2010 event in Grenoble, France, has been a long-standing meeting point (19 years and counting) for those in and around the IP business. The strength of the technical sessions and the level of attendance are a tribute to the folks at Design & Reuse who started the event many years ago. While the IP industry has had its fits and starts, D&R has persevered and, with the backing of EE Times/UBM, appears to have validated the need for the type of service they offer (in addition to putting on the IP-SoC event). And, if imitation is the sincerest form of flattery, then Cadence, through its Chip Estimate offering, has reinforced that validation of the market need for IP infrastructure such as listing and evaluation services.
It’s interesting to note that this year’s IP-SoC program includes a few FPGA-specific topics. This is a relatively new phenomenon, as the use of IP expands beyond its traditional stronghold in hardwired ASICs to the ever more capable world of FPGAs. Indeed, at the leading edge of FPGA design, it’s almost unthinkable not to include some kind of pre-designed block or core. These days, it is conventional wisdom that using IP blocks is a prudent approach to quickly developing complex, highly integrated systems-on-chip, whether they are ASICs (in which case, FPGAs still often come into play as prototyping platforms) or FPGAs.
Thanks to the progress of companies like Xilinx and Altera, FPGAs today have essentially become SoC platforms. Design teams can easily – in terms of circuit density – combine a processor, peripheral functions, and memory on an FPGA. Embedded design teams have a plethora of choices when it comes to IP and processor cores for SoC designs based on state-of-the-art FPGAs, from relatively complex blocks such as the soft processor cores, memory controllers, and DMA controllers to a range of I/O functions starting with simple UARTs and extending to complex blocks such as PCI controllers.
It all sounds great on paper... until you try to verify it
The concept of quickly plugging in an IP block for a non-differentiating piece of functionality sounds great – engineers can concentrate on what will really set the design apart. But ‘plug-and-play’ rarely works as easily as it sounds.
The challenge comes in the debug and verification process, especially as a team mixes processor cores, other purchased IP, and its own circuit blocks.
And the problems are intensified in FPGA design. That’s because the third-party IP models used for software simulation are often different from the corresponding models used by the FPGA's place-and-route software. The reason is that an IP vendor often supplies two different models: one at a high level of abstraction and one at the gate level. The problem is that these two models are different. Often, the high-level simulation model contains behavioral (non-synthesizable) constructs. These constructs make the software simulations run faster, but they prevent the models from being synthesized. Even worse, there are often subtle differences between the behavioral and gate-level representations, and these differences only manifest themselves when the FPGA design is deployed in its target system. Because you are not verifying what you are building, you find yourself in the lab with a non-functioning design. The result is what can seem like an endless loop through synthesis and place-and-route to find errors you were certain didn’t exist after simulation.
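To make the mismatch concrete, here is a toy sketch in plain Python (not real RTL, and not any particular vendor's models): a hypothetical "behavioral" accumulator that saturates at 255, and a "gate-level" version that silently wraps at 8 bits. Both agree on easy stimulus, so simulation looks clean; the divergence only shows up once the accumulator overflows, which is the kind of subtle, late-appearing difference described above.

```python
import random

def behavioral_model(stimulus):
    """Stand-in for a vendor's behavioral simulation model:
    an accumulator that saturates at 255 (hypothetical behavior)."""
    acc, trace = 0, []
    for value in stimulus:
        acc = min(acc + value, 255)
        trace.append(acc)
    return trace

def gate_level_model(stimulus):
    """Stand-in for the synthesized gate-level netlist: an 8-bit
    accumulator that silently wraps instead of saturating."""
    acc, trace = 0, []
    for value in stimulus:
        acc = (acc + value) & 0xFF  # 8-bit wraparound
        trace.append(acc)
    return trace

def find_first_mismatch(stimulus):
    """Compare the two models cycle by cycle; report where they diverge."""
    b = behavioral_model(stimulus)
    g = gate_level_model(stimulus)
    for cycle, (bv, gv) in enumerate(zip(b, g)):
        if bv != gv:
            return cycle, bv, gv
    return None  # models agree on this stimulus

# Small stimulus never overflows, so the models appear equivalent...
assert find_first_mismatch([1, 2, 3]) is None
# ...but once the sum crosses 255, they diverge.
print(find_first_mismatch([100, 100, 100]))  # -> (2, 255, 44)
```

The point of the sketch is that directed or shallow simulation can pass on both models while a real divergence lies waiting in corner-case stimulus, which is exactly why the bug surfaces only in the lab.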
The solution sounds simple: tweak the RTL or modify synthesis parameters – pragmas – to map the old block of IP to the target FPGA in a way that works. The problem is knowing what to tweak. If all you have to go by is RTL simulation results, the "make it work" process degenerates into guesswork: long synthesis and place-and-route cycles followed by lengthy bring-up tests in the lab.
What’s needed is a way for the simulation of the IP block to show how the IP or legacy design actually behaves in the target FPGA, isolating the issues that indicate how to direct the synthesis process or tweak the design.
An approach gaining in popularity takes a page from the ASIC verification playbook and uses an emulator-like method. FPGA prototype boards are not the answer, because the design engineer has to do significant work to get the design running on one. Such boards have significant limitations: the prototype hardware is the constrained item, and the design (or a portion of the design) has to be modified to conform to it. What is required is a solution where the design is the constrained item and the hardware conforms to it. So, instead of having to verify the IP in a foreign environment, this approach enables verification of the IP – in the hardware – in the context of how the user plans to use it. It’s like an emulator approach, but with the native FPGA device the user plans to use for the project.
Our version is called Device Native verification, and it bridges the gap between the RTL/abstract-level software simulation domain and the physical FPGA. With this methodology, engineers place the design of any "known good" blocks (existing internally-developed IP and trusted third-party IP cores, for example) into the same FPGA that is being targeted for their design project. It allows these blocks to be verified in conjunction with the rest of the design running in a software simulator. The benefits are dramatic: faster verification; results that match silicon 100%; and the ability to move blocks back and forth between RTL and FPGA without long compilations or scheduled lab time. This last benefit holds the solution to effective IP verification and design re-use for FPGAs: verify the IP in the FPGA, in the simulator, before it negatively impacts the project schedule.
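The article does not spell out how the simulator and the in-device blocks interact, but the general shape of such a flow can be sketched as a lock-step co-simulation loop. The sketch below is purely illustrative Python under assumed interfaces (it is not GateRocket's API): the user's in-development logic runs in the "software simulator", the "known good" IP is represented by a class that in a real flow would execute inside the target FPGA (here it is mocked in software), and each cycle the harness passes signals between the two.

```python
class SoftwareSimulatedBlock:
    """The user's own RTL under development, modeled in the software
    simulator. Placeholder logic: doubles its input each cycle."""
    def evaluate(self, value):
        return value * 2

class HardwareResidentBlock:
    """Stand-in for a 'known good' IP block executing inside the target
    FPGA device. In a real flow this call would cross into the hardware;
    here the block is mocked in software for illustration."""
    def evaluate(self, value):
        return value + 1

def cosimulate(stimulus):
    """Lock-step co-simulation loop: each cycle, evaluate the
    software-simulated portion, hand its output to the in-device IP
    block, and collect the combined result."""
    soft = SoftwareSimulatedBlock()
    hard = HardwareResidentBlock()
    results = []
    for value in stimulus:
        intermediate = soft.evaluate(value)      # software-simulated RTL
        results.append(hard.evaluate(intermediate))  # in-device IP block
    return results

print(cosimulate([1, 2, 3]))  # -> [3, 5, 7]
```

Because the IP block's behavior comes from the same silicon that will ship, any divergence between the simulation model and the device shows up here, during simulation, rather than in the lab.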
We’re as bullish as anyone on the use of IP in FPGAs and their continued emergence as viable SoC platforms. And it’s great to see FPGAs finally getting that kind of exposure at industry conferences on IP. But FPGA designers need to be aware of the unique nuances of using IP in these programmable platforms, and put in place tools and methodologies to overcome the IP use obstacles to success.
About the author:
Dave Orecchio is President and CEO of GateRocket, Inc. He has 24 years of semiconductor industry experience, with a focus on ASIC and FPGA design and development. Prior to GateRocket, Dave held executive positions at LTX, Viewlogic Systems, Synopsys, Innoveda, Parametric Technologies and DAFCA.