One of the largest confabs in the still-maturing semiconductor IP industry happens next week. The IP-SoC 2010 event in Grenoble, France, has been a long-standing meeting point (19 years and counting) for those in and around the IP business. The strength of the technical sessions and the level of attendance are a tribute to the folks at Design & Reuse who started the event many years ago. While the IP industry has had its fits and starts, D&R has persevered and, with the backing of EE Times/UBM, appears to have validated the need for the type of service they offer (in addition to putting on the IP-SoC event). And, if imitation is the sincerest form of flattery, then Cadence, through its Chip Estimate offering, has reinforced that validation of the market need for IP infrastructure such as listing and evaluation services.
It’s interesting to note that this year’s IP-SoC program includes a few FPGA-specific topics. This is a relatively new phenomenon as the use of IP expands beyond its traditional stronghold in hardwired ASICs to the ever more capable world of FPGAs. Indeed, at the leading edge of FPGA design, it’s almost unthinkable not to include some kind of pre-designed block or core. These days, it is conventional wisdom that using IP blocks is a prudent approach to quickly developing complex, highly integrated systems-on-chip, whether they are ASICs (in which case, FPGAs still often come into play as prototyping platforms) or FPGAs.
Thanks to the progress of companies like Xilinx and Altera, FPGAs today have essentially become SoC platforms. Design teams can easily – in terms of circuit density – combine a processor, peripheral functions, and memory on an FPGA. Embedded design teams have a plethora of choices when it comes to IP and processor cores for SoC designs based on state-of-the-art FPGAs, from relatively complex blocks such as soft processor cores, memory controllers, and DMA controllers to a range of I/O functions starting with simple UARTs and extending to complex blocks such as PCI controllers.
It all sounds great on paper... until you try to verify it

The concept of quickly plugging in an IP block for a non-differentiating piece of functionality sounds great: engineers can concentrate on what will really set the design apart. But ‘plug-and-play’ rarely works as easily as it sounds.
The challenge comes in the debug and verification process, especially as a team mixes processor cores, other purchased IP, and its own circuit blocks.
And the problems are intensified in FPGA design. That’s because the third-party IP models used for software simulation are often different from the corresponding models used by the FPGA's place-and-route software. An IP vendor often supplies two different models: one at a high level of abstraction and one at the gate level. Often, the high-level simulation model contains behavioral (non-synthesizable) constructs. These constructs make the software simulations run faster, but they prevent the model from being synthesized. Even worse, there are often subtle differences between the behavioral and gate-level representations, and these differences only manifest themselves when the FPGA design is deployed in its target system. Because you are not verifying what you are building, you find yourself in the lab with a non-functioning design. The result is what can seem like an endless loop through synthesis and place-and-route to find errors you were certain didn’t exist after simulation.
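To make the model mismatch concrete, here is a minimal, purely illustrative Verilog sketch (not taken from any particular vendor's deliverables). The intra-assignment `#5` delay is a classic behavioral construct: it speeds up and conveniently models latency in simulation, but synthesis tools either reject it or silently drop it, so the gate-level netlist can behave differently from what simulation showed.

```verilog
// Hypothetical behavioral simulation model of a registered memory read.
module mem_read_model (
  input  wire        clk,
  input  wire [7:0]  addr,
  output reg  [15:0] dout
);
  reg [15:0] mem [0:255];

  always @(posedge clk)
    dout <= #5 mem[addr];  // behavioral delay: fine in simulation,
                           // ignored or rejected by synthesis
endmodule
```

A synthesizable version of the same block would omit the `#5` delay entirely and rely on static timing analysis to verify the real output latency; if downstream logic was (even accidentally) written to depend on that simulated delay, the discrepancy only shows up in hardware.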
The solution sounds simple: tweak the RTL or modify synthesis parameters (pragmas) to map the old block of IP to the target FPGA in a way that works. The problem is knowing what to tweak. If all you have to go by is RTL simulation results, the “make it work” process degenerates into guesswork, long synthesis and place-and-route cycles, and lengthy bring-up tests in the lab.
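For readers less familiar with synthesis pragmas, the snippet below sketches the kind of tweak in question. These are Synplify-style Verilog-2001 attributes chosen purely for illustration; the attribute names and exact behavior vary by synthesis tool, so treat this as an assumption-laden example rather than a universal syntax.

```verilog
// Illustrative synthesis directives (Synplify-style attribute names).

// Keep this net through optimization so it maps (and probes) as written:
(* syn_keep = "true" *) wire handshake_req;

// Force the memory into block RAM rather than distributed LUT RAM,
// which can change both timing and resource usage of a reused block:
(* syn_ramstyle = "block_ram" *) reg [15:0] fifo_mem [0:511];
```

The difficulty the article describes is that, without visibility into how the block actually behaves in the device, choosing which of these knobs to turn is largely trial and error.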
What’s needed is a way for the simulation of the IP block to show how the IP or legacy design actually behaves in the target FPGA, isolating the issues that indicate how to direct the synthesis process or tweak the design.
An approach gaining in popularity takes a page from the ASIC verification playbook and uses an emulator-like method. FPGA prototype boards are not the answer, because the design engineer has to do significant work to get a design running on them: the prototype hardware is the constrained item, and the design (or a portion of it) has to be modified to conform to it. What is required is a solution where the design is the constrained item and the hardware conforms to it. So, instead of having to verify the IP in a foreign environment, this approach enables verification of the IP, in the hardware, in the context of how the user plans to use it. It’s like an emulator approach, but with the native FPGA device the user plans to use for the project.
Our version is called Device Native verification, and it bridges the gap between the RTL/abstract-level software simulation domain and the physical FPGA. With this methodology, engineers place any "known good" blocks (existing internally developed IP and trusted third-party IP cores, for example) into the same FPGA that is being targeted for the design project, allowing those blocks to be verified in conjunction with the rest of the design running in a software simulator. The benefits are dramatic: faster verification, results that match silicon 100%, and the ability to move blocks back and forth between RTL and FPGA without long compilations or scheduled lab time. This last benefit holds the key to effective IP verification and design re-use for FPGAs: verify the IP in the FPGA, in the simulator, before it negatively impacts the project schedule.
We’re as bullish as anyone on the use of IP in FPGAs and their continued emergence as viable SoC platforms. And it’s great to see FPGAs finally getting that kind of exposure at industry conferences on IP. But FPGA designers need to be aware of the unique nuances of using IP in these programmable platforms, and put tools and methodologies in place to overcome the obstacles to successful IP use.
About the author: Dave Orecchio is President and CEO of GateRocket, Inc. He has 24 years of semiconductor industry experience, with a focus on ASIC and FPGA design and development. Prior to GateRocket, Dave held executive positions at LTX, Viewlogic Systems, Synopsys, Innoveda, Parametric Technologies and DAFCA.
Sounds like a great approach to a difficult problem. I worked in the custom ASIC world for 18+ years and would agree that the use of IP is essential to FPGA growth (other parts are the performance, gate-count, and feature improvements). There has always been a difference between the simulator model and real hardware (be it a hard macro or a soft macro) that only showed in the final devices, whether custom ASICs or FPGAs. The big difference was that with FPGAs you had the option to re-code/compile or reroute the device. With ASIC NREs in the $500K to $3 million range, simulation, regression testing, and emulation were essential and in some cases worked well. There is nothing like tested and proven IP implementations to ease use and increase the probability of success.
Hi Sharps_eng, thank you for posting your comment and the insight you offer. I like your analogy to the impact that microprocessor emulators had on the market; we believe that our solution could do the same or more. Now to answer your questions:
1) Price is approximately the same as the software simulator we integrate with (Cadence, Mentor and Synopsys), but it varies based on the FPGA we place in the RocketDrive. Our lowest-price configuration is $25K.
2) No one has developed a solution like this one. Others have connected hardware to simulators, but all of them require you to change your source code to use the hardware. Our CTO and founder Chris Schalick innovated this patent-pending solution so that it fits seamlessly into your simulation and regression environments, an immovable requirement as far as I see it. Our customers, like Qualcomm, routinely place designs in the RocketDrive untouched by human hands.
3) The simulators have very good integration hooks; we have partnerships with Cadence, Mentor and Synopsys and use their tools to test our integration. The integration is so tight that users control our product completely from the simulator, their native working environment. So I would not call it a hack, given the support from the vendors and our rigorous testing of it.
4) At the beginning of the year, we added a "Soft Patch" capability to RocketVision which enables the user to move blocks in and out of the RocketDrive without rebuilding the FPGA. This innovation cuts the long loops of debug when trying to use FPGAs in-system or for ASIC prototypes. It is a very cool and important capability.
5) The interface is our own proprietary high-speed interface, which is important for any direct integration with a simulator.
Again, thank you for your comment and if you are located in the US, have a happy Thanksgiving.
Gaterocket's website explains their product very well. Next questions: 'How much?' and 'Who else does this kind of stuff?'.
They are addressing the inefficient-design-process issues that SoC teams have sleepwalked into, and if it works as neatly as they say, a GateRocket would become an essential tool. Reminds me of the impact made by the first microprocessor emulators to break onto the scene.
One other question: I didn't know the RTL simulation software had plug-in hooks that would allow RocketView to map external hardware i/o stimuli into the purely-software simulation application. Anyway, it's a neat hack, as long as it remains supported by the tool vendors. Lastly, what computer I/O does it use? SATA?
Thanks for this Dave -- I'm surprised it's taken so long to get FPGA-specific IP topics in the IP-SoC conference ... but it doesn't surprise me that they are there now, and I think that the floodgates will start to open. I also think that there's a good market for small start-ups to develop IP for FPGAs, but as you say the trick is to get everything to work together. Max