Lewis Sternberg thinks about why FPGA developers are slow to adopt ASIC verification practices and what may be changing...
Back in the day -- way back, when the big three EDA vendors were DMV (Daisy, Mentor, Valid) -- it seems that, in that halcyon time, managers bought product based on sales engineers' promise of a better tomorrow. (I was new to the industry, so I'll leave it to those even longer in the tooth to correct the record.) Editor's note: no correction necessary.
Needless to say, that is not the world we live in now. For quite some time, it seems, the only thing that persuades companies to buy tools is pain -- typically, the pain of the previous design cycle. In the high-stakes world of ASICs, where a typical project runs $5M - $50M, fear of that pain can sometimes motivate a purchase. In the world of FPGAs, however, only pain already felt seems to do it.
Being a DV guy, I don't get to see much FPGA action. While an ASIC re-spin can cost a company millions in expenses and even more in lost market share, re-burning an FPGA is no big deal.
Traditionally, the pain differential between ASICs and FPGAs has been considerable. Not that FPGAs are "a walk in a sunny meadow": getting performance out of them, fitting the features into the allotted package, and more can make life on the FPGA frontier challenging.
Add something else to those challenges:
In the last few years, FPGAs have crossed a threshold. Consider two curves: one linear -- the number of pins on a die, which grows with the die's perimeter; the other quadratic -- the number of devices in the die, which grows with its area. For a small enough device it is possible to see through the pins into the internal design, and lab debug is do-able. As the FPGA gets larger, lab debug becomes untenable -- you just can't see enough of the design through the pins.
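To make that divergence concrete, here is a minimal Python sketch. The die sizes, pin pitch, and device density below are illustrative assumptions, not figures for any real part:

```python
# Illustrative only: pin count scales with die perimeter (linear in
# side length), device count scales with die area (quadratic).
# The constants are made-up scaling factors, not real FPGA data.

def observability(side_mm, pins_per_mm=20, devices_per_mm2=50_000):
    """Return (pins, devices, pins-per-device) for a square die."""
    pins = 4 * side_mm * pins_per_mm          # perimeter * pin pitch
    devices = side_mm ** 2 * devices_per_mm2  # area * device density
    return pins, devices, pins / devices

for side in (5, 10, 20, 40):
    pins, devices, ratio = observability(side)
    print(f"{side:>2} mm die: {pins:>5.0f} pins, "
          f"{devices:>12,.0f} devices, {ratio:.2e} pins/device")
```

Doubling the die's edge doubles the pins but quadruples the devices, so the pins-per-device ratio halves each time -- which is why "seeing through the pins" stops working at scale.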
... and there's the wall of pain. The pain of a slipping schedule with no idea when it will come back into line. The pain of late nights (and weekends) away from loved ones. Pain in the you know where.
Now, as a DV contractor, the solution to this pain is obvious to me -- follow the lead of ASICs and create an ASIC-like verification environment. One advantage of coverage-driven verification is that it's a huge boon to scheduling. If the testbench is well designed, then you can be fairly sure there aren't bugs in the places that were covered -- you know what the coverage is and how it's proceeding. It's a closed-loop system -- in contrast to open-loop systems such as 1) we'll sort it out in the lab and 2) we'll count the bugs. (The latter goes open-loop once incoming bugs exceed the group's bug-fixing throughput.)
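The "closed loop" can be sketched in a few lines. This is a toy Python model of coverage closure, not any vendor's coverage API; the bin names and test results are invented for illustration:

```python
# Toy model of coverage-driven closure: declare the bins you must hit,
# record what each regression run exercises, and report measurable
# progress. Bin names and sampled results are invented for illustration.

class CoverageModel:
    def __init__(self, bins):
        self.bins = set(bins)   # everything we promised to cover
        self.hit = set()        # what regression has exercised so far

    def sample(self, covered_bins):
        """Record the bins exercised by one test run."""
        self.hit |= self.bins & set(covered_bins)

    def progress(self):
        """Fraction of planned bins covered -- the schedule signal."""
        return len(self.hit) / len(self.bins)

    def holes(self):
        """Bins still untested: where bugs could still be hiding."""
        return sorted(self.bins - self.hit)

plan = CoverageModel(["reset", "fifo_full", "fifo_empty", "back_pressure"])
plan.sample(["reset", "fifo_empty"])   # results of test #1
plan.sample(["fifo_full", "reset"])    # results of test #2
print(f"{plan.progress():.0%} covered, holes: {plan.holes()}")
```

Because progress is measured against an explicit plan, the scheduling question "how close are we?" has a numeric answer -- that feedback is exactly what "we'll sort it out in the lab" lacks.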
But then, I'm just one guy with a very narrow window into the world of very large FPGA designs.
One who, fortunately, has not been up against that particular wall of pain. Have you been through that battle? How far did you get? What have you learned?
Find out more about Lewis from his LinkedIn page.
If you found this article to be of interest, visit EDA Designline
where you will find the latest and greatest design, technology, product, and news articles with regard to all aspects of Electronic Design Automation (EDA).
Also, you can obtain a highlights update delivered directly to your inbox by signing up for the EDA Designline weekly newsletter -- just request it using the Manage Newsletters tab (if you aren't already a member you'll be asked to register, but it's free and painless so don't let that stop you [grin]).