Part two of the two-part essay co-authored by Paul McLellan and Jim Hogan about evolving design methodology and how it will change the industry.
Editor's note: This is the second of a two-part opinion piece authored by EDA luminaries Jim Hogan and Paul McLellan. The first installment was posted Nov. 24.
Unlike previous changes to the abstraction level of design, the block level not only goes down into the implementation flow, but also goes up into the software development flow. Software and chip design must be verified against each other: since the purpose of the chip is to run the software load, the chip cannot sensibly be optimized in isolation from that software.
There is, today, no fully automated flow from the block level all the way into implementation. A typical chip involves blocks of synthesizable IP, usually in Verilog, VHDL or SystemVerilog, along with the scripts needed to create efficient implementations. Other blocks are designed at a higher level, or perhaps pulled out of the software for more efficient implementation in hardware; these blocks are written in C, C++ or SystemC. The key technology here is high-level synthesis (HLS), which can reduce such system behavioral models to an SoC implementation almost automatically.
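To make the HLS input concrete, here is a minimal sketch of the kind of untimed C++ an HLS tool can consume: a small FIR filter written as an ordinary function. The pragma shown is illustrative rather than any particular vendor's syntax, and the mapping comments describe what such tools typically do, not a specific product.

```cpp
#include <cstdio>

// A 4-tap FIR filter written as plain C++. An HLS tool reads a function
// like this and emits RTL: the loop below becomes a pipelined datapath,
// the 'shift' array becomes a register chain, and the function arguments
// become hardware ports.
constexpr int TAPS = 4;

int fir(int sample, const int coeff[TAPS]) {
    static int shift[TAPS] = {0};  // becomes a shift register in hardware
    int acc = 0;
    // #pragma HLS pipeline  -- the kind of hint an HLS tool might take
    for (int i = TAPS - 1; i > 0; --i) {
        shift[i] = shift[i - 1];
        acc += shift[i] * coeff[i];
    }
    shift[0] = sample;
    acc += shift[0] * coeff[0];
    return acc;
}

int main() {
    const int coeff[TAPS] = {1, 2, 2, 1};           // simple low-pass taps
    const int input[8]    = {0, 0, 4, 4, 4, 4, 0, 0};
    for (int s : input)
        std::printf("%d ", fir(s, coeff));          // filtered output stream
    std::printf("\n");
    return 0;
}
```

The same source can be compiled and tested on a workstation and then handed to the synthesis tool, which is precisely what makes the C/C++/SystemC level attractive as a design entry point.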
Designs like this are very difficult to verify efficiently because of the inevitable mixture of languages and levels of accuracy. Large FPGAs are the medium of choice: they can accept this mixture, and they are fast enough to run a large verification load. FPGAs have the further advantage of introducing no silicon risk; they are, by definition, already silicon-proven.
Going up from the block level allows a virtual platform to be created. The big challenge here is building fast, faithful models of enough of the hardware blocks; otherwise the delay and effort of modeling makes the software development schedule unacceptable.
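As an illustration of what such a fast hardware model looks like in practice, here is a minimal loosely-timed SystemC TLM-2.0 sketch: a memory model plus a trivial CPU-like initiator. It models function and coarse timing only, which is what makes models like this fast enough for software development. The module names and the 10 ns latency are illustrative assumptions, not taken from any real platform.

```cpp
#include <systemc.h>
#include <tlm.h>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <cstring>
#include <vector>

// Loosely-timed TLM-2.0 memory: functionally correct, coarse timing only,
// no pin-level or cycle-level detail.
struct Memory : sc_module {
    tlm_utils::simple_target_socket<Memory> socket;
    std::vector<unsigned char> mem;

    SC_CTOR(Memory) : socket("socket"), mem(0x1000, 0) {
        socket.register_b_transport(this, &Memory::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
        unsigned char* data = trans.get_data_ptr();
        sc_dt::uint64   addr = trans.get_address();
        unsigned        len  = trans.get_data_length();
        if (trans.is_read())  std::memcpy(data, &mem[addr], len);
        else                  std::memcpy(&mem[addr], data, len);
        delay += sc_time(10, SC_NS);  // coarse, assumed access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// Minimal initiator standing in for a CPU model running software.
struct Cpu : sc_module {
    tlm_utils::simple_initiator_socket<Cpu> socket;

    SC_CTOR(Cpu) : socket("socket") { SC_THREAD(run); }

    void run() {
        int value = 42;
        tlm::tlm_generic_payload trans;
        sc_time delay = SC_ZERO_TIME;

        trans.set_command(tlm::TLM_WRITE_COMMAND);  // write 42 to address 0
        trans.set_address(0);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&value));
        trans.set_data_length(sizeof(value));
        socket->b_transport(trans, delay);

        int readback = 0;
        trans.set_command(tlm::TLM_READ_COMMAND);   // read it back
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&readback));
        socket->b_transport(trans, delay);
        std::cout << "read " << readback << " after " << delay << std::endl;
    }
};

int sc_main(int, char*[]) {
    Cpu cpu("cpu");
    Memory mem("mem");
    cpu.socket.bind(mem.socket);
    sc_start();
    return 0;
}
```

Because the model exchanges whole transactions rather than toggling signals every clock edge, it can run orders of magnitude faster than RTL, which is the whole point of the virtual platform.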
Virtual platforms, and some hardware-based approaches such as emulation, straddle a performance chasm. Software developers require performance millions of times faster than chip design needs. Of course, at some level, if the technology were available, everyone would want both high accuracy and high performance: we would all use SPICE all the time if it ran faster than RTL simulation, but it cannot. Instead, performance is purchased by throwing away accuracy.
However, it is still necessary to move up and down this stack dynamically: boot Linux at high performance (seconds, not hours), then drop to a higher level of accuracy to run a couple of frames through a display processor and check that the hardware functions correctly. Run fast until just before a bug seems to occur, then drop down and investigate what is really going on. Neither high performance nor high accuracy alone is good enough; both are required. The fast software-performance models do not have enough accuracy to debug the system hardware, and the slower, accurate models can only boot Linux on a geological timescale.
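There is no standard API for this kind of fidelity switching; purely as a sketch of the idea, the harness below runs a hypothetical FastModel until a trigger point, checkpoints its architectural state, and hands over to a hypothetical AccurateModel for cycle-level investigation. Every class and method name here is invented for illustration.

```cpp
#include <cstdio>
#include <cstdint>

// Shared architectural state that both models can save and restore.
// (Hypothetical; a real platform would also checkpoint memory, devices, etc.)
struct Checkpoint { uint64_t pc = 0; uint64_t regs[32] = {0}; };

// Functional-speed model: no timing, just instruction semantics.
struct FastModel {
    Checkpoint state;
    void run_until(uint64_t trigger) {
        while (state.pc < trigger) state.pc += 4;  // placeholder execution
    }
    Checkpoint save() const { return state; }
};

// Cycle-accurate model: slow, but every pipeline and bus event is visible.
struct AccurateModel {
    Checkpoint state;
    void restore(const Checkpoint& cp) { state = cp; }
    void step_cycles(uint64_t n) {
        for (uint64_t c = 0; c < n; ++c) { /* model pipeline, buses, ... */ }
    }
};

int main() {
    FastModel fast;
    fast.run_until(0x8000);          // e.g. boot to just before the suspect code

    AccurateModel accurate;
    accurate.restore(fast.save());   // hand over architectural state
    accurate.step_cycles(100000);    // now examine cycle-level behavior
    std::printf("switched to cycle-accurate model at pc=0x%llx\n",
                (unsigned long long)accurate.state.pc);
    return 0;
}
```

The hard engineering problem, glossed over in this sketch, is making the two models agree on exactly what state constitutes a checkpoint so the handover is seamless.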
This approach, block-level IP integration combined with virtual platforms, considerably reduces the number of steps between expressing design intent and having working hardware and software. It enables design creation once again to move to the electronic system company, where the most important knowledge, the system knowledge, is found. Implementation then goes directly into FPGAs, for the many systems that are relatively low volume and high software value, or, once FPGAs have been used for prototyping, into silicon manufactured at one of the big foundries, largely bypassing the previous generation of semiconductor companies focused on producing one-size-fits-all standard products.
Several technologies now seem mature enough to enable this transition to software-centric block-level design, and there are companies supplying them.