In the remainder of this column I will present my company's theory of why this did not happen, and discuss some potential implications for the future. We believe that the stagnation of FPGA growth is mostly due to the inefficiency of FPGA technology.
Most FPGAs use SRAM as the programming or "switch" technology, and interconnect is the dominant resource in modern designs. Within an SRAM-based FPGA, the programming of an interconnect is implemented by an SRAM cell that controls a pass-transistor driver or a bidirectional driver. The following chart illustrates the diffusion area associated with such a Programmable Interconnect Cell (PIC), assessed at the 45nm node and compared to the size of its mask-defined equivalent -- the via. The results indicate that the cell area overhead of the SRAM PIC is over 30X that of a via; and this does not include the additional circuit overhead needed to program and control the SRAM PIC.
This number has been reported in the industry for many years. A 2007 research paper by Ian Kuon and Prof. Jonathan Rose (IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems) states this clearly: "In this paper, we have presented empirical measurements quantifying the gap between FPGAs and ASICs for core logic. We found that for circuits implemented purely using the LUT based logic elements, an FPGA is approximately 35 times larger and between 3.4 to 4.6 times slower on average than a standard-cell implementation."
This high programmability overhead suggests that many current ASIC designs cannot be replaced by their FPGA equivalents. Consequently, when advanced-technology NRE is too high, the alternative is to use an older-node ASIC technology. Since the number-one driver of mask-set and NRE costs is the associated capital equipment, the cost of older technologies goes down dramatically over time as that equipment depreciates. The 30X area penalty means that one could use a node that is five generations older and still have a competitive solution compared to a current-node FPGA: each generation roughly doubles gate density, and five doublings amount to about 32X. Taking into account the 60% gross margin of the FPGA companies, along with the overhead of using a fixed-size device from an FPGA family rather than a custom-tailored Standard Cell device, these factors could compensate for an additional two nodes. Looking again at the design costs as illustrated in the Xilinx chart above, we can see that at 180nm the design costs are quite low and the mask-set costs are too small even to register on the chart.
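As a sanity check on those node-count claims, here is a back-of-the-envelope sketch in Python. The ~2X density gain per generation, and the idea that gross margin translates directly into equivalent generations, are simplifying assumptions for illustration, not measured data:

import math

# FPGA-vs-standard-cell area overhead, per the Kuon & Rose measurements.
AREA_OVERHEAD = 30.0
# Assumed density improvement per process generation (~0.5X area scaling).
DENSITY_GAIN_PER_NODE = 2.0

# How many generations does the 30X overhead "buy" an older-node ASIC?
generations = math.log(AREA_OVERHEAD, DENSITY_GAIN_PER_NODE)
print(f"30X overhead is worth ~{generations:.1f} generations")  # ~4.9

# A 60% gross margin means the customer pays ~2.5X the silicon cost
# (1 / (1 - 0.6)), worth roughly another 1.3 generations -- before
# counting the waste of a fixed-size FPGA family member.
margin_factor = 1.0 / (1.0 - 0.60)
print(f"60% margin adds ~{math.log2(margin_factor):.1f} generations")

The numbers land close to the five-plus-two figure used above.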
What has really happened is that many designs chose to use older-node standard cells instead of an FPGA. In his keynote presentation at the Synopsys Users Group (SNUG 2014), Aart de Geus, Synopsys CEO, presented multiple slides illustrating the value of Synopsys' newer tools in improving older-node design effectiveness. The following chart is one of them; its left-hand side shows the current distribution of design starts. One can easily see that the most popular current design node is 180nm. Clearly, even such an old node provides a better product than the state-of-the-art FPGA.
From this, we understand why the escalating mask-set and NRE costs have not resulted in a surge of FPGA designs, but rather have pushed designers to use older technology nodes that have depreciated enough to make their NRE costs less of an issue. The following chart of design starts per node by IBS was recently presented in a Synopsys article, "The new landscape of advanced design." It shows the design-start trend over time and, not surprisingly, indicates that designers are taking longer to migrate to more advanced nodes; it also shows that the up-and-coming node these days is just 65nm.
Design starts per year (Source: IBS Dec 2012)
As I noted earlier, most analysts now accept that 28nm is going to offer the lowest cost-per-gate for many years to come. This change in Moore's Law has many potential implications, and one of them could affect the future of FPGAs.
Traditionally, FPGAs have been, and still are, a technology driver for new logic nodes. This early adoption has given FPGA customers a steadily better programmable platform for their designs. Now that dimensional scaling no longer provides better cost, pressure will build on FPGA customers to use a depreciated technology node as an alternative. Over time, designers will see the NRE of 65nm coming down to roughly what 180nm NRE is today. Comparing a 65nm Standard Cell design to a 28nm FPGA suggests that far more designs could be better off with Standard Cell, as the sketch below illustrates. Since 20nm and 14nm FPGAs will not provide better cost than their 28nm predecessors, the FPGA market could face a growing challenge in the coming years.
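To make that 65nm-versus-28nm comparison concrete, here is a hedged sketch along the same lines as before; the assumption that gate density scales with the inverse square of the drawn feature size is a rough approximation that ignores real-world layout effects:

# Rough comparison of a 65nm standard-cell design against a 28nm FPGA.
FPGA_AREA_OVERHEAD = 30.0            # Kuon & Rose area gap

# Assumed density scaling: inverse square of the feature size.
density_ratio = (65 / 28) ** 2       # 28nm is ~5.4X denser than 65nm
net_advantage = FPGA_AREA_OVERHEAD / density_ratio
print(f"28nm silicon is ~{density_ratio:.1f}X denser than 65nm")
print(f"65nm Standard Cell still wins by ~{net_advantage:.1f}X in area")

Even spotting the FPGA two full generations of silicon, the standard-cell design comes out several times smaller, which is the heart of the argument.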