I am one of those who is old enough to remember similar statements about 1.0 micron, 0.35 micron, and 0.18 micron CMOS technologies. The past claims that those technologies were going to be the bread-and-butter of the industry and provide lower cost for years to come have proven false. I predict that the same thing will happen to 28nm as well.
The march to smaller technologies may slow a bit, but it will continue. Today 14nm chips may be more expensive than 28nm chips due to yield issues, but that will change in a few years (less than five!). Mask costs will always be high, but they are largely irrelevant to FPGA vendors, who care only about wafer costs and yield. Mask costs hurt ASIC starts, yet they have very little impact on FPGA development or FPGA pricing.
As the title suggests, the author is focusing on the use of FPGAs in production quantities - which may be the dominant segment for FPGA companies in terms of revenue. I can see how the trade-offs between an FPGA and an older ASIC node work out in favor of the FPGA for production use. The suggested solution seems to target this.
Where FPGAs are used for ASIC prototyping, the concern we face is a performance slowdown that prevents us from running a real-time prototype. For example, when prototyping an ASIC targeted at a 40nm standard-cell design on a 28nm FPGA, I have seen slowdowns of 10X to 30X. Are there any efforts underway to solve this issue?
A just-published EE Times survey <http://www.eetimes.com/document.asp?doc_id=1322014> reports an interesting trend: "FPGA use is trending steadily down from 45% six years ago (not shown) to 31% last year." That seems to support this blog's conclusion.
"In this paper, we have presented empirical measurements quantifying the gap between FPGAs and ASICs for core logic. We found that for circuits implemented purely using the LUT based logic elements, an FPGA is approximately 35 times larger and between 3.4 to 4.6 times slower on average than a standard-cell implementation." [emphasis mine]
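To put that area gap in perspective, here is a back-of-the-envelope sketch: at the same node and the same wafer cost, per-die silicon cost scales roughly with die area, so a ~35x area penalty becomes a ~35x silicon-cost penalty before yield, test, packaging, and vendor margin enter the picture. All the dollar and area figures below are illustrative assumptions, not numbers from the paper.

```python
# Rough, illustrative numbers only: wafer cost and die areas are assumptions,
# not figures from the paper or the blog post.
WAFER_COST = 5_000.0    # $ per wafer (assumed, same node for both designs)
WAFER_AREA = 70_000.0   # usable mm^2 per 300mm wafer (approximate)

asic_die_area = 10.0                   # mm^2 of core logic as standard cells (assumed)
fpga_die_area = asic_die_area * 35.0   # the paper's ~35x LUT-fabric area gap

def silicon_cost(die_area_mm2):
    """Per-die silicon cost, ignoring yield, dicing, test, and packaging."""
    dies_per_wafer = WAFER_AREA // die_area_mm2
    return WAFER_COST / dies_per_wafer

print(silicon_cost(asic_die_area))   # ~$0.71 per die
print(silicon_cost(fpga_die_area))   # $25.00 per die
```

The ratio of the two results is exactly the 35x area ratio, which is why the gap matters so much at high volume even though it is invisible at low volume, where NRE dominates.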
Most contemporary FPGAs have a lot of specialized logic. These parts contain components such as memory blocks, Multiply-Adders, high-speed PHYs/MACs, etc. Altera and Xilinx even make parts that combine what's essentially an ARM SoC with a programmable fabric. On designs that can benefit from these components, like signal processing chains, the FPGA penalty is considerably lower.
You suggest mask costs as a driver for a switch to FPGAs, but your chart does not show mask costs to be dominant. The "design" portion of the chart shows the most dramatic growth as designs scale down. Why is that? What contributes to the rise in design cost? How do the costs break down for FPGAs vs. other semiconductor products? Have design costs scaled less for FPGAs?
I would think that time-to-market would be a bigger driver than mask costs. After all, for FPGAs to become a large part of the market they would need to be made in large volumes, at which point their larger size and cost per function would easily outweigh the mask costs.
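The trade-off above can be sketched as a simple break-even calculation: the ASIC pays a one-time NRE (masks plus design) but has a lower unit cost, while the FPGA has no NRE but a higher unit cost. Every dollar figure here is a hypothetical assumption for illustration, not actual vendor pricing.

```python
# Hypothetical break-even sketch; all dollar figures are assumptions.
ASIC_NRE = 5_000_000.0   # masks + design NRE at an advanced node (assumed)
ASIC_UNIT = 10.0         # per-unit ASIC cost (assumed)
FPGA_UNIT = 60.0         # per-unit FPGA cost for the same function (assumed)

def total_cost(volume, nre, unit_cost):
    """Total program cost: one-time NRE plus per-unit cost times volume."""
    return nre + unit_cost * volume

# Break-even: ASIC_NRE + ASIC_UNIT * v == FPGA_UNIT * v
break_even_volume = ASIC_NRE / (FPGA_UNIT - ASIC_UNIT)
print(int(break_even_volume))   # 100000 units under these assumptions
```

Below the break-even volume the FPGA wins because it avoids the NRE; above it, the per-unit gap dominates and the ASIC wins, which is exactly why high-volume parts keep migrating away from FPGAs regardless of mask prices.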
Time to market can be very expensive in terms of opportunity cost. Plus, the big SoCs these days have everything but the kitchen sink thrown onto the die - you just power down the parts you do not need. This competes indirectly with FPGAs, where you also buy oversized silicon in order to get just what you need. Finally, in a low-power world where mobile devices of all kinds are the growth drivers, are FPGAs coming in at low enough power, even if they ace the rest of the problem and let you reach market fast?
I would guess that the strongest parts of the market for FPGAs are the things FPGAs excel at, like reconfigurable signal processing, system prototyping, or low-to-mid-volume specialized pipelines. In other words, the things they have always been good at, rather than displacing other segments of the market.