Since 28 nm is the first HKMG generation, it looks like a lost opportunity there, especially with Si/SiON running in parallel. With the extra cost of replacing the dummy gate with two different metals, the reversal of the foundry cost-reduction trend already starts at 28 nm with HKMG, and continues successively with double patterning and the FinFET process.
Eyetalian Staliion: I simply disagree on a number of counts:
1) The price of 14nm is too high currently, just as you said. But my emphasis is on "currently". Five years from now, it will be less than half of current prices. That's what the FPGA vendors are banking on. All foundries will improve their yields.
2) 14nm offers better performance and static power trade-offs than 28nm, because of the low-leakage properties of finfets.
3) Believe it or not, 14nm is also 10% to 20% faster than 28nm. You may consider that disappointing, but the additional performance is there if you care for it. If not, go for very low static power, that's what finfets are for.
4) ASIC tools for 14nm and 10nm suck. Why? Two reasons. (a) It is too early in the game. Tools will get better with time. (b) ASIC vendors do not invest heavily in that stuff, simply because they expect very few customers for the first five years. (In fact, for the first couple of years the only customers will be Intel and the FPGA vendors.) This is precisely why FPGAs do so well. They will invest in the tools. The FPGA tools are better than ASIC tools any time, but especially in the early days of a brand-new process node.
The situation is, unhappily, more complex. It's not just a matter of maturing a process node to drive down cost. After 28nm, the 3P improvement curve (price/cost, performance and power) simply broke. You could no longer expect to get an automatic incremental improvement in all three factors simply by moving your design into deeper submicron. In fact, designers across the chip industry have been discovering, to their great frustration, that no matter how intelligently they design and how much money they spend on tools, any improvement in two of the factors worsens the third.
There is an additional complication. A stunning number of even highly complex designs are finding themselves pad-limited in the deepest submicron nodes and simply don't have anything pressing to shoehorn in to fill the white space. There's already tons of memory on SoCs and ASICs, so adding even more isn't all that useful.
The reason SoC designs aren't finding other interesting things to integrate is the system vendors, who are themselves stagnating in terms of marketable innovations. No matter where you look in the three C's (Communications, Computing and Consumer), the extended weakness in consumer discretionary income is negatively affecting sales of smartphones, tablets, HDTVs, home gateways and all other consumer electronics. This in turn negatively impacts demand for wireless backhaul, thus reducing the need for new blades or storage in datacenters.
To grow, chip companies are not only going to have to look for applications outside of the three C's and mobile, but also start looking at new process technologies that overcome the deficiencies of silicon - deficiencies in flexibility, durability and power consumption that have hampered the growth of a multitude of applications in industrial materials, textiles and sensor markets.
History has proven that, over time, cheaper and easier to use trumps more expensive and harder to use (even if the latter is more powerful). That's the Innovator's Dilemma. I do believe that Lattice would be able to disrupt the FPGA market with opportunities such as Project Ara (Google picked them, not Altera, not Xilinx, for a reason) and IoT.
Regarding the difficulty of programming FPGAs, I think that we (Synflow) could help. We have created a C-like language for hardware design (cycle-accurate and bit-accurate) called C~ that makes it much easier/faster to design. It's supported with an Eclipse-based IDE and free for personal use. We hope to create a community around C~ (including open-source IP cores) with the goal of replacing Verilog/VHDL for hardware design. Feel free to check it out!
@betajet: I agree with your comments about FPGA tools and methodologies - at least for Xilinx, because that's all I've ever used. Right from the early '90s, Xilinx had the best tech support among all the companies I'd seen (even Synopsys was not as good at that time) - with their detailed documentation, tech support centres and superior app notes. I'd always admired this in Xilinx and felt that the excellent quality of their tech support was the main reason why they were hugely successful.
However, I was disappointed recently, when we wanted tech support on a particularly vexing tool problem, to hear that direct Xilinx tech support is now only available for "special partners", and that other customers need to go to their distributors (who are not always equipped for it). We are still struggling to solve the issues we face. I also remember seeing somewhere that Xilinx's financials and leadership position are not as stellar as they used to be! I wonder if their decline is due to poor tech support, or whether their support has deteriorated due to poor financials.
@betajet: Keeping the bitstream/configuration details secret just adds to speculation that there is some hidden magic. The real problem is that every flip-flop and every LUT connection has at least two SRAM cells on its input/output, and every connection between a vertical and a horizontal wire is another cell. The wiring delay has caused Xilinx to launch a new design tool suite. The wiring delay swamps the circuit delay.
Horizontal micro-code has been used successfully for DSP and can be used for all control. Block memories with a few LUTs can be used for truly programmable control logic, without all the wiring and timing-closure effort of the current methodology. The access time for the block memory and LUTs is independent of the logic function being evaluated, unlike gates, where anything more complex than AND or OR requires more than one level.
Only the primitive peripheral logic needs to be discrete FFs and LUTs, because of the unique interfaces.
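The micro-coded control idea above can be sketched in software. Below is a minimal, purely illustrative Python model (the micro-word format and the LOAD/ADD/STORE sequence are hypothetical, not from any vendor): a block memory is modeled as a ROM of micro-words, each packing control signals plus a next-address field, so evaluating the control function is a single lookup per cycle regardless of its logical complexity.

```python
# Hypothetical micro-word format: control bits plus a next-address field.
# A real implementation would pack these into one block-RAM word.
CONTROL_STORE = [
    {"op": "LOAD",  "write_en": 0, "next": 1},
    {"op": "ADD",   "write_en": 0, "next": 2},
    {"op": "STORE", "write_en": 1, "next": 0},  # loop back to the start
]

def run_sequencer(cycles):
    """Step the micro-sequencer; each cycle is one ROM lookup."""
    addr, trace = 0, []
    for _ in range(cycles):
        word = CONTROL_STORE[addr]   # one access, whatever the complexity
        trace.append((word["op"], word["write_en"]))
        addr = word["next"]          # sequencing comes from the ROM too
    return trace

print(run_sequencer(4))  # → [('LOAD', 0), ('ADD', 0), ('STORE', 1), ('LOAD', 0)]
```

Reprogramming the control logic then means rewriting the ROM contents, not re-routing wires, which is the point of the comment: the lookup latency is fixed, and timing closure does not depend on the function being evaluated.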
I thought we were going to see 'true innovation' from the likes of Achronix. It appeared that they were going to provide us with a programmable device that had amazing performance by utilizing asynchronous design techniques. However, when I look at their datasheets I can't tell how they're any different from Xiltera.
Maybe an FPGA company can provide us large islands of clockless logic and the tools to build something useful with them?
How about useful internal hard macrofunctions beyond Multipliers, DSPs and FIFOs? Large CAMs, barrel shifters, and other things that take up a large chunk of LUT/routing resources come to mind...
Doug asked: Also, what are the FPGA companies dreaming up in terms of true innovation that breathes life into programmable logic?
From what I read here at Geek Times, it seems that Xilinx and Altera are mostly making their FPGAs even more expensive and trying to use marketing to convince customers that putting more functionality into programmable logic is a solution to their problem. My ambiguous use of the word "their" is deliberate.
I'm seeing innovation from Lattice, especially the Silicon Blue devices. (I've looked at the architecture but haven't designed with them yet.) There is a need for small, really cheap FPGAs. I've used Actel -- now Microsemi -- ProASIC3 and Igloo and they're good, but pretty expensive. Lattice could very well grow its market share by coming up from the bottom -- I'd keep an eye on them.
IMO, what's suffocating FPGAs is the fact that no FPGA vendor (except for Atmel, long ago) publishes the internal details necessary for people to write alternate programming tools. This makes it difficult for people to get started creating things with FPGAs, since the available tools and languages have very steep learning curves. Now if you're working with FPGAs all the time you get used to the tools and after you've learned the various tricks FPGA design becomes easy and you don't appreciate what a new user faces. But if FPGA vendors want to get new users, that learning curve has got to come down. Since their marketing keeps saying how easy it is to design FPGAs, they don't seem to recognize that they have a problem. So I see the only solution as opening up the bitstreams so others can breathe life into FPGAs.
I recently watched an on-line discussion as to which was worse, VHDL or Verilog. This was like watching a debate as to whether Fortran or Cobol was worse for writing an operating system, with the assumption that the CPU manufacturer didn't document the machine language so you could only program your CPU using one of those two languages. IMO, that's where we are with FPGA design. Stifling, isn't it?
I'd like to see some predictions of how Toshiba's "Fit Fast Structured Array" will impact FPGA growth. These seem to be pin compatible FPGA replacement gate arrays with very low NRE. If this technology takes off, FPGA volumes could be impacted dramatically.
Also, what are the FPGA companies dreaming up in terms of true innovation that breathes life into programmable logic?