Along with many in the industry, we were pleased to see the release of Intel's 14nm technical information on August 11. It does appear that, after an extended delay, the 14nm process node is coming, and with it some long-awaited clarity about Intel's 14nm technology.
Intel is clearly presenting this release as evidence that the historical trend of cost reduction and dimensional scaling continues. Undoubtedly, Intel's 14nm technology is a significant technological achievement and deserves full respect and appreciation. Yet if one takes a closer look at these numbers, especially against prior information provided by Intel, there remains room for some clarification.
The EETimes article referenced above provides a number of comparisons between Intel's 14nm FinFET technology and its 22nm process. According to the data provided by Intel, compared to the 22nm process, the 14nm technology will have the following:
- 42nm fin pitch, a 0.70x reduction
- 70nm gate pitch, a 0.78x reduction
- 52nm interconnect pitch, a 0.65x reduction
- 42nm-high fins, up from 34nm
- a 0.0588 square micron SRAM cell, a 0.54x reduction
- ~0.53x overall area scaling compared to 22nm
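As a sanity check, the quoted ratios can be reproduced from the published pitches. The 14nm values are from the list above; the 22nm reference values (60nm fin pitch, 90nm gate pitch, 80nm interconnect pitch) are the commonly cited figures from Intel's earlier 22nm disclosures, assumed here for the comparison:

```python
# Reproduce Intel's quoted 22nm-to-14nm scaling ratios.
# 14nm pitches are from the list above; 22nm pitches are the commonly
# cited values from Intel's 22nm disclosures (an assumption here).
pitches_22 = {"fin": 60, "gate": 90, "interconnect": 80}   # nm
pitches_14 = {"fin": 42, "gate": 70, "interconnect": 52}   # nm

for name in pitches_22:
    ratio = pitches_14[name] / pitches_22[name]
    print(f"{name} pitch: {pitches_14[name]}nm / {pitches_22[name]}nm = {ratio:.2f}x")

# Area scaling implied by the two in-plane pitches (fin pitch x gate pitch):
area_scaling = (42 / 60) * (70 / 90)
print(f"implied area scaling: {area_scaling:.2f}x")  # ~0.54x, close to Intel's ~0.53
```

The product of the fin-pitch and gate-pitch reductions lands near Intel's stated ~0.53x area scaling, which suggests that figure is pitch-derived rather than measured from a particular layout.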
Let's review the SRAM cell size of 0.0588µm². Yes, this is the smallest published size for an SRAM bitcell we have seen thus far. Yet, in our blog Intel vs. TSMC: An Update, we wrote: "Accordingly, the 14nm node 6T SRAM size for conventional dimensional scaling should be 0.092 * (14/22)² = 0.037 square microns, and if Intel can really scale more aggressively to compensate for the extra capital costs, then their 6T SRAM at 14nm should be about 0.03 square microns or even smaller."
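The arithmetic behind that quoted projection is straightforward, using only the two bitcell figures already given in the text:

```python
# Expected 14nm 6T SRAM bitcell from simple dimensional scaling of
# Intel's published 22nm bitcell (0.092 square microns, quoted above).
cell_22 = 0.092                       # um^2, Intel 22nm 6T SRAM bitcell
scaled_14 = cell_22 * (14 / 22) ** 2  # ideal dimensional scaling
announced_14 = 0.0588                 # um^2, Intel's announced 14nm bitcell

print(f"ideal-scaled 14nm bitcell: {scaled_14:.3f} um^2")   # ~0.037
print(f"announced 14nm bitcell:    {announced_14} um^2")
print(f"announced vs ideal:        {announced_14 / scaled_14:.2f}x larger")
```

In other words, the announced cell is roughly 1.6 times larger than simple dimensional scaling would predict.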
The following slide comes from Intel's 2012 information release:
In the following table we calculated the expected bitcell size for 14nm according to simple dimensional scaling rules based on each of the bitcell sizes for each of the technology nodes in the above 2012 chart:
As we see, the above table indicates that SRAM bitcell scaling has been a challenge for some time, but at 14nm it has broken away from the historical trend entirely.
The recent Intel presentation argues for the continuation of historical scaling cost reduction to the 14nm node as illustrated in the following slide:
As we see, the graph in the middle shows the exponential increase in wafer cost with scaling. The argument made is that a more-than-2X increase in transistor density compensates for the increase in wafer cost, resulting in the right-most chart showing a consistent reduction in cost per transistor. However, the following Intel chart does not show a better-than-2X density increase from 22nm to 14nm:
Actually, the basic transistor gate pitch indicates only a 1.64X increase in transistor density: a 0.78x pitch reduction yields 1/0.78² ≈ 1.64X. Furthermore, this is before accounting for the increased RC delay of the narrower metal lines, which will require the insertion of many more buffers and repeaters, thereby further reducing the effective density increase.
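The 1.64X figure follows directly from the quoted 0.78x gate-pitch reduction, treating density as scaling with the inverse square of the pitch:

```python
# Transistor density gain implied by gate-pitch scaling alone.
# 0.78x is Intel's quoted gate-pitch reduction from 22nm to 14nm.
gate_pitch_scale = 0.78
density_gain = 1 / gate_pitch_scale ** 2
print(f"density gain from gate pitch: {density_gain:.2f}X")  # ~1.64X, short of 2X
```

A 1.64X density gain against a wafer-cost increase that the cost-per-transistor argument assumes is offset by more than 2X is the crux of the concern raised here.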
Returning to the SRAM bitcell: the announced size of the Intel 14nm bitcell, as presented above, is not going to help offset the increase in wafer cost. It seems this will provide the subject matter for more comments and blogs. However, I see no reason to change my prior statements published in the EETimes column titled "28nm – The Last Node of Moore's Law." As always, I welcome your questions and comments.