We have been hearing about the imminent demise of Moore's Law quite a lot recently. Most of these predictions have been targeting the 7nm node and 2020 as the end-point. But we need to recognize that, in fact, 28nm is actually the last node of Moore's Law.
Beyond this point, we can continue to make smaller transistors and pack more of them into the same size die, but we cannot continue to reduce the cost. In most cases, in fact, the same SoC will actually have a higher cost!
The famous Moore's Law was presented as an observation by Moore in his 1965 Electronics paper, "Cramming More Components onto Integrated Circuits," in which he said:
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.
Clearly, Moore's Law is about "The complexity for minimum component costs," and the minimum component cost will be at the 28nm node for many years, as we will detail in the remainder of this blog.
The following chart was presented by ST's Joël Hartmann (EVP of Manufacturing and Process R&D, Embedded Processing Solutions) during SEMI's recent ISS 2014 Europe Symposium:
Hartmann is making the case that the "Moore's Law discontinuation due to cost stagnation or increase" applies to bulk technologies, which is the technology base of the majority of the industry. ST's information is backed by GlobalFoundries as we can see from the following chart presented at the 2013 SOI Consortium workshop in Kyoto, Japan.
The above GlobalFoundries chart shows that the lowest-cost transistor is at the 28nm poly/SiON node. Beyond 28nm, scaling becomes extremely expensive due to double-patterning lithography, HKMG, FinFETs, and so on. The increase in wafer cost is illustrated by the recent Nvidia chart from Semicon Japan in December 2013, shown below:
The increase in wafer cost eats away the 2X transistor density gain per node, as is illustrated by the following ASML slide from Semicon West in 2013:
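To make this arithmetic concrete, here is a minimal sketch in Python. The numbers are illustrative assumptions, not figures taken from the charts above: cost per transistor is simply wafer cost divided by transistors per wafer, so a sufficiently large rise in wafer cost cancels most of the 2X density gain.

```python
# Illustrative cost-per-transistor arithmetic (assumed numbers, not chart data).
# Cost per transistor = wafer cost / transistors per wafer.

def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

# Hypothetical 28nm baseline: normalized wafer cost 1.0, normalized density 1.0.
base = cost_per_transistor(1.0, 1.0)

# Hypothetical next node: 2X transistor density, but suppose the wafer costs
# ~60% more (double patterning, HKMG, FinFET, etc.).
next_node = cost_per_transistor(1.6, 2.0)

print(base)       # 1.0
print(next_node)  # 0.8 -- only ~20% cheaper, versus the historical ~50% per node
```

Historically, wafer cost stayed roughly flat while density doubled, halving the cost per transistor each node; in this assumed scenario most of that benefit is gone.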
However, the silicon area of an SoC end product depends far more on the SRAM bit-cell size than on the general transistor density. This is the fundamental challenge now facing dimensional scaling: SRAM bit-cell scaling has slowed dramatically beyond 28nm. At 28nm, the bit-cell size is about 0.12µm². The following chart by IMEC was reported in "Status update on logic and memory roadmaps" in October 2013:
Beyond 28nm, the SRAM bit-cell scaling rate is about 20% per node instead of the historical 50%. And the situation is actually far worse, as illustrated by the following chart, presented at ISSCC 2014 in an invited paper by Dinesh Maheshwari, CTO of the Memory Products Division at Cypress Semiconductor. It was also at the center of our recent blog, "Embedded SRAM Scaling is Broken and with it Moore's Law."
Accordingly, the SRAM Mb/mm² scales far less than the bit cell due to factors such as:
- Smaller transistors have less drive, requiring the SRAM to be broken into smaller blocks, which adds area overhead.
- Smaller transistors exhibit more variation, again requiring the SRAM to be broken into smaller blocks.
- More overhead circuitry is needed, such as read-assist and write-assist circuits.
- Tighter metal pitches beget higher RC, once again requiring the SRAM to be broken into smaller blocks.
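The factors above can be summarized as a loss of array efficiency: the effective Mb/mm² is the raw bit-cell density multiplied by the fraction of the macro area that is actually bit cells. The sketch below uses the 0.12µm² figure from above, but the efficiency percentages are assumptions for illustration only.

```python
# Sketch of why SRAM Mb/mm^2 scales less than the bit cell (assumed numbers).
# Effective density = raw bit-cell density x array efficiency, where array
# efficiency is the fraction of macro area actually occupied by bit cells.

MBIT = 1024 * 1024  # bits per Mb

def mb_per_mm2(bitcell_um2, array_efficiency):
    bits_per_mm2 = 1e6 / bitcell_um2  # 1 mm^2 = 1e6 um^2
    return bits_per_mm2 * array_efficiency / MBIT

# 28nm: ~0.12 um^2 bit cell, with a hypothetical 70% array efficiency.
d28 = mb_per_mm2(0.12, 0.70)

# Next node: the bit cell shrinks only ~20% (0.8x area), and smaller blocks
# plus read/write-assist circuits push the assumed efficiency down to ~60%.
dnext = mb_per_mm2(0.12 * 0.8, 0.60)

print(round(dnext / d28, 2))  # ~1.07 -- almost no gain despite the smaller cell
```

Under these assumptions, a full node transition buys only about 7% more SRAM density, which is why embedded-memory-heavy SoCs see so little area benefit beyond 28nm.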
Moreover, SoCs need I/O pads and their circuits, plus other analog circuitry, all of which scale at a rate far below 2X per node. Furthermore, the exponential increase in BEOL RC results in an exponential increase in the number of drivers and repeaters, as illustrated by the following chart, presented by Geoffrey Yeap, VP of Technology at Qualcomm, in his invited IEDM 2013 paper. This suppresses the effective gate-density increase to a factor of only 1.6X, or less.
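One way to see how repeater overhead suppresses the density gain is to count only the gates left over for logic. The repeater fractions below are assumptions chosen for illustration, not figures from Yeap's paper:

```python
# Sketch: repeater/driver overhead suppressing gate-density gain (assumed numbers).
# If a growing fraction of all gates must be spent on repeaters and drivers,
# the density gain usable for logic shrinks accordingly.

def effective_gain(raw_gain, repeater_frac_old, repeater_frac_new):
    # Usable (non-repeater) gates per unit area, old node vs. new node.
    usable_old = 1.0 * (1.0 - repeater_frac_old)
    usable_new = raw_gain * (1.0 - repeater_frac_new)
    return usable_new / usable_old

# Hypothetical: 2X raw density, but repeaters grow from 10% to 28% of all
# gates as BEOL RC rises exponentially.
print(round(effective_gain(2.0, 0.10, 0.28), 2))  # 1.6
```

With these assumed fractions, the nominal 2X density gain delivers only 1.6X of usable logic, matching the order of suppression described above.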
Summarizing all of these factors, it is clear that -- for most SoCs -- 28nm will be the node for "minimum component costs" for the coming years. As an industry, we are facing a paradigm shift because dimensional scaling is no longer the path for cost scaling. New paths need to be explored such as SOI and monolithic 3D integration. It is therefore fitting that the traditional IEEE conference on SOI has expanded its scope and renamed itself as IEEE S3S: SOI technology, 3D Integration, and Subthreshold Microelectronics.
The 2014 S3S conference is scheduled for October 6 through October 9, 2014, at the Westin San Francisco Airport. This new unified conference will improve efficiency and establish itself as a world-class international venue to present and learn about the most up-to-date trends in CMOS and post-CMOS scaling. The conference will offer both educational sessions and cutting-edge research in SOI, monolithic 3D, and other supporting domains.
These technologies were not part of the semiconductor mainstream in the past, so this is a golden opportunity to catch up with them now. Please mark your calendar for this opportunity to contribute to and learn about SOI and monolithic 3D technology, as these technologies are well positioned to maintain the semiconductor industry's momentum into the future.