“One of the fundamental benefits of Moore’s Law is smaller feature sizes, primarily to get lower cost per transistor so we can do more things” in a similarly sized chip, he said.
Intel has already announced that it is making 14 nm chips in volume at a lower cost per transistor than its prior 22 nm generation. It also said it is developing a 10 nm process that it believes will deliver a still lower cost per transistor.
One point that was not, perhaps, stressed enough is the huge energy saving at the system level enabled by monolithic 3D.
We all understand the energy savings from shortened interconnect in 3D layers. Yet it seems that for large systems this saving is limited to about a factor of 2 at best. Actually, it is potentially much more. For large-scale processing such as HPC, most of the energy goes to shuttling data off-chip across multiple processors and multiple memory subsystems. One cannot simply stack many layers of HPC processors, as the heat -- even if conducted across the layers -- still needs to be dissipated. But one can imagine a 3D sandwich of processor and memory subsystems arrayed across a huge wafer-sized chip. There is almost no off-chip driving of data in such an array, the memory is very close to the processors, and the "only" problem is reliability (Amdahl growls here :-). Yet having a 3D redundancy layer that can correct for hundreds of faults in the processor logic neatly works around the reliability issue. And the power has a much larger area to be dissipated from.
In other words, new 3D-enabled architectures could allow for almost infinitely sized chips, overcoming the biggest energy barrier in HPC and exascale computing.
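To put rough numbers on the data-movement argument above, here is a minimal back-of-the-envelope sketch. The pJ/bit figures are illustrative assumptions of my own, not measurements from any cited work; actual values depend heavily on the interconnect technology:

```python
# Back-of-the-envelope: energy to move data off-chip vs. within a 3D stack.
# Both pJ/bit figures below are assumed, order-of-magnitude values.
OFF_CHIP_PJ_PER_BIT = 10.0   # assumed: driving data across board-level links
STACKED_PJ_PER_BIT = 0.1     # assumed: short vertical hop to an adjacent memory layer

def transfer_energy_joules(bytes_moved, pj_per_bit):
    """Energy in joules to move `bytes_moved` bytes at `pj_per_bit` per bit."""
    return bytes_moved * 8 * pj_per_bit * 1e-12

terabyte = 1e12  # bytes
off_chip = transfer_energy_joules(terabyte, OFF_CHIP_PJ_PER_BIT)
stacked = transfer_energy_joules(terabyte, STACKED_PJ_PER_BIT)
print(f"off-chip: {off_chip:.1f} J, stacked: {stacked:.1f} J, "
      f"ratio: {off_chip / stacked:.0f}x")
```

Even with these crude numbers, the point holds: the energy gap is set by the per-bit cost of the link, so keeping memory in-stack wins by whatever ratio the two technologies differ, far more than the factor-of-2 wire-shortening argument suggests.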
3D Guy, I am glad that you like the term - Moore's "lag" - but the credit should go to Max Maxfield (The EE Times editor).
As to the VC returning to the semiconductor space let me make the following points:
A. For VC investments it takes years before real high volume results, so even if you are correct about these technologies being niche, there is no contradiction there.
B. The escalating chip-design costs associated with dimensional scaling drove the VCs out of semiconductors. Once the market develops alternative technologies to add value, the cost of masks and all other NRE-related costs will trend down, and with lower investment requirements more VCs would again consider these types of ventures.
C. If you believe that IoT and wearables will be anything close to the many trillions of dollars that Cisco and others are forecasting, then you have to agree that vibrant venture activity is a must, which leads to the kind of environment that VCs are part of.
Packaging has an important role, as off-chip interconnect is 1,000x worse than on-chip interconnect. Any improvement would impact end-system power and performance, BUT keeping the cost down has been the problem so far.
At his EE Live! 2014 keynote, "Bunnie" Huang talked about how slowing Moore's Law helps small developers, especially Open Hardware teams. It used to be that by the time such a team succeeded in shipping a product, standard PCs would have leap-frogged them in performance. With Moore's Law slowing, small developers have a better chance to ship products while they are still relevant.
Here's an EE Times article with more detail: www.eetimes.com/document.asp?doc_id=1321796
Gondalf: Monolithic 3D can provide more than just one node of scaling. Both university and industry studies show it...the challenge has been how to make it. The upcoming S3S conference has some papers suggesting simple ways to get monolithic 3D. There is a good bit of cost savings too; the footprint is 25% of the original and the total silicon area is 50% when folding a logic design into two layers. Why? Mostly repeater/buffer savings and smaller transistor sizes...the average wire length in the chip goes down. This blog goes through some of the details (www.monolithic3d.com/3d-ic-edge1). And both layers are mono-crystalline silicon...with layer transfer, the cost of the top-layer mono-Si is amortized over the 10-20 times you use the donor wafer. Bottom line...it looks just like a node of scaling. Plus, designers/EDA now have another degree of freedom to exploit for compact and efficient architectures. Also, why not mix layers? Two logic layers (for logic redundancy), then one or more memory layers (maybe NV), then two more logic. Cool on both sides if needed.
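The folding arithmetic in the comment above is internally consistent; a trivial sketch, taking the claimed 50%-total-silicon figure as given (the repeater/buffer and transistor-sizing savings are not modeled here), shows how the 25% footprint follows:

```python
# Sketch of the folding arithmetic claimed for monolithic 3D.
# The 0.50 total-silicon figure is taken as given from the comment; the
# savings beyond a naive 2-way split come from repeater/buffer removal and
# smaller transistors as average wire length drops, which we don't model.
original_2d_area = 1.0               # normalized area of the flat 2D die
total_silicon_after_folding = 0.50   # claimed: half the silicon overall
layers = 2                           # logic folded into two stacked layers

footprint = total_silicon_after_folding / layers
print(f"footprint per layer: {footprint:.0%} of the original die")
```

A naive fold with no wire-length benefit would give 50% footprint per layer and 100% total silicon; the claimed gains cut both in half, which is why it "looks just like a node of scaling."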
Yes, 3DIC in general has to deal with improving heat conduction. Heat removal is a matter of getting high enough lateral and vertical conduction to a sufficient heat sink to overcome the operational heat generation. This blog addresses this with reference to an IEDM 2012 paper by Stanford (www.monolithic3d.com/blog/can-heat-be-removed-from-3d-ic-stacks) on how to do that. Lateral conduction for the monolithic 3DIC case is solved by rigorously using the Vss/Vdd network to move the heat laterally as if the 2nd-layer 'substrate' were bulk Si, and the vertical conduction is taken care of by the high density of vertical 'heat pipes' available in monolithic 3D...10^6-10^8 per cm^2. The IEDM work ran both layers (substrate and monolithic 2nd layer) really hard and hot, and reasonable cooling was accomplished with the interlayer vias and power grids. The larger issue was getting sufficient heat-sink capability...they had to go to liquid cooling to get all the watts out of the stack.
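For a feel of why that via density matters, here is a crude conduction estimate using G = kA/L per via. Every number is an assumed, order-of-magnitude value of my own (copper fill, 100 nm vias, 1 um interlayer gap, mid-range density), not data from the IEDM paper:

```python
import math

# Rough vertical thermal conductance of a monolithic-3D via array.
# All figures below are assumptions for illustration only.
K_COPPER = 400.0        # W/(m*K), bulk copper conductivity (assumed Cu fill)
VIA_DIAMETER = 100e-9   # m, assumed nanoscale monolithic via
VIA_LENGTH = 1e-6       # m, assumed interlayer dielectric thickness
DENSITY_PER_CM2 = 1e7   # vias/cm^2, mid-range of the 10^6-10^8/cm^2 cited

area = math.pi * (VIA_DIAMETER / 2) ** 2    # cross-section of one via
g_per_via = K_COPPER * area / VIA_LENGTH    # W/K per via, G = kA/L
g_per_cm2 = g_per_via * DENSITY_PER_CM2     # W/K per cm^2 of die

delta_t = 10.0                              # K, tolerated interlayer temperature drop
print(f"~{g_per_cm2 * delta_t:.0f} W/cm^2 conducted at a {delta_t:.0f} K drop")
```

Tens of W/K per cm^2 from the vias alone is why the vertical direction is not the bottleneck; as the comment notes, the hard part is the heat-sink interface at the top of the stack.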
Zvi, Well-researched article, as always. I like the term "Moore's Lag". Very catchy. You mention "Moore's Lag" will cause:
(1) Innovation in new technologies, and VCs coming back to invest in the industry.
(2) While (1) may happen, I think we should stay awake to the possibility that new technologies like SOI and subthreshold and others will remain niche for many more years and, even if used, will not provide the long-term benefits scaling used to. Net result: semiconductor technology will get even more commoditized, and differentiation will happen at higher levels (e.g., at the system and application levels). The VCs will continue to stay away from new semiconductor ventures, both because of their commoditized nature and because of the huge amount of investment needed to adequately prove out any idea and make money off it.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Are the design challenges the same as with embedded systems, but with a few developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.