One way to save Moore's Law from an unpleasant and industry-disrupting demise is for manufacturing process technology developers to make a series of changes at a given node – say 20-nm – but label each successive change with a smaller number.
In that way, double patterning with deep-immersion lithography can continue to produce chips in process technologies labeled 16 nm, 14 nm, 10 nm and so on, thereby keeping Moore's Law moving forward.
And as long as some feature on the chip can be measured at the appropriate dimension, it should be possible to find a way to justify the label.
Of course, the IC die-area savings and cost advantages that we have become used to from previous process node transitions would not accrue with these forthcoming node transitions. However, since chip designers at the leading edge are becoming more interested in power savings than in area savings, all may be well as long as successive process nodes produce ICs with lower power consumption.
There is the other Moore's law: cost per acre of finished chip remains constant.
In some ways the two are dual: as ways are found to reduce dimensions, and if that delivers functional benefits, there will be a tendency to hold cost roughly constant and cram more function into the same area.
But as power density, leakage, and other limits erode the advantage of making smaller chips, we could see a trend toward making cheaper chips. After all, the silicon only costs a few cents per sq cm. If it starts to make sense to produce each sq cm more cheaply (and tricks like vertical connection let us package chips small and keep connection distances low), then we will continue to see increased functionality at falling cost.
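The arithmetic behind this comment can be made concrete with a small sketch. The numbers below (cost per sq cm, transistor densities) are illustrative assumptions, not figures from the article; the point is only that if cost per unit area stays flat while density keeps improving, function per dollar keeps growing:

```python
def transistors_per_dollar(density_millions_per_mm2, cost_per_cm2=0.05):
    """Transistors purchasable per dollar of finished silicon.

    density_millions_per_mm2: assumed transistor density (millions per mm^2).
    cost_per_cm2: assumed finished-silicon cost in dollars per sq cm
                  ("a few cents per sq cm", per the comment above).
    """
    transistors_per_cm2 = density_millions_per_mm2 * 1e6 * 100  # 100 mm^2 per cm^2
    return transistors_per_cm2 / cost_per_cm2

# If a node shrink doubles density while cost per sq cm stays flat,
# functionality per dollar doubles even though the chip is no cheaper per cm^2.
base = transistors_per_dollar(10.0)    # assumed 10 M transistors/mm^2
scaled = transistors_per_dollar(20.0)  # density doubled by a shrink
print(scaled / base)  # 2.0
```

The same framing shows the flip side the comment describes: if density stalls but cost per sq cm falls, the ratio still improves, which is why cheaper silicon can substitute for smaller features.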
Just like we have had tremendous functionality and even performance growth since GHz scaling stalled a decade ago, we will continue to see functional and performance growth for a long time even if feature size stalls. The ingenuity and competition will simply shift into other dimensions.
Hi Kris, Here's a link for Gordon's paper, http://av.r.ftdata.co.uk/files/2012/08/IS-U.S.-ECONOMIC-GROWTH-OVER-FALTERING-INNOVATION-CONFRONTS.pdf
In my opinion, Gordon is way too pessimistic, and his 100-year forecast just doesn't add up. He sounds like Hansen back in the 1930s who, with the passage of time, was proven wrong as well.
let's remember our history a little better: Moore's Law is not solely responsible for getting us this far. lots of design progress has let us use those extra devices: 16- to 32- to 64-bit, on-chip caches, pipelining, out-of-order execution, even multicore (surely the least creative way to sop up the area/gates!). what's really changed is that devices/area isn't the main concern any more. faster is always better, but now power efficiency is the primary driver (not that it was ever far from the front!)
The nature of Moore's Law scaling is such that many of the proposed substitutes just don't cut it. Any incremental, and most likely one-time, gains from 3D chips or carbon nanotubes or anything like that pale in comparison to the gains in going from 180 nm to 90 nm to 45 nm, etc.
Economically this will be bad, and we will go into a tech "dark ages," as there will be few new developments to spur investment and economic growth. Simply put, why develop a new chip if the new one cannot offer any more features? It ripples from there into vast swaths of the economy. We may get a short-term bump in EE employment as the big players try to out-design each other, but in the end the gains from doing that will be minimal. I think Intel knows this, and that is why they are diversifying via their foundry ops to grab as much of the market share as possible when we get to the end.
Some of the predictions on this board seem overly dire to me. While the growth of vanilla CMOS ICs may slow somewhat, human imagination is not bound by Moore's law. There will be new technologies and new applications of existing technology to drive the economy.