- The last node with classical (Dennard) scaling was 130 nm. Beyond that, the device itself had to change.
- The last node where scaling reduced the price per transistor appears to be 28 nm.
What exactly constitutes Moore's Law? It's definitely possible to go beyond 28 nm. On the other hand, it is certainly also possible to introduce cost-reduced variants of the 28 nm node to drive the economics further.
Survey bias can occur even with sophisticated respondents. I would like to hear the arguments from those who think Moore's Law is already dead at 28 nm. If there were no economic advantage to going smaller than 28 nm, then why did anyone bother to do it and make those huge investments?
28 nm is the last node of Moore's Law. I wrote a full-length blog on why this is so, based on the available open information: <http://www.eetimes.com/author.asp?section_id=36&doc_id=1321536&>
As to the question of why people are still going for 20 nm and 14 nm, I don't have a good answer. Some justify it by the lower power and higher speed those nodes provide. Moore's Law is strictly about lower cost, and that stops at 28 nm.
There is too much association with lithography, and that is definitely hitting a wall abruptly, as now even EUV would require (at least) double patterning. Moore's Law in the product-functionality sense could go on, enabled by other technologies. But we need to free ourselves of the yoke of scaling silicon.
I'm no expert, but (22/14)**2 works out to about 247%, meaning you can pack roughly 2.5x more transistors on the same 300 mm wafer; so even with a 20% higher processing cost per wafer, isn't there still a very respectable margin?
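The arithmetic in the comment above can be checked in a few lines. The node names (22 nm and 14 nm) and the 20% figure come from the comment; the reading of the 20% as a per-wafer cost increase is an assumption for illustration, since real wafer costs vary by foundry and are not public.

```python
# Sketch of the density-vs-cost arithmetic from the comment above.
# Assumes an ideal linear shrink and a hypothetical 20% higher
# processing cost per wafer; both figures are illustrative only.

def density_gain(old_nm: float, new_nm: float) -> float:
    """Ideal transistor-density gain from a linear feature shrink."""
    return (old_nm / new_nm) ** 2

gain = density_gain(22, 14)            # ~2.47x transistors per wafer
wafer_cost_factor = 1.20               # assumed 20% costlier wafer
relative_cost_per_transistor = wafer_cost_factor / gain

print(f"density gain: {gain:.2f}x")
print(f"relative cost per transistor: {relative_cost_per_transistor:.2f}")
```

Under these idealized assumptions the cost per transistor roughly halves, which is the "respectable margin" the commenter has in mind; the counterargument elsewhere in the thread is that real wafer costs at advanced nodes rise far more than 20%.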
When I was interviewed by a semiconductor company in December 1999, their stock was increasing rapidly. I was the only one to raise my hand and ask: "We're all engineers here. We know that physically, these exponential phenomena end up tapering at some point. When do you think this will end?" Their answer was never, and the stock market's answer was less than a month later (in fact, it took a nice dive).
I seriously think that the trend will simply decelerate as the physical complexities get in the way. People have been very smart, so I don't expect the current challenges to prevent improvements, but I also do think that it will slow the pace down.
There are some theoretical limits - at least within the bounds of physics as we know it today. Thermodynamics sets limits on the rate of computation, while quantum mechanics limits how fast information can be conveyed into and out of a computer.
Scientific American discussed this in 2011, referring to the 1982 paper by Charles Bennett (no relation), which gives the thermodynamic basis of computation but makes no prediction about the end of Moore's Law. A more recent paper on the thermodynamic aspects reportedly suggests Moore's Law has 60-80 years left.
This 2000 paper by Seth Lloyd, considering the limits imposed by the speed of light, the quantum scale, and the gravitational constant, is more optimistic, suggesting we have up to 250 years of Moore's Law left.
"Moore's Law" isn't a law at all. It's a prediction. Ohm's law is a real law in that it has been proven and so far is irrefutable. I would even say that laws enacted by governments aren't really laws because they are refutable by the courts and can be reversed. Those "laws" are more like rules.