"Limits related to energy and power appear much looser and leave more room for improvement," says Markov. "And there are powerful ways to circumvent the assumptions behind these limits, which raises further research questions."
According to Markov, there are dozens of limits that are, in fact, insurmountable but so loose that they may never affect computing at all — physics principles like the Planck length (which sets the fundamental limit on measurement), the Bekenstein bound (the maximum amount of information needed to describe a physical system down to the quantum level), and the Schwarzschild radius (the radius below which a given mass collapses into a black hole).
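To get a feel for just how loose such limits are, here is a rough sketch (our illustration, not from Markov's paper) that evaluates the Bekenstein bound for an arbitrarily chosen system — a 1 kg mass inside a 1 m sphere:

```python
import math

# Sketch: the Bekenstein bound I <= 2*pi*R*E / (hbar * c * ln 2), in bits.
# The example system (1 kg in a 1 m radius sphere) is an assumption chosen
# only to show the scale of the limit.

hbar, c = 1.054571817e-34, 2.99792458e8   # J*s, m/s

def bekenstein_bits(radius_m, mass_kg):
    energy = mass_kg * c**2               # rest-mass energy, E = m c^2
    return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

print(f"~{bekenstein_bits(1.0, 1.0):.1e} bits")   # ~2.6e43 bits
# Dozens of orders of magnitude beyond any real storage device, which is
# why a limit can be insurmountable yet never constrain actual computers.
```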
"Some are relevant, but have some gotchas. For example, P!=NP" — whether problems whose solutions can be quickly verified by a computer can also be quickly solved by a computer. (P!=NP is one of the seven Millennium Prize Problems with a $1 million prize for the first correct solution.)
Biological computational models like the brain distribute memory (synapses) among the computing nodes (neurons) and are interconnect-limited, just as modern chips are. The brain compensates by being much more energy efficient — using slower switching speeds, low supply voltages, liquid cooling, and a very different power network.
(Source: Human Connectome Project)
As Markov told us:
But P!=NP is conjectured, not proven, and is focused on worst-case behavior. So, one can get around it through domain-specific computing and application-specific optimizations. Comparing Amdahl's Law [which predicts the theoretical maximum speedup using multiple processors] and Gustafson's Law [which holds that computations involving arbitrarily large datasets can be efficiently parallelized] in parallel computing, and observing ongoing industry developments, one can see a type of natural selection: the survival of the applications fittest for parallelism and those less affected by fundamental limits.
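The contrast between the two laws is easy to see numerically. A brief sketch (our illustration; the parallel fraction p = 0.95 is an assumed value, not a measurement):

```python
# Sketch: Amdahl's fixed-size speedup vs. Gustafson's scaled speedup.

def amdahl(p, n):
    """Speedup on n processors when the problem size stays fixed."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson(p, n):
    """Speedup on n processors when the problem grows with n."""
    return n - (1.0 - p) * (n - 1)

p = 0.95  # assumed parallel fraction of the workload
for n in (8, 64, 1024):
    print(f"n={n:5d}  Amdahl={amdahl(p, n):7.1f}  Gustafson={gustafson(p, n):7.1f}")
# Amdahl saturates near 1/(1-p) = 20x no matter how many processors you
# add; Gustafson keeps growing with n — which is why applications that
# can scale their datasets are the ones that "survive" parallelism.
```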
Other limits affecting chip building are not directly related to the size of devices, but to how they can be more efficiently interconnected — called the "tyranny of interconnect" by Markov.
Based on the speed of light, minimal physical size of a computing element, and the number of available dimensions, it bounds the speed-up you can get from parallel computing if you pack computing elements into the available space, and shows that many promises in parallel computing are unattainable if you happen to live in two or three dimensions. But it also shows that you can do more in three dimensions than in two dimensions, and the improvement is asymptotic. The important part is that stacking 2D layers does not give you such an improvement — you have to scale in the third dimension, just like you scale in previous dimensions.
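A back-of-the-envelope way to see the dimensional argument (our sketch, not Markov's derivation): if n computing elements of fixed minimum size are packed as densely as possible in d dimensions, the enclosing region has radius on the order of n^(1/d), so a light-speed signal needs time proportional to n^(1/d) to cross the machine.

```python
# Sketch: worst-case communication distance vs. dimension, assuming n
# unit-size elements packed densely; the enclosing radius grows like
# n**(1/d), and so does the light-speed bound on cross-machine latency.

for n in (10**3, 10**6, 10**9):
    r2 = n ** (1 / 2)   # planar (2D) packing
    r3 = n ** (1 / 3)   # volumetric (3D) packing
    print(f"n={n:>13,}  2D radius ~ {r2:10.0f}  3D radius ~ {r3:8.0f}")
# Going from 2D to 3D shrinks the distance from n^(1/2) to n^(1/3): an
# asymptotic gain. Stacking a constant number of 2D layers only changes
# the constant factor, not the exponent.
```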
Another aspect of speeding up computations is new materials, which can make interconnects faster and more energy efficient.
For example, carbon nanotube transistors provide greater drive strength. Even with conventional metal wires, this can simplify interconnect buffering, reduce wire delay, and decrease both the energy consumption and the footprint of the entire circuit. On the other hand, fundamental limits tend to apply equally to new and existing technologies, so it is important to understand them before promising a new revolution in power, performance, etc.
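A crude first-order model shows why drive strength matters even when the wires stay the same. The sketch below uses the standard Elmore-style delay approximation; all parameter values are invented for illustration, not real process data:

```python
# Sketch: 50% delay of a driver charging a distributed RC wire,
# t ~ 0.69 * R_drv * C_wire + 0.38 * R_wire * C_wire (Elmore coefficients).
# All numbers below are illustrative assumptions.

def wire_delay(r_drv, r_wire, c_wire):
    """Driver-limited term plus the distributed wire RC term."""
    return 0.69 * r_drv * c_wire + 0.38 * r_wire * c_wire

r_wire, c_wire = 2e3, 200e-15          # assumed 2 kOhm, 200 fF metal wire
for r_drv in (10e3, 2e3):              # weak vs. strong (e.g., CNT) driver
    t = wire_delay(r_drv, r_wire, c_wire)
    print(f"R_drv = {r_drv/1e3:4.0f} kOhm -> delay ~ {t*1e12:6.0f} ps")
# A 5x stronger driver cuts the driver-limited term 5x, which can make
# intermediate repeater buffers unnecessary and shrink energy and area.
```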
Another approach that may sidestep Moore's Law is emulating natural systems, which often seem to work better than engineered semiconductors despite operating under constraints at odds with Moore's Law.
Biological systems are also subject to fundamental limits. For example, we know that human brain connectivity is 3D and that individual "devices" are quite big and slow. Just like modern integrated circuits, brains are interconnect-limited. The brain is, however, much more energy efficient: it uses lower switching speeds, low supply voltages, liquid cooling, and a very different power network. It also needs to rest and chemically clean itself — the significance of this for computing is unclear.
We also know that the brain is very disappointing as a general-purpose computer. It can't multiply many 64-bit numbers per second, can't copy stored information in bulk, and can't think a hundred thoughts at once (texting while driving is illegal for that reason). However, the brain is a great multimedia processor, handles uncertainty well, and is capable of intuition, creativity, and other types of high-level reasoning. Figuring out how this is done leaves researchers more than enough work.
In the end, Markov suggests that when a specific limit is approached, the key to circumventing it is understanding its assumptions. For example, the International Technology Roadmap for Semiconductors (ITRS) should add the analysis of limits to make its predictions more accurate. After all, the ITRS initially predicted that the 45 nanometer node would run at 10 GHz, a blunder that Markov suggests could have been avoided by paying attention to the fundamental limits on energy resources, power dissipation, and energy waste.
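The power-dissipation side of that blunder can be made concrete with the standard dynamic-power relation P ≈ α·C·V²·f. The worked numbers below are our illustrative assumptions, not ITRS figures:

```python
# Sketch: dynamic switching power P = alpha * C * V^2 * f.
# All parameter values are illustrative assumptions.

def dynamic_power(alpha, c_switched, vdd, freq):
    """Dynamic power of the chip's switched capacitance at a given clock."""
    return alpha * c_switched * vdd**2 * freq

alpha, c, vdd = 0.1, 100e-9, 1.0   # activity factor, ~100 nF switched, 1.0 V
for f in (3e9, 10e9):
    print(f"f = {f/1e9:4.1f} GHz -> P ~ {dynamic_power(alpha, c, vdd, f):5.1f} W")
# 3 GHz -> ~30 W; 10 GHz -> ~100 W before leakage, for the same assumed
# chip and voltage. With supply voltage no longer scaling down, an
# air-cooled power budget caps the clock — the "power wall" that kept
# real 45nm parts near 3-4 GHz instead of 10 GHz.
```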
— R. Colin Johnson, Advanced Technology Editor, EE Times
Remember, those early days had problems not just with lithography. They were learning a lot about materials, too, and scaling down produced multiple changes in the understanding and behavior of devices. In 1975, I'm not sure they had even discovered the problem of sodium traces in equipment poisoning the integrated circuits. They had to learn to grow uniform thin layers of many materials, change to copper wires (copper is also tricky if it gets in the wrong places), invent new kinds of insulators, investigate various impurity regimes, etc. Every generation of shrink has been a broad learning problem; it is not simply optics.
2. At the early stages (1975-1997), when research was cheaper than it is today, it might have been the economic tradeoff, or it might have been the regulating effect of Moore's law. We'd need to dig deeper to know which is which. But one hint that we could have done better is the fact that there were once 20 companies all keeping up with Moore's law, versus today's few companies struggling to keep up with the law, being late, etc.
3. Yes, I do believe we could have built useful stuff earlier. If you look at the applications people had at the research level in 1975 or earlier, at least in basic form, you'll see lots of the stuff we use today: the computer mouse, windows, printers, games, personal computers, 3D CAD, simulation, high-level languages. I'm sure visionary people at the time saw the potential. Surely more transistors would have greatly helped.
As to the question of how: at the time they had the IBM System/360, so I guess they could have managed to design an interesting chip with a few million transistors (which fits a 250nm process), even if it were mostly lots of memory (maybe with a basic cache), wider buses, fast transistors, and floating-point ALUs — stuff known at the time. And reasonable characterization of transistors seems to have been possible then.
1) It is definitely an observation, not a law (like Boyle's law) or a theory (like evolution).
2) It is a tradeoff between economics and technology, i.e., essentially balancing 'how much better do we need to make something in order to sell it?' against 'how much will it cost to make that change?', with the liberal complication of how long it takes to make the change versus where market expectations will be by then.
3) So yes, we could have stepped processes at a faster pace, BUT the cost of the steps would have been too high to justify taking them. Who needed a microprocessor with a billion transistors in the 1970s, for instance? If no one needed it, there would have been no point in acquiring the technology necessary to produce it at that time. Alternatively, if you produced a 4004 on a modern process, it would be so tiny that this would itself cause problems; for instance, it would not be able to drive any sensible load at the 15V it was supplied with.
4) Look at the costs of going to larger wafers today — we are delaying the move to 450mm because the incremental cost is too high — and to an outsider that change looks relatively simple, but it means redesigning every machine that carries the wafers.
Zvi, I want to ask you a historical question as a semi-expert:
I read somewhere that in the '50s we could have had 250nm/180nm — it wasn't that far off technically. But Moore came along, set the pace of the industry with his law, and we only got 250nm in '97. Does that make sense?