Quantum computing replacing CMOS? That's kind of like samurai swords replacing chocolate cake.
Quantum computing is not a transistor technology, and CMOS is not a computing paradigm. A quantum computer will almost certainly include CMOS components.
Moore's Law is not about bandwidth, clock speed, or transistor type (CMOS, NMOS, etc.).
The Law is about doubling density, which delivers higher performance at lower cost.
As Intel and others reach into the third dimension and explore graphene and other promising materials, Moore's Law (really Moore's goal) will outlive its critics and naysayers (all of whom think they are realists but are in fact just devoid of imagination).
If my quick web search is accurate, silicon's lattice constant is 5.43 Angstroms, or 0.543 nm. At the 5 nm node, that means a 5 nm transistor junction is 5 / 0.543 ≈ 9.2 silicon lattice constants wide. At that point, it seems like quantum effects (tunneling?) could become critical to whether the CMOS transistor would even work, even if the manufacturing were possible. Below 5 nm, any such issues would seem to worsen rather quickly.
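The back-of-envelope arithmetic above can be checked in a few lines of Python. The 5.43 Å lattice constant is the figure cited in the comment; note that modern "5 nm" is a marketing node name rather than a literal junction width, so this is a rough sketch, not a real device dimension.

```python
# Numbers taken from the comment itself; "5 nm" is treated as a literal
# feature width for the sake of the estimate.
A_SI_NM = 0.543   # silicon lattice constant, nanometers (5.43 Angstroms)
NODE_NM = 5.0     # nominal "5 nm" node

lattice_widths = NODE_NM / A_SI_NM
print(f"A 5 nm junction spans about {lattice_widths:.1f} lattice constants")
# -> about 9.2, matching the comment's figure
```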
I'm not a semiconductor scientist, but I'm curious what others may have to teach me here.
Are carbon-based (graphene?) transistors the more plausible way forward?
I have to wonder why we need processors with eight cores when (other than AMP, which actually works, so no one will apply it) we don't really know how to fully use two. I also don't know why we can't have usage meters that show actual throughput improvement OVER a single processor instead of "cycles the CPUs were kept busy", or why no one is championing a dev environment and new languages that not only support multithreading (and are thread-safe by design) but also provide the memory-management support that is so badly needed. But hey, this industry has never lost any sleep over knowing the king was stark naked while not daring to say anything about it...
I think your viewpoint is too human-centric. Yes, at some point there will be enough bandwidth to satisfy all my possible entertainment desires, but (even today) machine-to-machine bandwidth use is growing beyond what people use. What do machines have to talk about that requires petabits? I have no idea.
I remember way, way back when the "pundits" were talking about "hitting the wall" with the new 16 Kbit RAMs (because of error rates)! The ultimate limit for current technology IS approaching this time, BUT the emphasis is on CURRENT. The limit would be one bit per electron, although even that could (a la the latest flash processes) be worked around somewhat with multi-level techniques. That would get close to the Heisenberg limit pretty quickly. As others above have pointed out, there are non-electronic techniques (I'm sure not everything under the sun has already been invented) that will have their own limits, but they may be orders of magnitude better than today's bleeding edge. There WILL be an ultimate limit (think about applying Shannon's theorem here). There also seems to be some confusion here between bandwidth and channel speed versus geometric limits. That's applying the limitations of CURRENT tech and architectures to technologies unknown!
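To make the Shannon point concrete: the Shannon-Hartley theorem bounds any channel's capacity at C = B * log2(1 + S/N), regardless of the technology implementing it. A minimal sketch, with the 1 GHz bandwidth and 30 dB SNR figures chosen purely as illustrative assumptions:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: max error-free bits/second for a channel
    of the given bandwidth and linear signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Illustrative example: a 1 GHz channel at 30 dB SNR (linear SNR = 1000).
c = shannon_capacity(1e9, 1000.0)
print(f"Capacity: {c / 1e9:.2f} Gbit/s")  # about 9.97 Gbit/s
```

No architecture or device trick can exceed this bound for a given bandwidth and noise floor, which is what makes it an "ultimate limit" in the sense the comment means.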
I've no doubt that Samueli's right as far as he goes, but current CMOS tech is not the only option. Similar predictions were made in the early '80s on the basis of optical mask limits.
There have been doomsayers about Moore's Law since the day it was formulated. They'll be right when the **investment** in scaling stops, and I don't think anyone sees that happening soon.
It wasn't too long ago that "64K of RAM is enough for any application" (and no, it wasn't Bill Gates who said it, in any form). Now 64 GB of RAM is an option. Six orders of magnitude in about 20 years...
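That "six orders of magnitude" figure is easy to verify, and it also translates into a doubling rate, which is the currency Moore's Law is usually quoted in:

```python
import math

# 64 KB -> 64 GB is a factor of one million.
factor = 64e9 / 64e3
orders_of_magnitude = math.log10(factor)   # 6.0
doublings = math.log2(factor)              # about 19.9

print(f"{orders_of_magnitude:.0f} orders of magnitude, "
      f"{doublings:.1f} doublings")
# Over ~20 years, that is roughly one doubling per year.
```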