As much as one may worry about chip scaling, from a networking standpoint, do we ever wonder when "enough" will be enough?
Network growth has been driven by the need to support the ever-increasing complexity of data interaction, with requirements now to support essentially real-time 2D visualizations. Once you can deliver enough bandwidth to provide two-way immersive high-resolution 3D to every person, will you at that point have enough bandwidth? That is the most data that any one person can consume at any given point in time.
I would be interested in others' thoughts on what will drive bandwidth beyond what it is possible for humans to consume. Machine to machine?
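A rough per-person ceiling can be sketched with a back-of-envelope calculation. Every figure below is my own illustrative assumption (resolution, refresh rate, compression ratio), not anything from the article:

```python
# Back-of-envelope: bandwidth for two-way immersive high-resolution 3D.
# All numbers are illustrative assumptions, not measured values.

width, height = 7680, 4320   # assume roughly "retina" 8K per eye
eyes = 2                     # stereo 3D
fps = 120                    # assumed refresh rate for smooth motion
bits_per_pixel = 24          # uncompressed RGB
compression = 200            # assumed modern-codec compression ratio

raw_bps = width * height * eyes * fps * bits_per_pixel
compressed_bps = raw_bps / compression

print(f"raw: {raw_bps / 1e9:.1f} Gbit/s")          # ~191 Gbit/s
print(f"compressed: {compressed_bps / 1e6:.0f} Mbit/s")  # ~956 Mbit/s
```

Even with generous assumptions, the per-person figure lands around a gigabit per second, which is finite; so if human consumption were the only driver, bandwidth demand would eventually saturate.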
I think your viewpoint is too human-centric. Yes, at some point there will be enough bandwidth to satisfy all my possible entertainment desires, but (even today) machine-to-machine bandwidth use is growing beyond that used by people. What do machines have to talk about that requires petabits? I have no idea.
Another angle to look at this is energy consumption, which has scaled down somewhat, but not enough. If a meaningful and sustainable joule/bit target, or a suitable figure of merit thereof, is not met, network bandwidth scaling will inevitably stop.
To me, the fallacy is to constrain thinking to the context of applications you know or understand. We consume resources differently when they are 'free', as memory, storage, and bandwidth now are. I have faith that new uses will consume cheap bandwidth and gates. 'Build it and they will come' is a corollary of Moore's Law.
There is much bandwidth wasted today due to losses in copper between chips. I agree with Samueli that we will see a lot more optical communication between chips. It would stretch system performance for a few more years even after Moore's Law taps out.
Most of the components have already been designed, though not together in one chip yet. Avalanche diode detectors, silicon waveguides and other optical components, a few different all-optical switching methods. Even nano-lasers smaller than the wavelength of the light are possible and have been demonstrated. The missing ingredient is a means to cheaply add a layer to create lasers, as Si is an inefficient laser medium. All-optical RAM is also an ongoing issue.
It will start with chip-to-chip comms, but will quickly move to intra-chip data busses. Eventually, ALUs, etc. will be moved to all-optical components. At some point, electronics itself may be replaced with "optronics".
I don't think scaling will suddenly hit a wall; I think it will be a long, slow deceleration that has already started. Intel's 14nm FinFET is a very complicated, expensive process, which seems to deliver density but no added performance and no improvement in leakage. It uses a lot of brute-force techniques like double patterning. So what are the consequences of a halt in Moore's Law? The article discussed 3D stacking and optical chip connections. Does that mean the profits of Intel and TSMC will stagnate? That software developers will shoulder the load for performance improvement?
If you look at the design rule specifications Taiwan Semiconductor is releasing, and the estimated 20nm and 16nm wafer prices, the Moore's Law slowdown is in full swing.
Progress will still happen, just not by moving designs to 20nm and 16nm.
What is interesting is that Broadcom's CEO must have seen numbers similar to the ones I have, so this is not just academic. It is real data on Moore's Law.
Intel also has a cost problem. They just don't know it, since they sell $100 to $1000 CPUs. Intel has never successfully competed on cost in a commodity market in its 50-year history, and is in for a rude awakening in mobile.
I think Chipmunk is right. Optical chip-to-chip connections could be very important. On-chip components are connected by intra-chip busses. If the off-chip bottleneck is removed by making the chip-to-chip bus the same speed as the intra-chip bus, then massive integration is no longer as necessary as it is now. Think of a virtual SoC spread across several chips glued together by optical interconnects.
The cost of chip fabrication is rising as we speak with double patterning litho required below 20nm.
As for demand, may I remind you of Google's Project Glass and other wearable computing initiatives, as well as the trend toward IoT/M2M.
Everything is getting sensed, instrumented, stored and analyzed. This will drive a new level of compute, storage, networking and bandwidth needs over the next 10-15 years as our current CMOS technology sputters.
A few weeks ago, I wrote a blog for the All Programmable Planet (APP) community in which one of the main issues was how Moore's Law has started running out of gas in recent years.
It includes some graphics illustrating that a speed limit has already been reached, based on an analysis of the evolution of Intel's CPU performance over time.
If anyone is interested, follow this link:
Moore's Law is not about bandwidth, clock speed, or transistor type (CMOS, NMOS etc.).
The Law is about doubling density and performance at reduced cost.
As Intel and others reach for the third dimension and explore graphene and other promising materials, Moore's Law (actually Moore's Goal) will outlive its critics and naysayers (all of whom think they are realists, but are in fact simply devoid of imagination).
Agree with Henry. All this time it was academics who said it is over, while the real people doing the job kept it going.
It still has 10 years. Do not underestimate our younger generation; they are smarter than us and they will come up with something, perhaps not simple CMOS.
Slow down your drinking, check your designated driver (hope he/she is sober), and cheers/salute to CMOS for all the years as a workhorse (it at least kept me going through my entire career).
I have seen a few presentations on quantum computing. In theory, this could replace CMOS and bring a whole new revolution to computing. Or it could be a big fat nothing like high temp superconductors and magnetic memory. It will probably be ten years before they know which one it will be.
Quantum computing replace CMOS? Kind of like Samurai Swords replacing Chocolate Cake.
Quantum computing is not a transistor construction and CMOS is not a computing device. Quantum computing may have CMOS components (almost certainly).
I've no doubt that Samueli's right as far as he goes, but current CMOS tech is not the only option. Similar predictions were made in the early 80's on the basis of optical mask limits.
There have been doomsayers about Moore's Law since the day it was formulated. They'll be right when the **investment** in scaling stops, and I don't think anyone sees that happening soon.
It wasn't too long ago that "64K of RAM is enough for any application" (and no, it wasn't Bill G. who said it in any form.) Now 64 GB of RAM is an option. 6 orders of magnitude in about 20 years...
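As a sanity check on the arithmetic (taking the poster's rough 20-year window at face value), that growth implies a doubling roughly every year, faster than the canonical two-year cadence:

```python
import math

# 64 KB to 64 GB: how many doublings, and what cadence does 20 years imply?
start_bytes = 64 * 1024       # 64 KB
end_bytes = 64 * 1024**3      # 64 GB
doublings = math.log2(end_bytes / start_bytes)
years = 20                    # the poster's rough timeframe

print(doublings)              # 20.0 doublings (2**20, about a factor of a million)
print(years / doublings)      # 1.0 year per doubling
```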
I remember way, way back when the "pundits" were talking about "hitting the wall" with the new 16Kbit RAMs (because of error rates)! The ultimate limit for current technology IS approaching this time, BUT the emphasis is on CURRENT. The limit would be at one bit in a single electron, although even that could be worked around somewhat with multi-level techniques (a la the latest flash tech). That would get close to the Heisenberg limit pretty quickly. As others above have pointed out, there are non-electronic techniques (I'm sure not everything under the sun has already been invented) that will have their own limits, but they may be orders of magnitude better than today's bleeding edge. There WILL be an ultimate limit (think about applying Shannon's theorem here). There also seems to be some confusion here between bandwidth and channel speed vs. geometric limits. That's applying the limitations of CURRENT tech and architectures to technologies unknown!
I have to wonder why we have processors with eight cores when we don't really know how to fully use two (other than AMP, which actually works, so no one will apply it). I also don't know why we can't have usage meters that show actual throughput improvement OVER a single processor, instead of "cycles CPUs were kept busy", nor why no one is championing a dev environment and new languages that not only support multithreading (and are thread-safe by design) but also the memory-management support for it that is so badly needed. But hey, this industry never lost any sleep before over knowing the king was stark naked but not daring to say anything about it...
We have processors with eight cores because we hit thermal limits which could not be overcome. Since marketing could no longer tout greater hertz, the solution was to tout multiple cores.
An interesting side note is how it is now touted that some cores can be shut down to increase the speed of a single core.
If the eight cores were actually EFFECTIVE, then you'd see applications running at eight times native speed, which DOES NOT HAPPEN, because the software architecture that could support anything even CLOSE to that still hasn't been invented (we used to refer to the "von Neumann limit" to describe the problem)! What we REALLY have is highly effective marketing "hype" and a lot of folks who have no idea what's really going on. There's really no sense putting more cores on a substrate than you can take advantage of, hence I don't see the sense of a lot of "hand wringing" that we can't get 32 CPUs on a chip when we're still learning how to fully use 2.
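The gap between core count and delivered speedup is captured by Amdahl's law. A quick sketch (the 90% parallel fraction is an invented illustrative number):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup is limited by the serial fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even a program that is 90% parallelizable gets nowhere near 8x on 8 cores.
for n in (2, 4, 8):
    print(n, round(amdahl_speedup(0.9, n), 2))  # 2 -> 1.82, 4 -> 3.08, 8 -> 4.71
```

At 90% parallelizable, eight cores deliver under 5x, and the curve flattens fast; with a 10% serial fraction, no number of cores can ever exceed 10x.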
If my quick web search is accurate, a silicon crystal's lattice is 5.43 Angstroms wide, or 0.543 nanometers. At the 5 nanometer node, this means 5 nanometer transistor junctions are 5 nm / 0.543 = 9.2 silicon atoms (lattice lengths) wide. At this point, it seems like quantum effects (tunneling?) could become critical to whether the CMOS transistor would even work, even if the manufacturing is possible. Below 5 nm, any such issues would seem to get worse rather quickly.
I'm not a semiconductor scientist, but I'm curious what others may have to teach me here.
Are carbon-based (graphene?) transistors the more plausible way forward?
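The atom-counting above extends to other nodes with the same quick arithmetic (using the ~0.543 nm silicon lattice constant from my search; treat the figures as rough):

```python
# Process node width expressed in silicon lattice constants (~0.543 nm each).
LATTICE_NM = 0.543

for node_nm in (22, 14, 10, 5):
    print(f"{node_nm} nm ~= {node_nm / LATTICE_NM:.1f} lattice constants")
```

The 5nm node is only about nine lattice constants across, which is why tunneling and other quantum effects stop being second-order corrections down there.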
Hopefully Moore's law will get us to the Singularity and then the machines will figure out what to do next. And maybe keep us around as pets, we are kind of entertaining after-all. Not engineers but regular people. ;)
Scaling will continue at some rate forever. And "more than Moore" approaches (like using graphene and 3D structures) may improve device performance over time as well. However, the strict definition of Moore's Law says we will double the number of transistors every two years. Right now, IC technology leader Intel is transitioning from the 22nm to the 14nm node. Will they make it in two years? Will the number of transistors actually double? And how many different designs will actually be manufactured at this node within the next two years? I would submit that Moore's Law is likely "breaking" as I write this post, or is already broken.
The annual prediction of the end of "Moore's Law" is part of what motivates the extraordinary efforts to creatively push the inevitable a little further down the track. Semiconductor management doesn't want the fall of "Moore's Law" to occur on their watch. The "law" has become a driving force for innovation - a self-fulfilling prophecy.
Am I missing something here? I'm no expert on this, but (as the article states) Moore's Law relates to the NUMBER of transistors on a chip, not their size. Size is what has driven it thus far, but it seems we're only seeing the beginning of 3D chips. Double the number of layers every couple of years and you'll keep it going a while longer?
Folks, Moore's "Law" isn't, and never was, a physical law, like Newton's Laws, or Ohm's Law. It merely described an historical trend that persisted for a surprisingly long time. Eventually it runs up against practical or even theoretical limits.
Though CMOS will still live on longer, at some point I will largely agree with Mr. Samueli. Materials to supplement silicon (or silicon in different forms?) are inevitable. And as someone mentioned, architecture now has a bigger stake in making performance enhancements in chips.
Plenty of pundits have predicted the end of CMOS scaling before, but rarely veteran executives of well established chip vendors with deep technical understanding.
Hmmm. May I remind you of the dire predictions from all sorts of deep technical experts at IEDM in the late '80s and early '90s about the 1um wall? Something about DUV resists not being transparent and sensitive enough. And then the 0.25um wall, because of diffraction. Somehow people managed to print 40nm with 193nm light.
As Moore himself said no exponential is forever, but this one has managed to last a lot longer than anyone expected. The end is always 10 years away. Some day that prediction will come true and the person who made it will be declared a genius. In reality they will be one of many who made such predictions, but just got lucky on the timing.
I've been hearing gloom and doom projections about the demise of Moore's Law for a least a decade, and I'm reduced to yawning.
We are already seeing similar issues in another area: we appear to have reached practical limits on the clock speed at which CPUs can be run, so current development substitutes larger address space from 64 bit chips and multi-core chips with parallel processing to get increased performance.
The issues I see aren't technological, they're financial. Steadily shrinking process geometries require increasingly expensive facilities to make the components. We've been seeing steady consolidation in foundries, and an increasing move to "fabless" semiconductor operations, because fabs have become so enormously expensive that very few outfits can afford to build them, and we're seeing increasing joint ventures to spread costs, even among those who can.
The fundamental question to me isn't "*Can* you do it?", but rather "Can you *afford* to do it?" There are all manner of things that are theoretically possible, but simply cost too much to do, and I think we are approaching that area here.
Let's say we can take Moore's Law only 15 years forward. But in less than 15 years of silicon, we went from IBM's chess-playing machine to IBM Watson.
In 15 years from today, we might be able to do most of the important things we want to do with silicon at a reasonable cost. Not everything we want, but most of it. If so, to hell with Moore's Law.
Let's be realistic: Moore's Law was made with respect to silicon and similar semiconductor materials. If we manage to shift to an altogether different computing mechanism, then Moore's Law is not what we should be speaking of. It should be about performance vs. cost.
So, pragmatically speaking, Moore's Law has been decaying since the days gate leakage went through the roof. It will end, be realistic, but that doesn't mean the end of computing.
For people who asked about graphene: I put this same question to a friend of mine who is a post-doctoral researcher on graphene at my university. According to him, graphene transistors are not practical; graphene is only good for interconnects (he claims copper interconnects will be replaced) and has some applications in making sensors. In short, don't count on graphene. :)
Even Gordon Moore has expanded his "law" to other interests. We should follow and explore the next application of Moore's Law, that of DNA sequencing: http://www.eetimes.com/electronics-news/4400684/Moore-s-Law-goes-biotech
What Moore's "Law" originally said was that the number of components that will fit on one chip *at minimum cost per component* doubles every two years (or whatever time).
There's an optimum die size at each process node that minimises cost per gate, depending on density and yield. Cost per gate then depends on wafer cost, which double/triple patterning drives through the roof.
So Moore's Law is *already* dead at 20nm: the cost projections, even over several years, show that cost per gate never falls below the 28nm level, and the same goes for 14nm.
The whole industry has been based round the fact that the next process delivers more function for the same price, as well as lower power and higher speed. Once the economic reason to move disappears only power and speed are left, and the improvements in these are slowing down.
Yes it's technically possible to use quadruple patterning to do 10nm, but I don't think anyone thinks it makes sense economically. Without EUV or direct-write (still coming Real Soon Now, like for the last 10 years) it's difficult to see what such processes could be used for -- maybe a few high-margin products (e.g. Intel, Apple) which need billions of transistors or the lowest possible power and are willing to pay a premium for this, but not the vast majority of chips where cost-per-gate is key.
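The cost-per-gate argument can be sketched as a toy model. Every number below is an invented placeholder, not a real foundry figure; the point is only the mechanics: if wafer cost (driven by multi-patterning) rises faster than usable density, cost per gate goes up at the newer node:

```python
# Toy cost-per-gate model. All inputs are invented placeholders,
# not real foundry data; only the mechanics matter here.
def cost_per_mgate(wafer_cost: float, mgates_per_wafer: float,
                   yield_frac: float) -> float:
    """Cost per million *good* gates from one wafer."""
    return wafer_cost / (mgates_per_wafer * yield_frac)

# Mature node: cheaper wafer, lower density, high yield.
node_28 = cost_per_mgate(wafer_cost=5000, mgates_per_wafer=100_000, yield_frac=0.85)
# New node: denser, but double patterning inflates wafer cost and hurts yield.
node_20 = cost_per_mgate(wafer_cost=9000, mgates_per_wafer=150_000, yield_frac=0.75)

print(f"28nm: ${node_28:.4f}/Mgate, 20nm: ${node_20:.4f}/Mgate")
```

With these (made-up) inputs the denser node comes out *more* expensive per gate, which is exactly the inversion that kills the economic incentive to move.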
So what if Moore's Law does expire? It doesn't mean the end of semiconductors or improved processing; it just means it won't be coming from simply shrinking a silicon die. I think the economics will run in a different direction before the physical barriers are hit, but that is just my opinion, and most of the above are just others' opinions; there are no facts in this yet (well, except maybe for Javier's piece, worth reading).