As much as one may worry about chip scaling, from a networking standpoint, do we ever wonder when "enough" will be enough?
Network growth has been driven by the need to support ever-increasing complexity of data interaction, with requirements now essentially to support real-time 2D visualization. Once you can deliver enough bandwidth for two-way, immersive, high-resolution 3D to every person, will you at that point have enough bandwidth? That is the most data that any one person can consume at any given point in time.
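As a rough sense of scale, here is a back-of-envelope sketch of that per-person ceiling. Every parameter (per-eye resolution, frame rate, compression ratio) is an illustrative assumption, not a spec:

```python
# Back-of-envelope estimate of per-person bandwidth for two-way
# immersive stereo 3D. All parameters are illustrative assumptions.

width, height = 3840, 2160   # assumed per-eye resolution (4K)
fps = 90                     # assumed frame rate for comfortable immersion
bits_per_pixel = 24          # assumed color depth
eyes = 2                     # stereo
directions = 2               # two-way (send and receive)
compression = 100            # assumed codec compression ratio

raw_bps = width * height * bits_per_pixel * fps * eyes * directions
compressed_bps = raw_bps / compression

print(f"Raw:        {raw_bps / 1e9:.1f} Gbit/s")   # ~71.7 Gbit/s
print(f"Compressed: {compressed_bps / 1e6:.0f} Mbit/s")  # ~717 Mbit/s
```

Even with aggressive compression, that works out to very roughly a gigabit per second per person, which gives some substance to the idea of a human-consumption ceiling.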
I would be interested in others' thoughts on what will drive bandwidth beyond what it is possible for humans to consume. Machine-to-machine?
There is much bandwidth wasted today due to losses in copper between chips. I agree with Samueli that we will see a lot more optical communication between chips; it would stretch system performance for a few more years even after Moore's Law taps out.
I don't think scaling will suddenly hit a wall; I think it will be a long, slow deceleration that has already started. Intel's 14nm FinFET is a very complicated, expensive process that seems to deliver density but no added performance and no improvement in leakage, and it relies on brute-force techniques like double patterning. So what are the consequences of a halt in Moore's Law? The article discussed 3D stacking and optical chip connections. Does that mean the profits of Intel and TSMC will stagnate? That software developers will shoulder the load for performance improvement?
The cost of chip fabrication is rising as we speak, with double-patterning lithography required below 20nm.
As for demand, may I remind you of Google's Project Glass and other wearable-computing initiatives coming on, as well as the trend toward IoT/M2M.
Everything is getting sensed, instrumented, stored and analyzed. This will drive a new level of compute, storage, networking and bandwidth needs over the next 10-15 years as our current CMOS technology sputters.
A few weeks ago, I wrote a blog post for the All Programmable Planet (APP) community in which one of the main issues was how Moore's Law has started running out of gas in recent years.
It includes some graphics, based on an analysis of Intel CPU performance evolution over time, illustrating that a speed limit has already been reached.
If anyone is interested, follow this link:
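For anyone who wants to reproduce the basic shape of that graphic, here is a minimal Python sketch of clock speed versus year. The data points are approximate, from memory, and purely illustrative:

```python
# A minimal sketch of the kind of plot the blog post describes:
# Intel CPU clock speed over time, showing the plateau after ~2004.
# Data points are approximate and for illustration only.
import matplotlib.pyplot as plt

chips = [
    ("8086",        1978, 0.005),  # GHz
    ("80386",       1985, 0.016),
    ("80486",       1989, 0.025),
    ("Pentium",     1993, 0.066),
    ("Pentium III", 1999, 0.5),
    ("Pentium 4",   2004, 3.8),
    ("Core 2",      2006, 2.9),
    ("Core i7",     2012, 3.5),
]

years = [year for _, year, _ in chips]
ghz = [freq for _, _, freq in chips]

plt.semilogy(years, ghz, "o-")
for name, year, freq in chips:
    plt.annotate(name, (year, freq))
plt.xlabel("Year")
plt.ylabel("Clock speed (GHz, log scale)")
plt.title("Intel CPU clock speed: exponential growth, then a plateau")
plt.show()
```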
Agree with Henry. All this time it was academics saying it was over; now it's the people actually doing the job.
It still has 10 years. Do not underestimate our younger generation... they are smarter than we are, and they will come up with something, perhaps not simple CMOS.
Slow down your drinking, check your designated driver (hope he/she is sober), and cheers/salute to CMOS for all its years as a workhorse (it at least kept me going through my entire career).
If you look at the design-rule specifications Taiwan Semiconductor is releasing, and at estimated 20nm and 16nm wafer prices, the Moore's Law slowdown is in full swing.
Progress will still happen, just not by moving designs to 20nm and 16nm.
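To make the economics concrete, here is a minimal sketch of the cost-per-transistor arithmetic behind that claim. All wafer prices and density ratios below are assumed, illustrative numbers, not actual TSMC figures:

```python
# Hedged back-of-envelope look at why moving to 20nm/16nm may not
# cut cost per transistor. All numbers are assumed and illustrative.

nodes = {
    # node: (relative transistor density vs 28nm, assumed wafer price, USD)
    "28nm": (1.0, 3000),
    "20nm": (1.9, 5700),  # density ~1.9x, but double patterning raises cost
    "16nm": (2.0, 6500),  # FinFET adds steps on a 20nm-class metal pitch
}

base_density, base_price = nodes["28nm"]

for node, (density, price) in nodes.items():
    # If wafer price grows as fast as density, shrinking stops paying.
    rel_cost = (price / base_price) / (density / base_density)
    print(f"{node}: relative cost per transistor = {rel_cost:.2f}")
```

Under these assumptions, cost per transistor stops falling at 20nm and actually rises at 16nm, which is exactly the break from the historical Moore's Law cost curve.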
What is interesting is that Broadcom's CEO must have numbers similar to the ones I have seen, so this is not some academic speculation. It is real data on Moore's Law.
Intel also has a cost problem. They just don't know it, since they sell $100 to $1,000 CPUs. Intel has never, in its 50-year history, successfully competed on cost in a commodity market, and it is in for a rude awakening in mobile.