As much as one may worry about chip scaling, from a networking standpoint, do we ever wonder when "enough" will be enough?
Network growth has been driven by the need to support ever-increasing complexity of data interaction, with requirements now essentially to support real-time 2D visualizations. Once you can deliver enough bandwidth to provide two-way immersive high-resolution 3D to every person, will you at that point have enough bandwidth? That is the most data that any one person can consume at any given point in time.
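As a rough back-of-envelope sketch of that ceiling (every figure here is an assumption for illustration, not a number from the comment above): take two eyes of 8K video at 120 Hz and 24 bits per pixel, with a hypothetical 100:1 codec compression ratio.

```python
# Back-of-envelope estimate of per-person bandwidth for immersive
# high-resolution 3D video. All parameters are illustrative assumptions.
PIXELS_PER_EYE = 7680 * 4320   # assumed "8K" resolution per eye
EYES = 2
FRAME_RATE_HZ = 120            # assumed refresh rate
BITS_PER_PIXEL = 24            # uncompressed RGB
COMPRESSION_RATIO = 100        # hypothetical codec efficiency

raw_bps = PIXELS_PER_EYE * EYES * FRAME_RATE_HZ * BITS_PER_PIXEL
compressed_bps = raw_bps / COMPRESSION_RATIO

print(f"Raw: {raw_bps / 1e9:.1f} Gbit/s")          # ~191 Gbit/s uncompressed
print(f"Compressed: {compressed_bps / 1e9:.2f} Gbit/s")
```

Under those assumptions, a compressed stream lands in the low single-digit Gbit/s per person, which gives a sense of where "enough" might sit for purely human consumption.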
I would be interested in others' thoughts on what will drive bandwidth beyond what it is possible for humans to consume. Machine-to-machine?
There is much bandwidth wasted today due to losses in copper between chips. I agree with Samueli that we will see a lot more optical communication between chips. That would stretch system performance for a few more years even after Moore's Law taps out.
I don't think scaling will suddenly hit a wall; I think it will be a long, slow deceleration that has already started. Intel's 14nm FinFET is a very complicated, expensive process, which seems to deliver density but no added performance and no improvement in leakage. It uses a lot of brute-force techniques like double patterning. So what are the consequences of a halt in Moore's Law? The article discussed 3D stacking and optical chip connections. Does that mean the profits of Intel and TSMC will stagnate? That software developers will shoulder the load for performance improvement?
The cost of chip fabrication is rising as we speak, with double-patterning lithography required below 20nm.
As for demand, may I remind you of Google's Project Glass and other wearable-computing initiatives coming on, as well as the trend toward IoT/M2M.
Everything is getting sensed, instrumented, stored, and analyzed. This will drive a new level of compute, storage, networking, and bandwidth needs over the next 10-15 years as our current CMOS technology sputters.
A few weeks ago, I wrote a blog for the All Programmable Planet (APP) community in which one of the main issues was how Moore's Law has started running out of gas in recent years.
It includes some graphs, based on Intel's CPU performance evolution over time, that illustrate that a speed limit has already been reached.
If anyone is interested, follow this link:
Agree with Henry. All this time it was academics who said it is over; now it is the real people who are doing the job.
It still has 10 years. Do not underestimate our younger generation; they are smarter than us, and they will come up with something, perhaps not simple CMOS.
Slow down your drinking, check your designated driver (hope he/she is sober), and cheers/salute to CMOS for all its years as a workhorse (it at least kept me going my entire career).
If you look at the design-rule specifications Taiwan Semiconductor is releasing, and the estimated 20nm and 16nm wafer prices, the Moore's Law slowdown is in full steam.
Progress will still happen, just not by moving designs to 20nm and 16nm.
What is interesting is that Broadcom's CEO must have numbers similar to those I have seen, so this is not some academic opinion. It is real data on Moore's Law.
Intel also has a cost problem. They just don't know it, since they sell $100 to $1000 CPUs. Intel has never in its 50-year history successfully competed on cost in a commodity market, and it is in for a rude awakening in mobile.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Are the design challenges the same as with embedded systems, but with some developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.