It is not a problem, not a problem at all, if Moore's law comes to a halt. We already waste most of the real estate on an IC whenever it is in operation: less than 1% of the chip is switching at any given moment. Moore's law can end tomorrow, and we will not be in trouble if we find a way to switch twice as much of the real estate on the chip every year, by improving CMOS power efficiency and processor architecture. Switching more real estate per nanosecond continues the increase in the computational performance of the chip. We can potentially do that for another two decades after Moore's law is exhausted, so there is always reason for optimism!
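A back-of-the-envelope sketch, in Python, of the doubling arithmetic this comment alludes to; the 1% starting utilization is an assumed illustrative figure, not a measurement:

import math

start_fraction = 0.01  # assumed: roughly 1% of the die switching at any moment
doublings_available = math.log2(1.0 / start_fraction)
print(f"~{doublings_available:.1f} yearly doublings before the whole die is switching")
# Roughly 6-7 years from doubling the switched area alone; stretching the gains
# toward two decades, as suggested above, would also lean on the per-switch
# energy reductions (CMOS power efficiency, architecture) the comment mentions.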
I agree that 'human imagination is not bound ...' But I think there is a fundamental lack of understanding of 1) how important ML has been to the entire world economy, and 2) there is wishful thinking that something will replace it and things will continue on indefinitely. There is a lot of denial. I would be so bold as to say there is not one 'new technology' or application that could have happened in the last 50 years without ML. That statement requires a lot of deep thought, but I believe you will find it's true.
On the dot. Plus, it should be considered that since 10 nm is already in the shot-noise regime, the required power for a given throughput and resolution will be inversely proportional to the wavelength. Between particle-counting noise and wave-diffraction limits, we have already passed the sweet spot for single optical projection exposure.
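A minimal sketch of the proportionality claimed here: with a shot-noise-limited (fixed) photon count per resolution element and fixed throughput, exposure power scales with the photon energy E = hc/lambda, i.e. inversely with wavelength. The 193 nm (ArF) and 13.5 nm (EUV) wavelengths below are illustrative choices, not taken from the comment:

H_C_EV_NM = 1239.84  # h*c in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    return H_C_EV_NM / wavelength_nm

for wl in (193.0, 13.5):
    print(f"{wl:6.1f} nm -> {photon_energy_ev(wl):6.2f} eV per photon")

# The same photon count at 13.5 nm costs ~14x the energy (and power, at fixed
# exposure time) of 193 nm, consistent with power ~ 1/wavelength.
print(f"power ratio ~{photon_energy_ev(13.5) / photon_energy_ev(193.0):.1f}x")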
Some of the predictions on this board seem overly dire to me. While the growth of vanilla CMOS ICs may slow somewhat, human imagination is not bound by Moore's law. There will be new technologies and new applications of existing technology to drive the economy.
The nature of Moore's Law scaling is such that many of the proposed substitutes just don't cut it. Any incremental, and most likely one-time, gains from 3D chips or carbon nanotubes or anything like that pale in comparison to the gains in going from 180 nm to 90 nm to 45 nm and so on.
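A quick sketch of the scaling comparison being made: ideal transistor density goes as the inverse square of the linear feature size, so each full node step roughly quadruples density, whereas a 3D-stacking or new-material win is taken here, purely as an assumed illustration, to be a one-time ~2x:

nodes_nm = [180, 90, 45]

# each full-node shrink roughly quadruples transistor density
for prev, nxt in zip(nodes_nm, nodes_nm[1:]):
    print(f"{prev} nm -> {nxt} nm: ~{(prev / nxt) ** 2:.0f}x density")

print(f"cumulative 180 nm -> 45 nm: ~{(nodes_nm[0] / nodes_nm[-1]) ** 2:.0f}x")
print("assumed one-time gain from e.g. 3D stacking: ~2x")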
Economically this will be bad, and we will go into a tech "dark ages" as there will be few new developments to spur investment and economic growth. Simply put, why develop a new chip if the new one cannot offer any more features? It ripples from there into vast swaths of the economy. We may get a short-term bump in EE employment as the big players try to out-design each other, but in the end the gains from doing that will be minimal. I think Intel knows this, and that is why they are diversifying via their foundry ops to grab as much of the market share as possible when we get to the end.
Let's remember our history a little better: ML is not solely responsible for getting us this far. Lots of design progress has let us use those extra devices: 16- to 32- to 64-bit, on-chip caches, pipelining, OOO execution, even multicore (surely the least creative way to sop up the area/gates!). What's really changed is that devices per area isn't the main concern any more. Faster is always better, but now power efficiency is the primary driver (not that it was ever far from the front!)