Let's say we can take Moore's law only 15 years forward. But in less than 15 years of silicon, we got from IBM's chess-playing machine to IBM Watson.
In 15 years from today we might be able to do most of the important stuff we want to do with silicon at a reasonable cost. Not all we want, but most of it. If so, the hell with Moore's law.
If the eight cores were actually EFFECTIVE, then you'd see applications running at eight times native speed. That DOES NOT HAPPEN, because the software architecture that could support anything even CLOSE to that still hasn't been invented (we used to refer to the "von Neumann limit" to describe the problem)! What we REALLY have is highly effective marketing "hype" and a lot of folks who have no idea what's really going on. There's really no sense putting more cores on a substrate than you can take advantage of, hence I don't see the point of a lot of "hand wringing" that we can't get 32 CPUs on a chip when we're still learning how to fully use 2.
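The limit this commenter is gesturing at is usually formalized as Amdahl's law: the serial fraction of a program caps the speedup no matter how many cores you add. A minimal sketch (the 90%-parallel figure below is an assumed example, not a number from the thread):

```python
# Amdahl's law: the speedup from n cores when only a fraction p of
# the work can actually run in parallel. The serial part (1 - p)
# never gets faster, so it dominates as n grows.
def amdahl_speedup(p, n):
    """Speedup on n cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a program that is 90% parallel gets well under 8x on 8 cores:
print(round(amdahl_speedup(0.90, 8), 2))   # ~4.71, not 8
# And its speedup is capped at 10x no matter how many cores you add:
print(round(amdahl_speedup(0.90, 10**6), 2))
```

This is why eight cores never deliver "eight times native speed" for ordinary applications: the speedup converges to 1/(1 - p) as cores are added, and the hard part is raising p, not adding cores.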
To me, the fallacy is to constrain thinking to the context of applications you know or understand. We consume resources differently when they are 'free', as memory, storage, and bandwidth now are. I have faith that new uses will consume cheap bandwidth and gates. "Build it and they will come" is a corollary of Moore's Law.
I've been hearing gloom and doom projections about the demise of Moore's Law for at least a decade, and I'm reduced to yawning.
We are already seeing similar issues in another area: we appear to have reached practical limits on the clock speed at which CPUs can be run, so current development substitutes the larger address space of 64-bit chips and multi-core chips with parallel processing to get increased performance.
The issues I see aren't technological, they're financial. Steadily shrinking process geometries require increasingly expensive facilities to make the components. We've been seeing steady consolidation among foundries, and an increasing move to "fabless" semiconductor operations, because fabs have become so enormously expensive that very few outfits can afford to build them, and we're seeing increasing joint ventures to spread costs, even among those who can.
The fundamental question to me isn't "*Can* you do it?", but rather "Can you *afford* to do it?" There are all manner of things that are theoretically possible, but simply cost too much to do, and I think we are approaching that area here.
We have processors with eight cores because we hit thermal limits that could not be overcome. Since marketing could no longer tout greater hertz, the solution was to tout multiple cores.
An interesting side note is how it is now touted that some cores can be shut down to increase the speed of a single core.
Plenty of pundits have predicted the end of CMOS scaling before, but rarely veteran executives of well established chip vendors with deep technical understanding.
Hmmm. May I remind you of the dire predictions from all sorts of deep technical experts at IEDM in the late 80s and early 90s about the 1um wall? Something about DUV resists not being transparent and sensitive enough. And then the 0.25um wall because of diffraction. Somehow people managed to print 40nm with 193nm light.
As Moore himself said no exponential is forever, but this one has managed to last a lot longer than anyone expected. The end is always 10 years away. Some day that prediction will come true and the person who made it will be declared a genius. In reality they will be one of many who made such predictions, but just got lucky on the timing.