Samueli said he has briefed customers that prices for leading-edge chips will increase starting with the 20 nm generation, due to rising fabrication costs. Market watcher Gartner Inc. recently estimated the average 45,000-wafer/month fab could pay a premium of about $500 million per process node due to the need to use two or more lithographic exposures to etch finer lines.
Stacking chips into so-called 3-D ICs promises a one-time boost in their capabilities, “but it’s expensive,” said Samueli. Broadcom expects to use 3-D stacks to add a layer of silicon photonics interconnects to its high-end switch chips, probably starting in 2015 or later, he said.
“We are talking with potential [3-D IC] partners, but we don’t have it all sorted out yet,” he said.
Another industry veteran and EE on a panel with Samueli took issue with the Broadcom exec’s predictions. “The real situation is we have 10-15 years of visibility, and beyond that we just don’t know how we will solve [the problems of CMOS scaling] yet,” said Dave House, chairman of switch maker Brocade and a veteran of 23 years at Intel.
At Intel, House interacted regularly with Intel co-founder Gordon Moore who articulated the theory that roughly every two years chip makers would be able to double the number of transistors on a CMOS chip.
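The compounding implied by that roughly two-year doubling can be sketched in a few lines of Python; the starting transistor count and time span below are purely illustrative, not figures from the article:

```python
def transistors(n0, years, doubling_period=2.0):
    """Project a transistor count n0 forward by `years`,
    doubling every `doubling_period` years (Moore's Law)."""
    return n0 * 2 ** (years / doubling_period)

# Illustrative example: starting from ~2,300 transistors
# (the Intel 4004, 1971), project 40 years forward.
print(round(transistors(2300, 40)))  # ~2.4 billion
```

Twenty doublings in forty years multiply the count by about a million, which is why even a modest-sounding doubling period dominates every other engineering trend.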
“In the 1970s I started preaching Moore’s Law will solve all our problems, and Gordon stopped me and said, ‘Ten years out, I don’t think it can continue,’” House said. “Ten years later, Gordon said again, ‘I only see about ten years here.’
“It became a regular thing at Intel strategic meetings where Gordon would say beyond ten years I don’t see it continuing,” said House, who is also an EE by training. “As time went on there was always enough money spent and smart scientists” to solve CMOS scaling issues, he said.
“It could be we will have a firm barrier [at 5 nm], but I wouldn’t bet on it [because] the consequences will be so severe,” he added.
In conversation after the event, Bob Metcalfe, one of the original inventors of Ethernet and the event’s keynoter, shared his thoughts with Samueli and others.
“One of the big things I learned today is Moore’s Law is related to the elasticity of bandwidth—it not only creates the machines that need more bandwidth, it also creates the machines that provide that bandwidth,” he told Samueli. “If you are right and Moore’s Law ends, so will this bandwidth elasticity,” Metcalfe said.
Ethernet co-inventor Bob Metcalfe chatted with (from left) Andy Bechtolsheim of Arista, Bethany Mayer of HP and Henry Samueli of Broadcom.
Let's say we can take Moore's Law only 15 years forward. But in less than 15 years of silicon we got from IBM's chess-playing machine to IBM Watson.
In 15 years from today we might be able to do most of the important stuff we want to do with silicon at a reasonable cost. Not all we want, but most of it. If so, the hell with Moore's Law.
If the eight cores were actually EFFECTIVE, then you'd see applications running at eight times native speed, which DOES NOT HAPPEN because the software architecture that could support anything even CLOSE to that still hasn't been invented (we used to refer to the "von Neumann limit" to describe the problem)! What we REALLY have is highly effective marketing hype and a lot of folks who have no idea what's really going on. There's really no sense putting more cores on a substrate than you can take advantage of, hence I don't see the sense in a lot of hand-wringing that we can't get 32 CPUs on a chip when we're still learning how to fully use 2.
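The commenter's intuition, that adding cores yields diminishing returns when part of a workload stays serial, is the classic Amdahl's law argument. A minimal Python sketch (the 90% parallel fraction below is just an illustrative assumption, not a figure from the comment):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only `parallel_fraction`
    of the work can be spread across `cores`, and the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 90% of the work parallelizable, 8 cores give well under 8x:
print(round(amdahl_speedup(0.9, 8), 2))   # ~4.71
# and 32 cores barely improve on that:
print(round(amdahl_speedup(0.9, 32), 2))  # ~7.8
```

The serial fraction caps the achievable speedup (here at 10x no matter how many cores are added), which is one way to formalize the point about 32 CPUs on a chip.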
To me, the fallacy is to constrain thinking in the context of applications you know or understand. We consume resources differently when they are 'free,' as memory, storage, and bandwidth now are. I have faith that new uses will consume cheap bandwidth and gates. "Build it and they will come" is a corollary of Moore's Law.
I've been hearing gloom-and-doom projections about the demise of Moore's Law for at least a decade, and I'm reduced to yawning.
We are already seeing similar issues in another area: we appear to have reached practical limits on the clock speed at which CPUs can be run, so current development substitutes the larger address space of 64-bit chips and multi-core chips with parallel processing to get increased performance.
The issues I see aren't technological, they're financial. Steadily shrinking process geometries require increasingly expensive facilities to make the components. We've been seeing steady consolidation among foundries and an increasing move to "fabless" semiconductor operations, because fabs have become so enormously expensive that very few outfits can afford to build them, and we're seeing increasing joint ventures to spread costs, even among those who can.
The fundamental question to me isn't "*Can* you do it?", but rather "Can you *afford* to do it?" There are all manner of things that are theoretically possible, but simply cost too much to do, and I think we are approaching that area here.
We have processors with eight cores because we hit thermal limits that could not be overcome. Since marketing could no longer tout greater clock speeds, the solution was to tout multiple cores.
An interesting side note is that it is now touted that some cores can be shut down to increase the speed of a single core.
Plenty of pundits have predicted the end of CMOS scaling before, but rarely veteran executives of well-established chip vendors with deep technical understanding.
Hmmm. May I remind you of the dire predictions from all sorts of deep technical experts at IEDM in the late '80s and early '90s about the 1 um wall? Something about DUV resists not being transparent and sensitive enough. And then the 0.25 um wall because of diffraction. Somehow people managed to print 40 nm with 193 nm light.
As Moore himself said, no exponential is forever, but this one has managed to last a lot longer than anyone expected. The end is always 10 years away. Some day that prediction will come true, and the person who made it will be declared a genius. In reality they will be one of many who made such predictions but just got lucky on the timing.