Recently Rick Merritt of EE Times reported on his interview with Mark Bohr, "Mr. Process Technology at Intel," and wrote: "It’s the beginning of the end for the fabless model according to Mark Bohr."
Quite naturally, this caused many responses, most of them hinting that Intel is trying to break into the smart mobile space by sowing doubt about the future of the existing ecosystem built around TSMC, ARM, and multiple fabless vendors.
We recently wrote two very relevant blog entries: Is NVIDIA in a Panic? If so, what about AMD? Other fabless companies? (04/02/2012); and Why Samsung will give Morris Chang sleepless nights (02/05/2012).
With recent reports about Qualcomm having issues with TSMC, Apple being unable to shift away from Samsung (its competitor) to TSMC, AMD having severe issues and trying to move some of its manufacturing from GlobalFoundries to TSMC, and outright statements such as "NVIDIA deeply unhappy with TSMC, claims 20nm essentially worthless," as discussed in the EE Times opinion Required change in EDA vendors’ role and reward vs. scaling yield, one can't avoid the question: are we facing a dramatic reversal of the trend, from the foundry model back to the IDM model?
It does seem that advanced scaling these days provides a significant advantage to the integrated model, where trade-offs between design, libraries, EDA, and manufacturing produce a better end product. Such an integration advantage manifests itself in yield, now that the majority of yield losses are design-related rather than caused by random defects, and in manufacturing cost, as some of the layers need double or even triple/quad patterning.
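The cost point above can be illustrated with a toy model: each extra patterning pass on a critical layer adds another exposure, so litho cost grows roughly with the total exposure count. All numbers below are hypothetical placeholders for illustration, not industry data.

```python
# Toy model of how multi-patterning inflates lithography cost.
# All numbers are hypothetical illustrations, not industry data.

def litho_cost(cost_per_exposure, exposures_per_layer):
    """Total litho cost given the number of exposures each layer needs."""
    return sum(cost_per_exposure * n for n in exposures_per_layer)

# 10 critical layers: single patterning vs. double vs. a double/triple mix.
single = litho_cost(1.0, [1] * 10)          # 10 layers, one pass each
double = litho_cost(1.0, [2] * 10)          # double patterning on all 10
mixed = litho_cost(1.0, [2] * 7 + [3] * 3)  # mix of double and triple

print(single, double, mixed)  # 10.0 20.0 23.0
```

Even this crude model shows litho cost for the critical layers doubling or more without EUV, which is the economic squeeze the fabless customers feel.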
We at MonolithIC 3D Inc. are very pleased to see 3D ICs becoming a key business strategy, and truly believe that adding monolithic 3D manufacturing capabilities will extend foundries’ strategic benefits even further. Monolithic 3D, with its 10,000x better vertical connectivity, provides an exciting alternative to pure dimensional scaling. Moore's law is about doubling the number of transistors, which could easily be achieved with existing process and lithography by simply doubling the number of layers carrying transistors. Scaling through the third dimension provides power, speed, and cost benefits similar to, or even better than, those we once got from dimensional scaling (see "Why Monolithic 3D" for more information).
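The layer-doubling argument can be sketched numerically: a full-node shrink (roughly 0.7x linear) about doubles areal density, and stacking a second device layer at the same node achieves a comparable doubling. The starting figures here are assumed for illustration only.

```python
# Hypothetical sketch contrasting two ways to double transistor count:
# a full-node dimensional shrink vs. adding a second device layer at
# the same node (monolithic 3D). Starting figures are assumed only.

def transistors_after_shrink(base, linear_shrink=0.7):
    # A ~0.7x linear shrink roughly doubles areal density.
    return base / linear_shrink**2

def transistors_with_layers(base, layers):
    # Monolithic 3D: same node, density multiplied by device layers.
    return base * layers

base = 1_000_000_000  # assumed 1-billion-transistor die
shrunk = transistors_after_shrink(base)     # ~2x via dimensional scaling
stacked = transistors_with_layers(base, 2)  # 2x via a second layer
print(shrunk, stacked)
```

The point of the sketch is that the two paths deliver a similar density doubling, but the stacking path reuses the existing process and lithography rather than requiring a new node.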
In addition, monolithic 3D provides benefits that cannot be achieved with dimensional scaling, such as pulling embedded memory out into another layer on top of the logic. In a typical SoC the embedded memory may represent 50% of the die area and include hundreds of memory macros, requiring too many vertical connections for TSVs, yet this is a very simple task for monolithic 3D integration.
A dedicated memory layer also allows optimizing the first layer for logic and the second layer for memory, which could even be DRAM rather than SRAM, and would need fewer costly metal layers. Another advantage is the realization of logic-cone-level redundancy, as described in Monolithic 3D IC Could Increase Circuit Integration by 1,000x and in Redundancy & Repair with Monolithic 3D.
In summary, the current trend in the semiconductor industry indicates that IDMs have a significant advantage in the leading-edge dimensional-scaling race. Foundries recognize this and are responding by adding 3D capabilities. They could do even better by also adding monolithic 3D.
-- The author Zvi Or-Bach is president & CEO of MonolithIC 3D Inc. Opinions expressed here are solely those of the author.
If one wagered on EUV's introduction, one would be broke. Changing its name from SXPL (soft x-ray projection lithography) didn't change its wavelength. Optical will reign, surely in logic, and the seamless integration of design and manufacturing will be critical to success. IDMs will regain (retain?) supremacy. Memory will find cheaper alternatives. This has been obvious for a decade.
This sounds like a shameless advertisement.
The problem is that today's fabs are so outrageously expensive. There might be only 2 or 3 companies with the volumes and needs that allow them to afford maintaining leading-edge fabs. Intel is uniquely one of those. Maybe a Samsung.
The other problem is complexity - complexity of circuits, software, systems, test, process... The experience to get all that right to have a high yield of chips come off the line (all the way to packaged goods - perhaps multi-chip) is extraordinary and rare. Just because you can throw the Xistors down doesn't mean they work nicely.
I think Intel is a company that has that level of experience, from system-level design all the way to their tiny transistor processes.
IDM would certainly be the preferred way to go, but most companies must rely on available off-the-shelf processes, tools, and s/w. I think Intel indeed has a unique advantage in this area, and they're working it.
There are many factors that can be exercised to keep most chips on a Moore's Law progress curve, but I'd rather be an Intel IDM that can also take advantage of intimate process knowledge, 450mm, & 14nm, as well as the multi-core, 3-D, multi-chip modules... that everyone else rides on. If you're looking for performance, Intel continues to prove itself.
Most companies can no longer even dream of being an IDM; they've just got to do the best they can with licensed IP, foundries, and off-the-shelf s/w. But they're doing well in lots of markets with lots of chips.
I don't think you will see a lot of companies building their own fabs now. The costs are too great for most companies, and even the ones that do have the cash would rather spend it in other ways. The foundries are here to stay.
Due to double and triple patterning without EUV, 14nm becomes an expensive proposition for general foundry customers.
According to ASML, multiple patterning is not even an option for the foundries at 14nm.
Reversal from the foundry model back to IDM?
I think ASML's comments address the real root cause.
When asked if the multipatterning issues at 14-nm applied to both integrated device manufacturers (IDMs) and foundries, Meurice said: "At 14-nm foundries have a challenge that the IDMs would not have. The challenge is that they have to deliver design rules which are less restrictive and they have to deliver a shrink that is very aggressive." As such, the decision to go to EUV for 14-nm concerns the foundry environment more than the microprocessor environment, Meurice said.
Technology should be available for prototyping from TSMC and Samsung by 2014 (unlikely from other foundries like Global). The issues are more of economics than technology availability. Due to double and triple patterning without EUV, 14nm becomes an expensive proposition for general foundry customers. That's the benefit of Intel having captive fabs for CPUs. Intel can eat the cost for better-performance CPUs, but general fabless companies can't unless EUV comes up in time.
I believe that Intel will be the only company to feature FinFETs AND a 14nm process in 2014. It will be interesting to see how long it will take the foundries to arrive at roughly the same process technology.
There is a lot of time between now and 2014. Intel may be first to the press about its plans for 2014, but I wouldn't count TSMC or Samsung out (I might count Global out...) just because they are not rushing to publicize what technology they may be using in two years. Actually, in terms of business model, I think it will be more important for the foundries to push 450mm instead of sub-20nm geometries anyway. Interesting times ahead.
Intel is at present one full node ahead of the foundries - for the first time ever.
In 2014 Intel will be in volume production at 14nm. If this is indeed correct, foundries will be two full nodes behind Intel.
Maybe die costs are no longer declining as fast as before from node to node. But power usage and speed certainly still improve. I am sure the ARM-based processor universe is very concerned - in smartphones and in micro-servers.
The only company that has proven able to fast-follow Intel is another IDM - Samsung. Samsung will probably be happy to continue to be a "foundry" for a select few mega-customers a la Apple.
And by 2016/17 we will see the first 450mm production - at 10nm.
My thanks to Zvi - always a thoughtful and energetic doer.