Much of this wishful thinking, hyped by commodity memory manufacturers, can be attributed to the old adage that "those who do not learn from history shall be compelled to re-live it." As a veteran of the semiconductor industry who can barely remember the years when commodity memory companies actually made money, I would like to dispel this notion. My argument is based on the business realities of known good die (KGD), which are at the crux of why MCMs that integrate commodity memory are a bad idea.
The semiconductor industry can be broken down into a few categories. One is integrated device manufacturers, giants such as Intel, Samsung and IBM, among others, that create original chip designs and build these chips for shipment in systems they manufacture. You could argue that Intel doesn't actually build PCs but they do everything other than bend the metal for the enclosures. These giants have the deep pockets needed to build next-generation process technologies, the intellectual property that allows them to extract a profit from building silicon.
The other group of semiconductor companies comprises fabless chip companies: Qualcomm, Broadcom and former integrated device manufacturers that have gone fabless. All these companies extract a profit from their unique chip designs. These designs are manufactured by silicon foundries such as TSMC, Globalfoundries and UMC, which provide the bleeding-edge process technology, the intellectual property that enables them to extract a profit.
I want to single out one more class of semiconductor company: the brave souls that build commodity memory, DRAM, SRAMs and flash. Their business model relies entirely on predicting supply and demand for their production and keeping up with pricing and capacity demands from computer and portable device manufacturers. In times of high demand, they extract profit and build reserves to see them through the times of high supply.
Now, let's examine the business of a KGD, best described as silicon that's only "half-baked," because the KGD has only been tested at the wafer level. This means the chip maker knows only whether the die on the wafer is dead or alive. The more extensive at-speed testing comes when the device is packaged. The KGD the supplier ships, which tells the customer die size and therefore manufacturing cost, must also come with tests and a methodology for testing the KGD in a package, providing information that most chipmakers classify as trade secrets and are reluctant to share.
This doesn't apply to companies that fab application processors and all the other components that might go into a multi-chip module, but you can count on one hand the members of this set at advanced process nodes. This brings up the second challenge that haunts multi-chip modules: the problem of sole sourcing. System manufacturers who buy semiconductors want a second source to provide negotiating leverage on price. When a system manufacturer commits to a multi-chip module, it surrenders that leverage to the chip manufacturer, and only the largest customers, the ones that could crush a supplier legally or otherwise, can afford to put themselves in this position.
Someone had better clue IBM in on the hype factor. IBM Fellow Dr. Subramanian Iyer was showing cross sections of 32nm chips with 11 layers of metal, deep-trench capacitors for eDRAM, and TSVs at the recent GSA Silicon Summit. Mr. Hassan's article repeats all of the same arguments used more than 20 years ago to explain why surface-mount technology was doomed: can't rework the boards with a soldering iron, can't test the boards with through-hole testers, JTAG costs too much to add to chips, blah, blah, blah. That dismal prediction of failure seems to have been wrong.
There are too many technical advantages and too few disadvantages at this point for 3D IC assembly not to take off. Rather than labeling technical analyses as "hype" and "wishful thinking," how about a more fact-based argument to counter the technical advantages and the obvious, displayed progress by companies such as IBM and Xilinx?
The Raspberry Pi board uses a standard Broadcom BCM2835 SoC with package-on-package (PoP) mounted DRAM. Moreover, every mobile phone handset out there already uses a 3D IC stack with wirebonding, and has for years. So we're really just talking about a difference in interconnect here, as well as deciding who is responsible for, and gets paid for, a working 3D stack.
This is an interesting analysis, but I have to say it's more than just the memory guys waving the 3D IC flag these days.
It's big logic and fab folks like Altera, IBM, Qualcomm, TSMC, Xilinx and others. Are they all drinking the Kool-Aid?
"...But alas, today's module is tomorrow's much lower cost IC..."
So true in the past, but the 3D proponents are just hoping that Moore's Law will finally grind down, if not because of device fundamentals (leakage), then at least because of lithography (EUV) or simply the minimum order size needed to justify a $10 billion fab.
I see most of the KGD work happening at the IDMs and most of the 3D packaging happening in conjunction with the packaging houses. The package (and sometimes test) houses will pretty much follow whatever the IDMs push, but most packaging houses that I've seen don't really have much advanced test technology. This is likely to change as some of the bigger players (like AMD) go more fabless. And to address another point, there is at-speed testing done at wafer level, particularly by captive processor module companies who don't want to throw away expensive chips. But alas, today's module is tomorrow's much lower cost IC.