When ASICs first came out, many designs were quickly converted from PCB/TTL boards into single ASIC chips. These were neither complex nor large designs. As new silicon nodes were released, the doubling of gate counts quickly enabled large system-on-chip designs with CPU/DSP processors, large amounts of memory, and control logic. But by the time SoCs were developed, everyone had grown accustomed to the ASIC development flow.
Many continue to talk about 3D stacks, which represent a huge step function in business and technical complexity, while it appears that many are overlooking the 2.5D/interposer approach as a very viable solution with significant benefits in area, power, performance, cost, etc. Building confidence and success stories with 2.5D solutions (like Xilinx, GlobalFoundries/Open Silicon/Amkor, etc.) might be the fastest way to accelerate 3D TSV adoption.
You are right! Intel is in the camp that wants to include memory management in its SoCs and processors, and therefore prefers the HBM (high-bandwidth memory) concept.
I see both HMC (hybrid memory cube), supported by the 100+ member companies of the HMC Consortium, and HBM being used to add large amounts of memory close to processors, breaking down the "Memory Wall" by significantly reducing latency and increasing bandwidth by an order of magnitude or more.
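To put that "order of magnitude" claim in concrete terms, here is a rough back-of-the-envelope comparison in Python, using first-generation HBM's published 1024-bit-per-stack interface at 1 Gb/s per pin against a single 64-bit DDR3-1600 channel; the helper function is illustrative, not from any spec:

```python
# Peak bandwidth = bus width (bytes) * transfer rate.
# Figures: first-gen HBM (1024-bit stack, 1 Gb/s/pin) vs. one DDR3-1600
# channel (64-bit, 1600 MT/s). Effective bandwidth will be lower in practice.

def peak_bandwidth_gbps(bus_width_bits, transfer_rate_gtps):
    """Peak bandwidth in GB/s from bus width in bits and rate in GT/s."""
    return (bus_width_bits / 8) * transfer_rate_gtps

hbm = peak_bandwidth_gbps(1024, 1.0)  # one HBM stack
ddr3 = peak_bandwidth_gbps(64, 1.6)   # one DDR3-1600 channel

print(f"HBM stack:         {hbm:.1f} GB/s")   # -> 128.0 GB/s
print(f"DDR3-1600 channel: {ddr3:.1f} GB/s")  # -> 12.8 GB/s
print(f"Ratio:             {hbm / ddr3:.0f}x")  # -> 10x
```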
By the way, JEDEC just published the HBM spec in a very detailed document. You can review or download it from their website.
Details about the HMC are available from the HMC Consortium.
Hi Junko, great to hear from you - it's been MANY years since we discussed set-top boxes at VLSI, together with Tim Vehling.
You are right: If the first impression is "too expensive", it can quickly turn off customers.
To answer your question "what can be done to lower cost?", allow me to put COMPONENT COST into a larger context:
Component cost is rarely the only factor determining a technology user's final decision. Cost savings at the system level and/or a higher selling price for the system -- because of longer battery life, higher performance, lower cooling cost, smaller form factor, lower development cost, shorter development time, etc. -- can in many applications compensate for the higher component cost.
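As a purely hypothetical illustration of that trade-off (all dollar figures invented, not from this discussion), a sketch like this shows how a pricier 2.5D component can still lower total system cost:

```python
# Hypothetical per-unit cost comparison: the interposer-based part costs
# more as a component, but saves on cooling, board area, and amortized
# development cost. Every number below is invented for illustration.

def system_cost(component, cooling, board, dev_cost_per_unit):
    """Total per-unit system cost from its major contributors."""
    return component + cooling + board + dev_cost_per_unit

monolithic = system_cost(component=20.0, cooling=6.0, board=5.0,
                         dev_cost_per_unit=4.0)
interposer = system_cost(component=25.0, cooling=3.0, board=3.0,
                         dev_cost_per_unit=2.0)

print(f"Monolithic SoC system cost:  ${monolithic:.2f}")  # -> $35.00
print(f"2.5D interposer system cost: ${interposer:.2f}")  # -> $33.00
```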
I also see RE-USABILITY becoming a decision criterion already, in favor of designs using interposers or 3D vertical stacking of die. As design teams work really hard to migrate 28 nm SoCs to 16/14 nm FinFET process technology, they express serious worries about how much time and money it may take to migrate these designs to the next node, e.g. 10 nm!
Interposers and vertical die stacking introduce us to DIE-level modularity and re-use. Just like the emergence of soft and hard IP started to change SoC design at the end of the last millennium, this modularity will significantly impact SoC AND system design in this millennium, actually in this decade already.
And, in addition to these strategic considerations, equipment vendors, material suppliers, packaging and test experts, and others are continuing to work hard to reduce component input cost, increase throughput (units/hour), and improve manufacturing yields.
Let's not forget what we experienced with every new technology: Economies of scale further reduce unit cost.
There's a saying on Wall Street that I think applies here:
"Those who know don't say, and those who say don't know."
TSV technology has been in use for a while now, even down in commodity-class consumer items. The answer on cost depends on whom you ask.
There are several high-margin applications using DRAM that will begin the cost-decline curve. The introduction of "cognitive computing intelligence and predictive analytics" machines is dawning -- some say it is the next computer evolution -- and it requires novel and innovative use of memory to make it responsive.
The slow-motion development of 450mm wafers and EUV is fanning the need for "micro-verticalization" of the volumetric footprint.
Some manufacturers have learned how to stack devices; some have patents on using graphene heat spreaders. Like all things new, learning the trade will require time.
Cost is an excuse; thermal is the key problem: we can't stack two hot processors. By stacking DRAM on heat-sinked logic, the DRAMs may just warm up a little. Otherwise, plain 2.5D can be expensive, but that's not a show-stopper.
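A minimal sketch of that thermal argument, with purely hypothetical resistance and power numbers: dies in a stack share one path to the heatsink, so their powers add into the same thermal resistance:

```python
# Rough stacked-die thermal model (all values hypothetical, for
# illustration only): T_junction ~= T_ambient + total_power * theta,
# where theta is the shared junction-to-ambient thermal resistance.

T_AMBIENT_C = 45.0    # assumed ambient temperature inside the chassis, C
THETA_C_PER_W = 0.6   # assumed resistance of the shared heat path, C/W

def stack_temp_c(die_powers_w):
    """Rough junction temperature for dies sharing one heat path."""
    return T_AMBIENT_C + sum(die_powers_w) * THETA_C_PER_W

print(stack_temp_c([50.0, 50.0]))  # two 50 W processors -> 105.0 C: too hot
print(stack_temp_c([50.0, 1.0]))   # 50 W logic + 1 W DRAM -> 75.6 C: fine
```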
It is nice to learn that the technology is ready and waiting for lower manufacturing costs. Is this the proper time to test and qualify its quality and reliability so that it can be employed in value-added products with high reliability requirements?
Since this is an evolutionary change in process technology, it will take time to prove its reliability.
In a mega-panel moderated by Matt Nowak, a 3D stacking expert at Qualcomm, more than a dozen experts discussed a few technical and many business challenges related to interposers. They concluded the technology is ready but we need lower costs.
But here's the thing: if the technology is too expensive, it doesn't strike me as being ready. What needs to be done to lower cost?