The first graph's main heading says "Cost Per Gate," but (and I read it clearly) the Y-axis scale says cost per 100M gates, which makes much more sense. It's an interesting challenge; I wonder if 20nm is the last stop for mainstream (based on current transistor technologies)?
Intel's Mark Bohr shows cost per transistor scaling for them up to 10nm. However, fabless vendors like nVidia and Broadcom have been complaining that cost per transistor is not scaling.
It's commonly known in the industry that TSMC is the only option for 16nm/20nm for most fabless companies (since it has 16nm/20nm yielding and has availability). If TSMC has no competition, it can charge whatever it wants... do you think that is a reason for the disconnect between cost per transistor scaling at TSMC and Intel?
Or do you think the cost difference is because of Intel scaling its BEOL between 22nm and 14nm, unlike TSMC?
Or do you think Intel is able to scale better and yield better due to more regular layouts (which use cheaper litho steps)?
From what I'm hearing, the lack of competition for TSMC is a major reason for cost per transistor not scaling for fabless companies. That problem won't be solved with FD-SOI.
A. The best path for the industry to keep reducing cost from here on is to adopt monolithic 3D, just as the NV NAND vendors are. A list of the cost benefits associated with monolithic 3D can be found at: <http://www.monolithic3d.com/3d-ic-edge1.html>
B. While the cost per gate does not look good below 28nm, it gets far worse once we account for the embedded SRAM, which barely scales, severely impacting the cost of SOCs, as was detailed in our recent blog: <http://www.eetimes.com/author.asp?section_id=36&doc_id=1321536>
C. Intel has suggested that ultra-aggressive scaling would provide the solution to the higher wafer costs associated with scaling. That does not make much sense, and Intel's continuing problems with yielding 14nm SOCs add doubts.
Unlike the foundries, there are too many unknowns with respect to Intel to be able to truly respond to some of the questions presented by 3D Guy. We did write a detailed blog on these issues: Intel vs. TSMC: An Update - <http://electroiq.com/blog/2014/01/intel-vs-tsmc-an-update/>
Zvi, when looking at the companies you're involved with - eASIC, Zeno Semi and Monolithic 3D - they seem to complement each other very well, to the point that maybe chips built with eASIC's model could become similarly priced to cell-based ASICs, while enjoying much reduced design costs.
Word on the street is:
GloFo yields are single-digit % @ 20nm - that's why one of their major (and few) 20nm customers walked away.
Samsung doesn't have capacity since Apple is consuming much of it. They take only big orders and wireless companies are hesitant to use them.
UMC has fallen behind.
Intel doesn't have enough IP blocks, service is bad, they're picky about whom they work with and they impose all kinds of layout restrictions on you.
There's no one else but TSMC at the leading edge. They can pretty much charge whatever they want... That's one of the main reasons fabless companies are complaining about cost per transistor not scaling at the leading edge.
IBS is familiar with Intel's projections and published their figure in our Jan 2013 monthly report. We are showing cost per gate rather than selling price. This means that profits gained by TSMC do not apply. Analysis indicates that TSMC is cost competitive with other wafer manufacturers, but TSMC wants to get good payback from their CAPEX and R&D investments; i.e., their gross profit margin in 2013 was 47%.
IBS has studied the Intel cost structure, and Intel uses the same wafer tooling as the other advanced wafer manufacturers. Equipment utilization varies between the wafer manufacturers, and the foundry vendors tend to be very efficient in equipment utilization. Depreciation accounts for 59% of the cost of a 16/14nm wafer, i.e., comparable for the different manufacturers. Other costs include raw wafers, supplies, utilities, and labor. Differences in wafer costs are 10% to 15% maximum.
An area where there can be differences is in probe yield which is a combination of defect density, systemic, and parametric factors.
We do not see much difference in defect density yields between the different manufacturers.
There are differences in systemic yields based on process tolerances, overlay issues, and reticle characteristics.
Parametric yields are related to two main factors:
1) Physical design disciplines which include DFM factors
2) Product design and product performance distribution
Parametric yields are dependent on design disciplines and need a very close link between design and process. IDMs, such as Intel and Samsung have an advantage in this arena, but foundry and fabless companies can compensate if sufficient attention is given to the DFM factors.
Intel's projections were made before 22nm became high volume, and the analysis of selling prices of the Haswell product family indicate a large spread between chip cost at expected yields and selling price. Haswell's i3 processor family could be on the Intel cost curve, but it is unclear if the i7 quad core is also on the cost curve.
Also, Intel projects transistor cost which implies memory transistor as well as logic transistor cost. IBS data is for logic gates only.
Regarding 14nm, Intel did not achieve its high volume production target of Q4/2013 because yields were not at the expected level.
IBS hopes Intel will achieve its yield targets for 14nm in Q2/2014 because this will give the other wafer manufacturers a target to aim at.
Based on the analysis done by IBS of wafer costs and product yields, there are major challenges for Intel to reach its projected levels at 14nm and 10nm (for logic gates).
Intel is, however, likely to be 24 months ahead of others in ramping high volumes of products in 14nm FinFETs, which is another indicator of when FinFETs can be in high volume production.
Isn't the customer always right? If the fabless community were to push back on TSMC (or any other foundry, for that matter), they could do FD-SOI if they wanted and get up and running pretty quickly (my understanding is that electrical characterization for 28nm bulk -> 28nm FD-SOI is far less work for the foundry than a full bulk-to-bulk node's worth of work). However, extrapolating from the data in Handel's report, we see that at 14/16nm the foundry's margins are about 40% higher on FinFET than they would be on FD-SOI, which could be a factor in the horse they've chosen...? Of course now we've just started hearing about Gen2 14nm FD-SOI, which matches FinFETs for performance with savings (for the customer!!) of about 20%. Interesting times....
A lot of the cost predictions for FD-SOI assume a wafer cost of $500, which is the supposed cost for a FD-SOI wafer when made at very high volumes. What was the cost used for Handel Jones' analysis here?
With AMD gradually discontinuing SOI, SOITEC has a huge fab in Singapore and elsewhere running at low utilization. And no one is anywhere close to volume. The cost of a FD-SOI wafer will be higher than some of these studies indicate...
@Junko -- news about the Gen2 FDSOI was just posted by Soitec -- see Fig 2 in http://www.advancedsubstratenews.com/2014/03/fd-soi-back-to-basics-for-best-cost-energy-efficiency-and-performance/. It's done with source/drain engineering, and also shows pretty spectacular results at 28nm -- beating 20nm bulk perf for 50% lower cost. While I don't have all the details (and no doubt ST would be part of this) -- such developments would typically involve the whole Albany alliance: ST, IBM, Leti, Soitec, GF, Renesas, so it will be interesting to hear more details about it.
Quoting Mark Bohr is bad enough, but your "commonly known" information is incorrect. Samsung has 14nm and they are ahead of TSMC, so to say TSMC has no competition at 14nm is wrong. And Samsung wafer pricing is very aggressive, so expect transistor cost scaling to improve.
Which is what the fabless semiconductor ecosystem is all about, competition.
The main driver of cost is technology limitations, so instead of only pushing the technology we should look for the right design-and-technology combination. Right now a designer has no visibility into whether a particular design is feasible in an advanced technology, so rethinking and innovation is the step forward.
Regarding wafer prices for FinFETs, there is the need to get gross profit margin of 40% minimum (and better targets are 45% or 50%) to generate funding required for CAPEX and R&D in new generations of technology. The leading foundry vendors are able to get these gross profit margins.
With FinFET wafer cost of $4K and gross profit margin of 45%, wafer price is $7.27K. With gross profit margin of 40%, wafer price is $6.67K.
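The arithmetic above follows from the standard gross-margin relation, price = cost / (1 - margin). A quick sketch, using the $4K wafer cost and the 40%/45% margin targets from the comment (the helper function name is my own):

```python
# Gross margin is (price - cost) / price, so price = cost / (1 - margin).
def wafer_price(cost_k_usd, gross_margin):
    """Selling price (in $K) needed to hit a target gross profit margin."""
    return cost_k_usd / (1.0 - gross_margin)

finfet_cost = 4.0  # $4K FinFET wafer cost, per the comment above
print(round(wafer_price(finfet_cost, 0.45), 2))  # 7.27 -> $7.27K wafer price
print(round(wafer_price(finfet_cost, 0.40), 2))  # 6.67 -> $6.67K wafer price
```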
Competition is good, but wafer manufacturers need to make sufficient profits to be able to continue investing.
In the short term, there can be some discontinuity between wafer prices and cost, but longer term, there needs to be correlation. The most profitable and most price disciplined foundry vendor is TSMC. Samsung is also highly profitable and very disciplined on prices. We have to base price projections on real data.
A bulk Si wafer is ~$150 today, and a FDSOI wafer will realistically cost $750 or so in the first few years of the technology (if SOITEC solves the yield issues I hear about due to non-uniform Si thickness). That's $600 more... sure, maybe some of that cost could be recouped through saved mask steps... but why should FDSOI be much cheaper than FinFETs? Companies like Intel have already gone the bulk FinFET route and have gotten it to yield (finally!).
Not many companies are pushing FDSOI. There are clear advantages and disadvantages to both FinFET and FDSOI. But I think the bigger companies are going the FinFET path so that they get early learning on FinFET, as FDSOI will hit a wall after 14nm.
I am not sure what you mean by FDSOI hitting a wall after 14nm. But if you are referring to arguments based on the channel thickness required for a given gate length, those are not technology drivers. For many generations gate length has had nothing to do with the node name: 14nm FinFET uses a gate length in the range of 30-50nm depending on the leakage requirement; FDSOI uses 25nm. Going forward to 10nm, all you need is to add self-aligned contacts, a card Intel already played at 22nm but the foundries haven't yet. I agree the amount of learning on FinFET is much larger, simply because much more money goes into it. But what I have heard so far is not encouraging. Here is one example: 6 months ago, a reputable foundry claimed they had done 4 test chip tape-outs on 14nm FinFET with one of their partners, including a tiny ARM core. When the audience asked multiple questions about the power and area advantage over 20nm planar, the answer was "we did not optimize for power or area". I sincerely hope they have had much better data since, but 4 tape-outs for a tiny core - not a full SOC - is more than enough.
If you build a plant for $10B (low end of the price range) to crank out one wafer per minute (40,000 per month) and assume flat depreciation over 4 years (by which time that node is trailing edge and a replacement already has been built) that is about 2M wafers, or about $5,000 per wafer. It is reasonable to assume that this is doubled by the investments in new design, masking, and other operational costs for each device running down that line.
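The back-of-envelope depreciation math above can be sketched as follows (all figures are the poster's: $10B plant, ~one wafer per minute, flat depreciation over 4 years):

```python
# Rough fab depreciation per wafer, using the figures in the comment above.
plant_cost = 10e9          # $10B plant (low end of the price range)
wafers_per_month = 40_000  # ~one wafer per minute
years = 4                  # flat depreciation until the node is trailing edge

total_wafers = wafers_per_month * 12 * years   # ~1.92M wafers over the node's life
depreciation_per_wafer = plant_cost / total_wafers
print(total_wafers)                  # 1920000
print(round(depreciation_per_wafer)) # 5208 -> roughly $5,000/wafer, as stated
```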
The cost of the wafer is material but not dominant. Even the most expensive substrate is less than 10% of the manufacturing contribution to final cost. It is entirely possible that savings such as fewer masks, more familiar device characteristics, and simpler processing could make FDSOI actually cheaper for equivalent high end devices.
A real analysis of cost would include wafer cost, but does not stop there.
Regarding the cost of FD-SOI starting wafers: the real price for large volumes is obviously the result of a commercial discussion between the supplier and the buyer, which depends on many factors such as volume and perspectives, strategic agreements, etc. However, Soitec has indicated that using USD 500/wafer in volume as a budgetary price makes complete sense (I work for them). Since Handel plans a follow-up, we should soon see the assumptions he is making and get an indication of why he believes FD-SOI is cheaper than FinFET (I understand the inspection steps required are a key factor).
As a software person in the embedded space (talking about SOC-based designs, not PC/server setups), Option #4 has always been a problem for the past 30+ years of my career. The tools for the "embedded space" have ALWAYS lagged behind the desktop/server markets, partly due to how the industry evolved from many niche players providing custom toolchains (some home-grown because they didn't exist anywhere else) and debuggers, be they a "real ICE" (plugged into the CPU socket) or JTAG based. In the past few years, the smaller vendors have either gone away or been gobbled up. The OSS movement has brought the 900-pound gorilla to the table: Linux. Along came some rather spitty tools too: Eclipse. So things are getting better, but it's very difficult (time consuming too) to keep up with the thousands of variations available. Companies like Atmel, NXP, ST, EM, TI - they make all sorts of "flavors" of their M3/M4/M0 based devices; it's enough to drive one mad! Never mind the explosion of am335x, iMX6s, Vybrids, SAMA5s. The desktop/server world really only has to deal with 2 parts: AMD or Intel... and both are pretty similar (from the outside, that is). So yes, I agree, someone, somewhere needs to look at the state of the software driving all of these newfangled CPUs and all of these newfangled applications that are just over the horizon.
Which begs Option #5: given the (seemingly) limited number of transistors available now (due to cost, assuming one sticks with the 28nm node because it's cheapest), the question is: what's the best use of those transistors? Huge L2/L3 caches? Multiple "big" CPU cores? A mix of a few big and a few little (not talking ARM's big.LITTLE here; talking of, say, a Vybrid-like setup with 2 A5s and 4 M3/M4s)? From a HW point of view, given the state of IP blocks, this could be relatively easy to do. But from a SW point of view, this is a whole different beast... How does one architect this in SW? Are there tools to support it?
Maybe we're at another "cusp" where once again, we are called upon to put on our engineers hats, tell the marketeers to go pound sand, and design (both HW and SW) the "next big thing"!!
Steve wrote: Along came some rather spitty [sic] tools too: Eclipse.
I've never used Eclipse myself, though from looking at demo screens my impression is that it's "like flying a jet airliner with 100,000 switches" as my favorite Comp Sci professor used to say.
Steve also wrote: Maybe we're at another "cusp" where once again, we are called upon to put on our engineers hats, tell the marketeers to go pound sand, and design (both HW and SW) the "next big thing"!!
For the last couple of decades, the common wisdom has been that it doesn't matter how bad your code is because hardware keeps getting faster and cheaper, and your time is more expensive than the hardware you'd save by writing efficient code. I, for one, welcome the idea that we may have to go back to writing code efficiently and designing applications to fit the available hardware (supplemented by efficient FPGA coprocessors) rather than lazily assuming Moore's Law will allow sloppy, bloated code.
But then, my programming habits were formed in the punched card days when you could "comfortably" carry 2000 lines of code (one box of cards) and it was impractical to carry around more than 4000 lines (two boxes of cards) due to your limited number of arms.
RE: Punch Cards - Hear, hear!!! 2000/box; I guess then I was lucky I "only" spilled 152 on the lab floor one day :)
There's a similar "trend" in networking: not fast enough? Here's a bigger pipe... And we just took your 150-byte packet and surrounded it with 400 bytes of "garbage". Yea, that works too... Know why ATM is still alive and kickin'? It took the telcos 20 years to figure out how the darn thing worked; now that it does, they don't want to mess with it. A shame that PDH (i.e. T1/E1) is going away quicker; one does not "need" 100Mbit to the home - better managed/structured infrastructure would cure that, but then, who's going to pay for it? Not I, says the consumer... it should be "free", like my software...
Yea, quite a few things could be "rethought" to make things better, rather than: throw more bandwidth, throw more CPU cores, throw more memory/HDD/FLASH at the problem.
Intel is farthest ahead in implementation because for decades they were able to charge nearly $1,000/processor. Each node cost exponentially more, but their customer base also grew exponentially and so could fund the next round of equipment.
Currently, the x86 market may have plateaued, while ARM cores cost 100x less than x86 chips. The question is whether we'll see enough profit in chip sales to fund another exponential upgrade of all the fab equipment - especially if TSMC is getting so much pushback just from the cost of 20nm.
The GPU market could grow a lot with virtual reality - it's uncertain how much compute power will be required to achieve a high-end virtual reality experience, and most likely there's a lot of growth in that market, as shown by Facebook acquiring a VR company, Oculus, for $2 billion.