3D Guy
User Rank
Manager
Re: Intel's cost per transistor different
3D Guy   3/28/2014 6:46:44 PM
A lot of the cost predictions for FDSOI assume a wafer cost of $500, which is the supposed cost of an FDSOI wafer made at very high volumes. What wafer cost was used in Handel Jones' analysis here?

With AMD gradually moving away from SOI, Soitec has a huge fab in Singapore, and capacity elsewhere, running at low utilization. And nobody is anywhere close to those volumes. The cost of an FDSOI wafer will be higher than some of these studies indicate...
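
A rough sensitivity sketch of that point (the $500 figure is from the comment above; every other number is a hypothetical placeholder): if low utilization pushes the substrate cost above the high-volume assumption, the per-die cost moves accordingly.

    # Die-cost sensitivity to the SOI substrate cost. The $500 case is the
    # high-volume assumption mentioned above; the rest are hypothetical.
    def die_cost(substrate, processing=3500.0, gross_dies=250, yield_frac=0.85):
        """(substrate + processing) cost per good die, hypothetical inputs."""
        return (substrate + processing) / (gross_dies * yield_frac)

    for substrate in (500.0, 800.0, 1200.0):  # $ per SOI starting wafer
        print(f"substrate ${substrate:.0f}: cost per good die ~${die_cost(substrate):.2f}")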

jeffreyrdiamond
User Rank
Rookie
Is there a market saturation issue?
jeffreyrdiamond   3/28/2014 3:16:52 PM
NO RATINGS
Intel is farthest ahead in implementation because for decades it was able to charge close to $1,000 per processor.  Each node cost exponentially more, but Intel's customer base also grew exponentially and so could fund the next round of equipment.

Currently, the x86 market may have plateaued, while ARM cores sell for roughly 1/100th the price of x86 chips.  The question is whether we'll see enough profit in chip sales to fund another exponential upgrade of all the fab equipment, especially if TSMC is getting so much pushback just from the cost of 20nm.
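
To put rough numbers on that worry (purely illustrative; none of these figures are from the thread): if the cost of a leading-edge fab roughly doubles per node while the revenue pool stays flat, the equipment bill swallows the pool within a few nodes.

    # Purely illustrative: exponentially rising fab cost vs. a flat revenue pool.
    fab_cost = 5e9        # $, hypothetical cost of a leading-edge fab
    revenue_pool = 40e9   # $, hypothetical annual revenue available to fund fabs
    for node in ("28nm", "20nm", "14nm", "10nm", "7nm"):
        print(f"{node}: fab ~${fab_cost / 1e9:.0f}B, "
              f"{fab_cost / revenue_pool:.0%} of the flat pool")
        fab_cost *= 2   # assume fab cost roughly doubles per node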

 

junko.yoshida
User Rank
Blogger
Re: Intel's cost per transistor different
junko.yoshida   3/28/2014 2:57:50 PM
NO RATINGS
@Adele, great observation. So, where is Gen2 14nm FDSOI happening? At ST?

HJ88
User Rank
Freelancer
Re: 20 is planar
HJ88   3/28/2014 1:48:13 PM
NO RATINGS
The reason 20nm is expensive (compared to 28nm) is:

- Use of double patterning which increases lithography costs

- Difficulties in controlling leakage which gives low parametric yields

Both of the above can be fixed, but it will take time, i.e., high volume in late 2014 or early-to-mid 2015. Apple is a strong supporter of 20nm.
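
On the double-patterning point above, a back-of-the-envelope sketch (all inputs are hypothetical placeholders, not figures from the comment): the critical layers need roughly twice the exposures, so the extra lithography cost per wafer adds up quickly.

    # Sketch of the double-patterning cost penalty; all inputs hypothetical.
    wafer_cost_28 = 3000.0   # $, hypothetical 28nm processed-wafer cost
    litho_share = 0.35       # lithography's assumed share of wafer cost
    dp_frac = 0.4            # fraction of litho layers needing double patterning

    # Double-patterned layers take ~2x the exposures (extra etch/clean steps
    # are ignored here), so that slice of the litho cost roughly doubles.
    litho_28 = wafer_cost_28 * litho_share
    litho_20 = litho_28 * ((1 - dp_frac) + 2 * dp_frac)
    delta = litho_20 - litho_28
    print(f"extra litho cost: ~${delta:.0f}/wafer (+{delta / wafer_cost_28:.0%} of 28nm wafer cost)")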

Adele.Hars
User Rank
Rookie
Re: Intel's cost per transistor different
Adele.Hars   3/28/2014 12:00:57 PM
NO RATINGS
Isn't the customer always right? If the fabless community were to push back on TSMC (or any other foundry, for that matter), the foundries could do FD-SOI if they wanted and get up and running pretty quickly (my understanding is that the electrical characterization work to go from 28nm bulk to 28nm FDSOI is far less than a full bulk-to-bulk node transition). However, extrapolating from the data in Handel's report, we see that at 14/16nm the foundry's margins are about 40% higher on FinFET than they would be on FD-SOI, which could be a factor in the horse they've chosen...? Of course, now we've just started hearing about Gen2 14nm FDSOI, which matches FinFETs for performance with savings (for the customer!!) of about 20%. Interesting times....
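
To make the incentive gap concrete, a toy calculation (only the 40% margin gap and ~20% customer savings ratios come from the comment above; the dollar figures are invented):

    # Toy numbers: only the 40% and 20% ratios come from the comment above.
    finfet_price = 100.0                # hypothetical price per unit to the customer
    fdsoi_price = finfet_price * 0.80   # ~20% cheaper for the customer

    finfet_margin = 30.0                # hypothetical foundry margin per unit
    fdsoi_margin = finfet_margin / 1.40 # FinFET margin ~40% higher than FD-SOI

    print(f"customer pays: FinFET ${finfet_price:.0f} vs FD-SOI ${fdsoi_price:.0f}")
    print(f"foundry keeps: FinFET ${finfet_margin:.2f} vs FD-SOI ${fdsoi_margin:.2f}")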

HJ88
User Rank
Freelancer
Re: Intel's cost per transistor different
HJ88   3/28/2014 11:32:20 AM
NO RATINGS
@3D Guy

Good question.

IBS is familiar with Intel's projections and published Intel's figure in our January 2013 monthly report. We are showing cost per gate rather than selling price, which means the profits gained by TSMC do not apply. Our analysis indicates that TSMC is cost-competitive with the other wafer manufacturers, but TSMC wants a good payback on its CAPEX and R&D investments, i.e., its gross profit margin in 2013 was 47%.

IBS has studied the Intel cost structure, and Intel uses the same wafer tooling as the other advanced wafer manufacturers. Equipment utilization varies between wafer manufacturers, and the foundry vendors tend to be very efficient at it. Depreciation accounts for 59% of the cost of a 16/14nm wafer and is comparable across the different manufacturers. Other costs include raw wafers, supplies, utilities, and labor. Differences in wafer costs are 10% to 15% at most.

An area where there can be differences is probe yield, which is a combination of defect-density, systematic, and parametric factors.

We do not see much difference in defect-density yields between the different manufacturers.

There are differences in systematic yields based on process tolerances, overlay issues, and reticle characteristics.

Parametric yields are related to two main factors:

1) Physical design disciplines, which include DFM factors

2) Product design and product performance distribution

Parametric yields depend on design disciplines and need a very close link between design and process. IDMs such as Intel and Samsung have an advantage in this arena, but foundry and fabless companies can compensate if sufficient attention is given to the DFM factors.
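
Pulling the wafer-cost and yield factors above into a minimal cost-per-gate sketch (only the 59% depreciation share is from the analysis above; every other input is a hypothetical placeholder):

    # Minimal cost-per-gate model. Only the 59% depreciation share comes from
    # the analysis above; all other inputs are hypothetical placeholders.
    wafer_cost = 8000.0                 # $, hypothetical 16/14nm processed wafer
    depreciation = 0.59 * wafer_cost    # per the analysis above
    other = wafer_cost - depreciation   # raw wafer, supplies, utilities, labor

    gross_dies = 300          # hypothetical dies per 300mm wafer
    probe_yield = 0.80        # defect-density x systematic x parametric, assumed
    gates_per_die = 50e6      # hypothetical logic-gate count per die

    cost_per_gate = wafer_cost / (gross_dies * probe_yield * gates_per_die)
    print(f"depreciation ${depreciation:.0f}, other costs ${other:.0f} per wafer")
    print(f"cost per logic gate: ${cost_per_gate:.2e}")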

Intel's projections were made before 22nm reached high volume, and the analysis of selling prices across the Haswell product family indicates a large spread between chip cost at expected yields and selling price. Haswell's i3 processor family could be on the Intel cost curve, but it is unclear whether the i7 quad core is also on it.

Also, Intel projects transistor cost, which implies memory transistor as well as logic transistor cost. IBS data is for logic gates only.
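
A quick illustration of why that distinction matters (a hypothetical die, with invented counts): the same die reports very different numbers depending on whether the denominator is all transistors or logic transistors only.

    # Why cost per transistor and cost per logic gate diverge (hypothetical die).
    die_cost = 20.0              # $, hypothetical
    logic_transistors = 200e6    # hypothetical
    sram_transistors = 800e6     # dense 6T SRAM dominates many transistor counts

    blended = die_cost / (logic_transistors + sram_transistors)
    logic_only = die_cost / logic_transistors
    print(f"blended cost per transistor:    ${blended:.2e}")
    print(f"cost per logic transistor only: ${logic_only:.2e} ({logic_only / blended:.0f}x higher)")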

Regarding 14nm, Intel did not achieve its high-volume production target of Q4/2013 because yields were not at the expected level.

IBS hopes Intel will achieve its 14nm yield targets in Q2/2014, because this will give the other wafer manufacturers a target to aim at.

Based on IBS's analysis of wafer costs and product yields, Intel faces major challenges in reaching its projected levels at 14nm and 10nm (for logic gates).

Intel is, however, likely to be 24 months ahead of the others in ramping 14nm FinFET products to high volume, which is another indicator of when FinFETs can reach high-volume production elsewhere.

EmbeddedSteve718
User Rank
Rookie
Re: Option #5
EmbeddedSteve718   3/28/2014 10:14:41 AM
NO RATINGS
RE: Punch Cards - Hear, hear!!!  2000/box; I guess I was lucky I "only" spilled 152 on the lab floor one day :)

Similar "trend" also in networking:  Not fast enough, here's a bigger pipe. . .  And we just took your 150 byte packet and surrounded it in 400 bytes of "garbage".  Yea, that works too. . .  Know why ATM is still alive and kickin'?  It took telco's 20 years to figure out how the darn thing worked, now that it does, they don't want to mess with is.  Ashame that PDH (i.e. T1/E1) are going away quicker, one does not "need" 100Mbit to the home - Better managed/structured infrastructure will cure that, but then oh, who's going to pay for it?  Not I says the consumer. . . it' should be "free" like my software. . . 

Yea, quite a few things could be "rethought" to make things better rather than: throw more bandwidth, throw more CPU cores, throw more memory/HDD/FLASH at the problem.

Slide Rule anyone?

betajet
User Rank
CEO
Re: Option #5
betajet   3/28/2014 9:48:58 AM
NO RATINGS
Steve wrote: Along came some rather spitty [sic] tools too: Eclipse.

I've never used Eclipse myself, though from looking at demo screens my impression is that it's "like flying a jet airliner with 100,000 switches" as my favorite Comp Sci professor used to say.

Steve also wrote: Maybe we're at another "cusp" where, once again, we are called upon to put on our engineer hats, tell the marketeers to go pound sand, and design (both HW and SW) the "next big thing"!!

For the last couple of decades, the common wisdom has been that it doesn't matter how bad your code is because hardware keeps getting faster and cheaper, and your time is more expensive than the hardware you'd save by writing efficient code.  I, for one, welcome the idea that we may have to go back to writing code efficiently and designing applications to fit the available hardware (supplemented by efficient FPGA coprocessors) rather than lazily assuming Moore's Law will allow sloppy, bloated code.


But then, my programming habits were formed in the punched card days when you could "comfortably" carry 2000 lines of code (one box of cards) and it was impractical to carry around more than 4000 lines (two boxes of cards) due to your limited number of arms.

EmbeddedSteve718
User Rank
Rookie
Option #5
EmbeddedSteve718   3/28/2014 8:43:52 AM
NO RATINGS
As a software person in the embedded space (talking about SoC-based designs, not PC/server setups), Option #4 has always been a problem for the past 30+ years of my career.  The tools for the "embedded space" have ALWAYS lagged behind the desktop/server markets, partly due to how the industry evolved from many niche players providing custom toolchains (some home-grown because they didn't exist anywhere else) and debuggers, be they a "real ICE" (plugged into the CPU socket) or JTAG based.  In the past few years, the smaller vendors have either gone away or been gobbled up.  The OSS movement has brought the 900-pound gorilla to the table:  Linux.  Along came some rather spitty tools too:  Eclipse.  So things are getting better, but it's very difficult (time consuming too) to keep up with the thousands of variations available.  Companies like Atmel, NXP, ST, EM, TI - they make all sorts of "flavors" of their M3/M4/M0 based devices; it's enough to drive one mad!  Never mind the explosion of am335x, iMX6s, Vybrids, SAMA5s.  The desktop/server world really only has to deal with 2 vendors - AMD or Intel - and both are pretty similar (from the outside, that is).  So yes, I agree, someone, somewhere needs to look at the state of the software driving all of these newfangled CPUs and all of these newfangled applications that are just over the horizon.

Which brings us to Option #5: given the (seemingly) limited number of transistors available now (due to cost, assuming one sticks with the 28nm node because it's cheapest), the question is: what's the best use of those transistors?  Huge L2/L3 caches?  Multiple "big" CPU cores?  A mix of a few big and a few little (not talking ARM's big.LITTLE here, but say a Vybrid-like setup with 2 A5s and 4 M3/M4s)?  From a HW point of view, given the state of IP blocks, this could be relatively easy to do. But from a SW point of view, this is a whole different beast. . .  How does one architect this in SW?  Are there tools to support it?

Maybe we're at another "cusp" where, once again, we are called upon to put on our engineer hats, tell the marketeers to go pound sand, and design (both HW and SW) the "next big thing"!!

resistion
User Rank
CEO
Fins consume area
resistion   3/28/2014 1:10:53 AM
NO RATINGS
The fins are covered by the gate stack, which keeps the device width inflated. Definitely still over 20nm.
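
For scale, a first-order sketch with hypothetical fin dimensions (not any foundry's actual numbers): the gate wraps each fin on three sides, so the electrical width per fin is roughly twice the fin height plus the fin top width, and the layout still pays a full fin pitch per fin - all dimensions comfortably above 20nm.

    # First-order FinFET width estimate; all dimensions are hypothetical.
    fin_height = 35.0   # nm
    fin_width = 8.0     # nm
    fin_pitch = 45.0    # nm
    n_fins = 3

    w_eff = n_fins * (2 * fin_height + fin_width)   # gate wraps three sides
    footprint = n_fins * fin_pitch                  # layout width consumed
    print(f"electrical width ~{w_eff:.0f} nm over a ~{footprint:.0f} nm footprint")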
