Strangely enough, Intel (and TSMC) think the opposite about FD-SOI.
On Intel's 22nm, FinFET adoption added only a 2-3% cost premium; FD-SOI was not used because it would have meant a hefty 10% cost adder over bulk.
The real story is that Samsung does not have a good track record in processes for CPUs, GPUs, or SoCs. Samsung has never developed anything exciting in this segment; its SoC processes are licensed from the Common Platform (mainly IBM). At this moment Samsung is in trouble because IBM is out of the game and GloFo has no money to develop anything.
The easiest street to gain a bit of power reduction is to license (again) a process from another company outside the Common Platform...
Samsung is late on 20nm bulk and likely VERY late on FinFET, so an expensive FD-SOI could be an interim solution for its SoCs. Too bad Samsung is missing the shrink, which will raise costs even more. And too bad "money" is not enough to prove yourself in silicon science; it takes people and their experience, and Samsung does not have them.
I can see only two companies able to gain a lot of momentum in the silicon industry in the near future: Intel and TSMC.... all the others lack the experience to face the very difficult upcoming silicon nodes.
Did Intel fabricate a bulk 22nm to prove FinFET was only a 2-3% cost adder? Did they run FD-SOI to see that it is a 10% cost adder? No, it was all PowerPoint. Same as the famous chart that claimed a 37% performance advantage coming from FinFET with no silicon data to back it up -- yes, they actually showed ring oscillator data at VLSI to support that claim, but I am sure they wish they hadn't.
This post is so biased it is difficult to believe.
Saying Samsung is unable to shrink silicon technology when they are leading DRAM and Flash scaling is a joke. Not to mention their lead in display technology. But you may not consider that "silicon".
Playing the FD-SOI card has nothing to do with failing at FinFET. FD-SOI has specific attributes, especially for SoCs and low-power technology. And that is where the future of silicon technology will be.
Before declaring FinFET integration at Samsung a failure, just wait until the end of the year...
In principle it is possible, but it comes at the expense of less design flexibility. The gate and metal pitches at 28nm allow bidirectional poly and metal, whereas Intel's 22nm is unidirectional. A bidirectional M1 is worth almost two layers of unidirectional metal for most designs.
It would depend on the design, but 28nm FDSOI should be pretty comparable to 22nm bulk planar (maybe even better?) from a power-vs-performance standpoint. Die size will be bigger, which normally means higher cost, but in this case that isn't so clear. Based on the releases so far, the argument in favor of 28nm FDSOI is that for medium-to-low-TAM products 28nm cost is a sweet spot, and there is minimal re-design/re-optimization cost to go to FDSOI, which offsets the additional substrate cost. The process flow might be a little simpler with FDSOI than with 28nm bulk planar, and certainly simpler than 22nm, further offsetting the cost. I haven't seen any indication yet that they will introduce additional body biasing techniques for this 28nm node, but they could, and that would further reduce power for some products. Keep in mind Intel has high-TAM products that require very high performance; FinFET/TriGate has an advantage there.
@OtisTD: Actually, 28nm FDSOI comes with full body bias capability, and it is one of its selling points. While people might consider body bias an extra design burden, many companies - including Samsung in their 28nm bulk - have already used static body bias. Dynamic body bias is a bit more involved, but it has been done in the past (e.g., TI's 45nm).
When I said I hadn't seen any announcement of "additional body biasing techniques," I meant in addition to what is already available for 28nm bulk. Are you saying that they are offering dynamic body biasing (and where did you see that)? Or just that static body biasing will continue to be offered, which was my intention/understanding? Sorry if I wasn't clear, but I didn't think it was necessary to go into detail. Basically, from my understanding they will reuse the same masks as much as possible and directly port the design over to FDSOI without adding any additional performance knobs like dynamic biasing.
@OtisTD: Thanks for clarifying. 28FDSOI uses the flip-well concept (n-well under NFET and p-well under PFET) for LVT devices; RVT is the same as the bulk design. This allows LVT devices to be forward biased to 1V or maybe more, which is not possible in bulk. If you already have a 28nm bulk design, then for static body bias you can probably just change the well masks, drop the Vt adjust masks, and add the No-SOI mask (for diodes, etc.).
As far as I know, ST has already implemented dynamic body bias. While it needs some redesign and a proper system and software, it is not that different from DVFS to implement. Actually it is a bit simpler, because wells do not draw as much current as Vdd does, so charge pumps are enough and routing is easier.
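Since the discussion above frames dynamic body bias as an extra knob alongside DVFS, here is an illustrative sketch of that idea as an operating-point table. All names, voltages, frequencies, and thresholds are hypothetical, invented for illustration; this is not any real PMIC or foundry interface.

```python
# Sketch: dynamic body bias managed as one more field of a DVFS-style
# operating point. Values below are made up for illustration only.
from dataclasses import dataclass

@dataclass
class OperatingPoint:
    vdd: float   # supply voltage (V)
    fclk: int    # clock frequency (MHz)
    vfbb: float  # forward body bias on LVT wells (V)

# Higher load -> more forward bias for speed; light load -> zero bias
# to cut leakage. Thresholds are normalized load fractions.
OPP_TABLE = [
    (0.25, OperatingPoint(0.60, 200, 0.0)),   # light load
    (0.60, OperatingPoint(0.90, 600, 0.3)),   # medium load
    (1.00, OperatingPoint(1.00, 1000, 0.8)),  # peak load
]

def select_opp(load: float) -> OperatingPoint:
    """Pick the lowest operating point that covers the current load."""
    for threshold, opp in OPP_TABLE:
        if load <= threshold:
            return opp
    return OPP_TABLE[-1][1]
```

The point of the sketch is the comment above: the well bias rides along with the Vdd/frequency pair, so the control loop looks like ordinary DVFS with one extra output.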
The cost for 100M gates of a product made with 14nm FinFET (including 16nm FF+) will range from $1.38 to $1.53 in Q4/2016.
28nm HPC cost per gate will be $0.97 for 100M gates (28nm fab partly depreciated).
For 28nm FD-SOI (even allowing for the high cost of the substrate), the cost will be $0.92 for a high-volume manufacturer. Margins have to be added to get wafer prices from the foundry vendors.
For the high volume applications, cost is the most critical factor followed by power consumption.
The reality is that TSMC and Samsung are very close in their road maps for trying to ramp 16nm FF+ and 14nm FF. While process control is a key factor in bringing up FF products, another critical factor is DFM and impact on parametric yields. It is low parametric yields that have delayed the ramp-up of 14nm FF to date.
Cost per gate is a critical factor, and longer term cost and price do have a relationship.
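The cost figures quoted above can be put side by side with a little arithmetic. This sketch just restates the numbers given in the comment (USD per 100M gates; the 14nm figures are the quoted Q4/2016 range) and computes the premium relative to 28nm FD-SOI:

```python
# Cost-per-100M-gate figures as quoted in the comment above.
costs = {
    "14/16nm FinFET (low)": 1.38,
    "14/16nm FinFET (high)": 1.53,
    "28nm HPC": 0.97,
    "28nm FD-SOI": 0.92,
}

baseline = costs["28nm FD-SOI"]
for node, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    premium = cost / baseline - 1.0  # fractional cost premium vs FD-SOI
    print(f"{node}: ${cost:.2f}  ({premium:+.0%} vs 28nm FD-SOI)")
```

On these numbers 28nm HPC carries roughly a 5% premium over 28nm FD-SOI, while 14nm FinFET is 50-66% more expensive per gate, which is the core of the cost argument being made.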
Dear Sang Kim,
I am not sure what you mean by punch-through. There is no leakage path other than the thin channel, which is fully controlled by the top gate. I-V characteristics of FDSOI devices have been published at major conferences, and there is no sign of the degraded electrostatics you claim. As for mobility degradation in thin SOI, mobility is already hit by the high-k gate stack and yet everybody is using it. As long as a device delivers the performance, why should I care whether mobility is higher or lower? Let the numbers speak for themselves. We have shown 1.65 mA/um at 1V and 100nA/um off current for NFET, which as far as I know is the highest DC performance ever reported. For PFET the drive current is 1.4 mA/um, again a record high. And these are devices at pitch, with all the parasitic resistances of a real technology. And unlike the FinFET camp, there is no cheating in drive current normalization. I do not want to brag about DC performance, as there are many other factors determining circuit performance. But if you are concerned about DC performance, please take a moment and review the papers from the past few IEDM and VLSI conferences.
Handel Jones says 28nm FD-SOI is an alternative option for low leakage, high yields, and high performance superior to 28nm bulk technology. Consequently, Samsung can support low-leakage products with its 28nm FD-SOI. Let's look at the real issues with FD-SOI. My first question is why 28nm FD-SOI is still not manufactured today by major semiconductor companies, whereas 28nm bulk has been manufactured for several years by major semiconductor companies such as Intel, TSMC, Samsung, and others.
In the un-doped FD-SOI channel here, it is possible for the drain depletion region to extend to the source at large Vdd (1V) without inversion. I call this effect punch-through. Therefore, punch-through failure can occur in un-doped FD-SOI. In addition, drain-induced barrier lowering (DIBL) leakage current most likely occurs in un-doped FD-SOI as well. In order to prevent such DIBL leakage current, it is imperative to have an ultra-thin SOI channel layer between source and drain so that the drain field can't easily penetrate it. How thin does the SOI channel have to be in order to stop DIBL leakage current? It depends on the channel, or gate, length Lg. For a shorter Lg, a thinner SOI channel is required. This is the most critical issue for FD-SOI.
For 28nm FD-SOI, a 7nm SOI channel thickness is required to stop DIBL leakage current. However, transistor performance becomes significantly degraded due to mobility degradation caused by scattering of charge carriers at the top gate oxide surface and at the bottom SOI surface of the 7nm-thin channel. As a result, even if 28nm FD-SOI were manufactured today, it wouldn't be superior to 28nm bulk in terms of transistor performance, or in manufacturing cost, due to the significantly higher SOI wafer cost. These are the major reasons why 28nm FD-SOI is not manufactured today.
The other major issue with FD-SOI is its scalability. For 20/22nm FD-SOI, a 4-5nm SOI channel thickness is required to stop DIBL leakage current, further degrading transistor mobility. Furthermore, it is extremely difficult to control a 4-5nm SOI channel thickness uniformly and reliably across 12-inch wafers in the manufacturing line. How thin an SOI channel is required for 14nm FD-SOI technology? 3nm!
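The thickness numbers in the argument above (7nm at 28nm, 4-5nm at 20/22nm, 3nm at 14nm) appear to follow a Tsi ~ Lg/4 electrostatic rule of thumb, with the node name taken as the gate length; both assumptions are disputed in the replies below, so treat this as a sketch of the poster's arithmetic, not an accepted rule:

```python
# Sketch of the Tsi ~ Lg/4 rule of thumb implied by the argument above.
# Assumption: a 4:1 Lg-to-Tsi ratio, with the node name used as Lg.
def required_tsi_nm(lg_nm: float, ratio: float = 4.0) -> float:
    """Rule-of-thumb SOI channel thickness needed to keep DIBL in check."""
    return lg_nm / ratio

for lg in (28.0, 20.0, 14.0):
    print(f"Lg = {lg:.0f} nm -> Tsi ~ {required_tsi_nm(lg):.1f} nm")
```

Running it reproduces the 7nm, 5nm, and ~3.5nm figures the argument relies on, which makes it easy to see why the whole case collapses if either the 4:1 ratio or the Lg-equals-node assumption is wrong.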
At Vg=0, the channel is fully depleted, whether it is a planar FDSOI or a FinFET with reasonably low doping. Even in a bulk planar device the top 10-20nm is depleted. That doesn't mean a well-behaved device is in punch-through, whether it is FDSOI, FinFET, or bulk planar. Your way of describing what seems to be physics is incorrect; I would recommend you consult a textbook. Punch-through happens when the gate significantly loses control of the channel and high current flows independent of the gate voltage. This is certainly not the case in any of the I-Vs that have been published for sub-30nm gate length FDSOI devices. Drain-induced barrier lowering (DIBL) is of course inherent to any short-channel device, and you CANNOT make it zero. In fact, I will argue it does not make sense to make it smaller than about 100mV/V either.
Your assumption about the gate length needed for a given technology is also incorrect. Gate length has nothing to do with the technology node (and it didn't in the past either). At 28nm, FDSOI uses a gate length of 24nm, which is shorter than any alternative at the same node. At 14nm, the gate length will most likely be 20-22nm, and so it is at 10nm. All that is required of the gate length is that it fit the required gate pitch, and the numbers I quoted above fit the bill perfectly.
Finally, the rule-of-thumb requirements on channel thickness for a given gate length are just guidelines. Many other parameters, such as the gate stack, junction design, and BOX thickness, affect the electrostatics of the device. This is also the case in FinFET. No one needs 3nm SOI for 14nm FDSOI.
In a bulk planar device with a super-steep retrograde well, the gate only needs to control the top portion of the substrate. Current flow is blocked at deeper locations by the well doping. Similarly, in planar FDSOI the gate only needs to control current flow in the SOI layer; below that, current is blocked by the BOX. You can imagine an ideal super-steep retrograde well device as equal to an FDSOI device with a BOX thickness of zero. Would you say such a device will suffer from punch-through?
No, the doping is not uniform in bulk planar! The well is retrograde (although not ideal) and there are halos. The whole point is that the well and halo doping take care of leakage at depth, and the gate takes care of it at the surface. I agree with you that the ideal super-steep retrograde well ends up with high drain leakage, but that's not the case in FDSOI, because the drain is isolated from the substrate by the BOX.
BTW, your point about Vt being higher and more variable in a retrograde well is not correct either. In fact, it's the other way around! Please see page 230 of Taur and Ning's textbook. With a retrograde well design, Vt is lower than with a uniformly doped well, and in the extreme case independent of the well doping. This is in fact what SuVolta is promoting. Of course, with Vt independent of the well doping you cannot use Vt adjust anymore and need to rely on body bias. What FDSOI does is simply make an ideal retrograde well possible, allowing the well doping to have either n+ or p+ polarity for either NFET or PFET without worrying about drain leakage.
IBM's 22nm, which is used for POWER8, is PDSOI, which is very similar to bulk planar in terms of scaling, and in fact uses a gate length shorter than Intel's 22nm FinFET. Samsung and others made 20nm bulk planar and showed their results; ISDA's 20nm was shown at VLSI 2012. TSMC is said to ship 20nm parts this year. The problem with 20nm was not scalability, it was cost. For your information, foundry 20nm uses a 64nm metal pitch vs. Intel's 80nm, which means the foundries are offering a denser technology; that, of course, comes at the cost of double patterning.
FDSOI products have already been made by ST; see for example the NovaThor demo in early 2013, which clearly showed the SoC benefit. Samsung is now committed to offering 28FDSOI to the public.
I do not understand your repeated comment about 28nm bulk planar being in high volume for several years as a drawback of FDSOI. Yes, 28nm has been in production for several years, but it didn't come with all the bells and whistles at the beginning. The first products used a poly/SiON gate stack and no strain elements to keep cost down. Over time, several versions of the technology with different cost-performance trade-offs were offered. They are put into volume manufacturing when fabless companies demand a certain performance and are willing to pay the extra cost. 28FDSOI is no exception to this. Volume manufacturing was put on hold because customers did not demand it.
BTW, Intel's 14nm FinFET is not in manufacturing yet, and there have been multiple delays. And there is no such thing as "end of roadmap." Technology is scaled as long as it makes financial sense to do so, whether through conventional scaling of the transistor, stacking in 3D, or a completely new technology, the same way the BJT was replaced by MOSFET logic.
I am afraid you have mixed up many things. PDSOI has been in production at IBM down to 22nm. It has served IBM and other companies (AMD, Freescale, Sony, Nintendo, and Microsoft, to name a few) for several generations. So, unlike what you claim, it is actually scalable. Even "the one-time thing" long-channel devices (180nm node) are being manufactured at a handful of foundries and are powering the RF parts of nearly 50% of cell phones!
Samsung reported their 20nm bulk planar at IEDM 2011 (6 months before Intel's 22nm), with a smaller gate length, gate pitch (80nm vs. 90nm), and metal pitch (64nm vs. 80nm). Contrary to what you say, leakage was OK, down to 1nA/um at nominal gate length. TSMC also developed their 20nm node, and although they did not report device performance in public, customers like Qualcomm have already announced their product shipment plans. So yes, bulk planar is also scalable.
I cannot speak for ST or Samsung, but what I have seen in their announcements is that they are committed to offering 28FDSOI as a foundry service, and that is happening whether or not you are convinced.
With all respect, I would suggest that you throw away everything you have heard about device physics, mobility, etc., and start afresh. There is no mobility degradation due to the presence of the back oxide interface. Quantum effects are not a monster to be afraid of; they are in play in any device, and people have been accounting for them for many years. The publications that reported mobility degradation in thin-channel FDSOI showed only a modest 10-15% degradation in peak mobility down to any channel thickness of interest. Those mobility numbers are still almost 3X higher than the typical numbers you get in the presence of high-k! So the back interface is not a concern, certainly not at the 5-7nm used in any FDSOI technology.
28FDSOI has already showed performance advantage at circuit level over 28nm bulk. Otherwise why would Samsung invest in it?
The "end of roadmap" and technology scaling have nothing to do with the gate length. A 3nm node does not have a 3nm gate length, the same way that Intel's 22nm has a gate length of 35nm. All you need from the gate length is that it fit the gate pitch; that's why it has practically not been scaled since the 65nm node. At some point you need to start scaling the gate length, but certainly no one in their right mind would go below about 15nm. After that there are several possibilities. One is monolithic stacking of two or more transistor layers. The other is to use vertical-channel devices to decouple gate length from gate pitch. Both of these have been practiced in NAND flash, and there is no reason they cannot be used in logic, although logic does not enjoy the uniform layout that memory has. So there is no technology limitation that you need a FinFET that is "scalable to the end of the roadmap" to solve. It all boils down to whether you can do any of these cost-effectively. The major problem at advanced nodes is not the choice of transistor; it's how to make three contacts to each transistor. At 10nm you need 8 mask levels just to get from the transistor to M1 (which is another 3 masks to print). How does the choice of FinFET vs. FDSOI affect this escalating cost?
1) I agree, PDSOI shows the floating body effect, which results in a history effect in circuits, but this has been known for almost two decades and circuit designers know how to handle it. The design of multiple generations of IBM servers and AMD/Freescale/Sony parts, etc. is testimony that circuits with competitive performance can be made. We can sit here and talk about physics as long as we want, but when there is a chip that runs and delivers the performance, all the discussion about a single transistor's I-V is moot. The same applies to the self-heating effect. It is known that when a transistor carries a DC current, drive current is about 5% lower because of self-heating. But that condition almost never happens in real circuits, except for a few analog transistors. Everything else has an activity factor of 1% or less, and self-heating is not an issue.
2) FYI, I have attended IEDM, ISSCC, VLSI, etc. and presented at all of them. I think I am well aware of what is being presented at these conferences, sometimes well before the conference. For a platform technology, 20nm does not mean anything anymore; it's just a name and has nothing to do with the gate length. When Intel submitted their 22nm paper to VLSI 2012, the minimum gate length from the TEM was said to be 30nm. At the conference they showed the exact same TEM and called it a 26nm gate length. Neither of these, of course, has anything to do with the technology node. When I asked the author about the difference (submission vs. presentation), he said one is physical and one is electrical. His manager, however, said they had multiple versions of the technology and they just made it shorter. Don't get me wrong, I admire Intel's engineering team and know many of them personally. They did a great job putting the technology together, but that doesn't mean I will not speak up if I do not agree technically with what they claim.
3) 28FDSOI is being manufactured in Crolles and will be in production at Samsung next year. Circuit-level performance has already been demonstrated, and that's why it's being put into Samsung. If it does not deliver higher performance than 28nm bulk, or the same performance at a reduced cost, why would any foundry in their right mind want to run wafers? Why would customers want to spend millions of dollars on design? Again, you and I can talk at length about 5nm/7nm films, 12" wafers, etc., but circuit designers don't care about any of that. To them a transistor is a 4-terminal device with certain I-V and C-V characteristics. At the system level, even a single transistor's I-V is not important. You care about the range of Vt that is available, uniformity across the chip and from chip to chip, and the circuit tricks you can play. Body biasing is the strength of FDSOI: it allows you to compensate for the variations you inevitably have in any process and to adjust the performance to the workload. That option is not available in FinFET. So yes, you get a single transistor with probably higher current with FinFET, but what a circuit designer cares about is the whole device menu, not a single I-V.
4) For 28nm FDSOI the final channel thickness is 7nm, with a starting thickness of 12nm from SOITEC/SEH/SunEdison. The spec is ±5Å across the wafer and from wafer to wafer, and as far as I know all three suppliers do better than that. For 14nm FDSOI (I personally don't agree with the naming, but it has the same gate pitch and metal pitch as 14/16nm FinFET) the channel thickness is 6nm, with the same uniformity spec. Are you telling me that I cannot start with the same wafer and thermally oxidize 1nm of Si with near-perfect uniformity? FYI, that was the process used to form the gate oxide before high-k, and it is still one of the best-controlled processes.
5) I think I have measured enough transistors in my life to know whether 7nm is a scary point or not. If you read the publications on mobility in very thin Si, peak mobility drops from ~440 cm2/V.s to maybe 420 at 4nm Tsi. In the presence of high-k, peak mobility is less than 200 cm2/V.s. Should the whole industry give up on high-k just because it degrades mobility? At the end of the day, what matters is whether a 7nm or 5nm channel thickness delivers a competitive drive current or not. And I think there have been enough publications in the past few years to prove it's doable.
I am a device engineer, and for me a good device is a good device, no matter who builds it. FinFET has its strengths and weaknesses. So do bulk planar, FDSOI, and PDSOI. But when looking at a given technology I trust Si data (once it is confirmed and consistent across many measurements). The industry does not get anywhere with hand-waving arguments based on limited information and incorrect assumptions.
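The thinning-by-oxidation step described in point 4 above (starting from a thicker SOI film and thermally oxidizing down to the target channel thickness) follows simple mass-balance arithmetic: growing 1nm of SiO2 consumes roughly 0.44nm of Si, a standard figure from the Si/SiO2 molar-volume ratio. A minimal sketch, with the 0.44 ratio as the only physical input:

```python
# Sketch of thinning an SOI film by sacrificial thermal oxidation.
# Assumption: growing 1 nm of SiO2 consumes ~0.44 nm of Si
# (standard Si/SiO2 mass-balance figure); the oxide is then stripped.
SI_PER_NM_OXIDE = 0.44

def oxide_to_grow_nm(si_to_remove_nm: float) -> float:
    """Oxide thickness to grow (then strip) to thin the Si film."""
    return si_to_remove_nm / SI_PER_NM_OXIDE

# e.g. thinning the quoted 12 nm starting film to a 7 nm channel:
print(f"Remove 5 nm Si -> grow ~{oxide_to_grow_nm(5.0):.1f} nm oxide")
```

Since thermal oxidation is one of the best-controlled steps in the fab (it was the gate-oxide process before high-k), consuming 1-5nm of Si this way to within angstrom-level uniformity is plausible, which is the point being argued.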
1) I do not disagree that the floating body effect affects transistor characteristics. What I'm saying is that the effect is well known and high-performance circuits have been built with it for over a decade. FYI, IBM's System z uses SOI, and those are bleeding-edge performance. The latest was shown at ISSCC 2013: a nearly 600mm2 chip clocking at 5.5GHz. If you think that's not high performance, I have nothing more to say. The floating body effect, self-heating, and whatever other "nasty" characteristics you would like to attribute to PDSOI are there, and yet the whole circuit delivers the performance that is expected.
Scribe-line transistors (and other test structures) are characterized to monitor the process. Nobody sells them! So I don't care what kind of performance I get from them. As long as I know how they correlate with the overall circuit performance, they serve their purpose.
2) A platform technology is more than a single transistor's I-V. It's about all the pieces here and there that make it possible to put billions of transistors next to each other. IEDM is about all those pieces, not one I-V (which in some cases was not even consistent with the rest of the charts!).
3) The technology name (22nm/28nm, etc.) is just a label. Look more closely at the papers you read and the talks you attend, and you will see they have nothing to do with the gate length. And yes, Intel's gate length was 30nm for the lowest-Vt device and 35nm for the higher-Vt device.
4) You can have doubts, and TSMC has their own reasons. They chose to do FinFET for whatever reason and then ended up postponing it. It's not an easy path, and the performance advantage that everyone is claiming is not easy to get. Their plan to do 16FF+ to get more performance is just an indication that 16FF was not competitive, despite what they thought at the beginning.
5) I stand by my earlier comment that at the system level, performance is not about a single transistor's I-V. It's about how many different transistors (SLVT, LVT, RVT, HVT, etc.) you have and what kind of Vt range they cover. TSMC rightfully emphasized this fact in their 28HPM paper. When looking at Ion-Ioff characteristics, this is the information one should be interested in: the range of Ion and Ioff available, not necessarily the on current at a given Ioff. A big circuit uses a mixture of transistors with a range of Vt, and it's always good to have a wider range. Ironically, FinFET has a steeper Ion-Ioff characteristic than a planar device. This means the ability to crank up performance by using a lower-Vt device is reduced. Similarly, the ability to increase performance by raising Vdd when needed is reduced, and the ability to drop the leakage by using a longer gate length is reduced.
6) For the record, calling a bulk FinFET "fully depleted" is incorrect. Fully depleted is only meaningful when referring to SOI (as opposed to PDSOI) and does not say anything about the thickness of the device, doped vs. undoped channel, etc. Using it to refer to a bulk FinFET is a misnomer!
7) In strong inversion, the thickness of the inversion layer is about 3nm in any Si device. That means for anything thicker than this you won't see the effect of the channel thickness. 14nm FDSOI uses a 5nm channel thickness, with Si for NFET and a SiGe channel for PFET; please read the VLSI 2014 paper. Thickness uniformity is not an issue: you start from the same wafer used for 28FDSOI and just oxidize 2nm of Si. If you think thermal oxidation cannot be controlled to within 1Å uniformity, consult a gate-module owner at any company.
8) Enough has been said about the performance. The PDSOI NFET delivered 1.65mA/um at 100nA/um off current at 1V back in 2012. That's the absolute highest performance of any Si NFET. PFET is about 1.4mA/um, again among the highest I've seen.