So industry is already thinking about 7nm? How far can we shrink the transistor? I think we are approaching the limit, and soon companies will start building new architectures to enhance performance instead of worrying about shrinking the transistor.
This would explain why TSMC sticks with 16nm rather than 14nm.
I believe back in July, during ASML's conference call, ASML stated that foundries face an additional challenge compared to IDMs due to design rules / (aggressive) shrink factor, and the conclusion, from what I recall, was that foundries would need EUV @14nm.
I believe they also stated that DRAM people would be first in adopting EUV.
I am assuming NXE is the ASML immersion litho system.
At 20 nm Patton said folks need to start doing double patterning. Beyond that they are pulling out all the stops to minimize the need for triple and quad patterning.
NXE is EUV, NXT is 193 nm immersion. I think it's a little early to spend much time thinking about the 7 nm node today. But if we only have the currently expected EUV technology, we would have to use double patterning.
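For context on why double patterning comes in around the 20 nm node, here's a back-of-the-envelope Rayleigh-criterion estimate for 193 nm immersion. This is a sketch, not from the thread: the NA and k1 values are typical assumed figures for water-immersion litho.

```python
# Illustrative Rayleigh estimate: minimum half-pitch = k1 * wavelength / NA.
# Assumed values (not from the thread): NA = 1.35 for water immersion,
# practical k1 around 0.27 for aggressive single-exposure patterning.
wavelength_nm = 193.0
na = 1.35
k1 = 0.27

half_pitch_nm = k1 * wavelength_nm / na  # smallest printable half-pitch, nm
print(f"single-exposure 193i limit: ~{half_pitch_nm:.1f} nm half-pitch")
```

Features tighter than roughly this ~38-40 nm half-pitch can't be printed in one 193i exposure, which is why sub-20 nm-class nodes need double (or triple/quad) patterning, or a shorter wavelength like EUV.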
Fully depleted technologies like FDSOI and FinFET use undoped channels- so basically zero dopants, except for the source and drain. Of course there may be some doping of Fins for various reasons in the early implementations, but ultimately it will be undoped channels. And junctionless transistors have been demonstrated as well if you are worried about the S/D doping.
Oh and P.S. "everyone thought" 100nm was the limit at one time as well. I have yet to hear Intel, IBM, Samsung, TSMC or any IDM/Foundry say there is any hard limit in the foreseeable future. The limit will come from economics, not physics- it will just get too expensive to stay on Moore's law at some point.
I am wondering if Samsung/Gloflo "roadmap" was backed up by a manufacturing/litho roadmap.
I clicked on the slides - one said 14nm on track...
I would imagine that, as a customer, you would want to know what process you will actually use before taping out?
ASML mentioned that foundry processes would require EUV @14nm - I am curious why TSMC selected 16nm FinFET as the next generation following 20nm planar.
"It’s not clear how much chip designers will demand the 14 nm FinFET process which carries significant costs and marginal advantages over a coming 20 nm node."
Shouldn't they figure this out?
I get the impression they just blindly follow Intel's roadmap.
The 20 nm node is the first to require double patterning, a cost adder. The 14 nm node is essentially a 20 nm process with FinFETs, another cost, said IBM’s Patton.
That comes from an "IBMer", which makes me scratch my head.
Imagine you are a potential customer evaluating TSMC, Gloflo, and Samsung.
Which one would you pick?
Surprised the article didn't mention DSA (directed self-assembly) prominently - they spent some time on that as well, and it too can reduce the need for double/triple/quadruple patterning. That is one area where IBM has made some real advances. But I don't recall seeing any litho roadmap slide - no.
And to their credit (contrary to your impression), they don't seem to be blindly following the Intel roadmap. K. Kuhn and M. Bohr have been touting tunnel FETs and Ge/III-V channels lately. IBM basically dismissed those solutions and said Si nanowires followed by carbon nanotubes is how they think it will go.
The fabless-foundry model is broken, and TSMC is recently quoting the mask NRE of 16/14nm FinFET at over $10M. Even if this goes down to below $10M over time, the NRE for 10nm or 7nm will be prohibitive for all but a few like Qualcomm, Apple, Samsung, and maybe Nvidia, Broadcom, and Marvell. The rest will be consolidated and disappear. Only the analog/power semi guys will stay; digital is pretty much owned by a few.
Samsung will rule the ARM world:
1.) they have their own fabs: they are both foundry and IDM, and they compete directly with their customers (Apple);
2.) they leverage NAND, LPDRAM, and display, cutting out the middleman.
Apple gave Samsung business and they created a monster.
It's going to be a repeat of when Samsung entered the DRAM market about 25 (?) years ago.
"Only analog/power semi guys will stay and digital is pretty much owned by a few."
IMHO that's exactly why TI quit the mobile business.
I would say that there are several more customers. People are willing to pay for advances in consumer and industrial electronics, but more products will become commodities, with only software and firmware to provide differentiation. Many cell phone designs are already starting to look and feel similar to each other. I would not be surprised, though, to see people sneak a special 28nm chip, for example, into their designs for some hardware customization, say for an 8-inch or larger tablet form factor device.
Some of us have been saying for years that the pursuit of x-ray lithography, whether hard x-ray (~1nm) or soft x-ray (~13nm), a.k.a. EUV, was a supreme waste of millions of man-hours and billions of dollars. There were source/mask/resist issues 25 years ago, and there are source/mask/resist issues today. The pursuit of shiny-penny alternatives continued, each of them with source, mask, and/or resist issues. The latest distraction is directed self-assembly. Good luck with that. Meanwhile, the choice was clear: shut down Moore's Law and its replenishable pot of gold, or extend optical. If optical was to be extended beyond what most folks thought possible, it would be essential to integrate design and manufacturing, which would result in the re-integration of the disaggregated semiconductor industry, and consequently, the ultimate supremacy of the old-fashioned IDM. Wonder if EUV/x-ray will be ready at 500 angstroms? Let's see: how large will the OPC features be? Or is that XPC?
Scalability of planar bulk technology ends at 28nm, because the major foundries (TSMC, Samsung, GlobalFoundries, and UMC) have all gotten on Intel's FinFET bandwagon after falling behind Intel. They all plan to introduce FDFinFETs at the 14nm node in 2014, skipping 22nm, at the same time as Intel's 14nm introduction. The foundries' schedule looks unrealistic, planned aggressively so as not to fall too far behind Intel. That leaves IBM as the only major company adopting FDSOI scaled to 10nm. For 22nm FDSOI, about 6nm SOI thickness is required to suppress transistor leakage current, while for a 22nm FDFinFET a fin width as large as 22nm is enough to suppress it. In my opinion that is the main reason why Intel's 22nm FinFETs have been in high-volume manufacturing for more than a year, but 22nm FDSOI is not. For 14nm FDSOI, about 4nm SOI is required, while for 14nm FinFETs a fin width as large as 14nm suffices to suppress transistor leakage currents. Thus, FDFinFETs show a large advantage in manufacturability as the transistor is scaled. Soitec today can deliver only 28nm SOI wafers, with 12nm SOI and 25nm buried oxide.
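The scaling figures quoted in the comment above can be tabulated in a quick sketch. Illustrative only: these are the commenter's numbers, not measured device data, and the "margin" ratio is just the quoted fin width divided by the quoted SOI thickness.

```python
# Numbers quoted by the commenter, per node:
# node (nm) -> (required SOI film thickness for FDSOI, allowed fin width for FinFET), both in nm
quoted = {22: (6, 22), 14: (4, 14)}

for node in sorted(quoted, reverse=True):
    t_soi, w_fin = quoted[node]
    margin = w_fin / t_soi  # FinFET's dimensional-control advantage per the quoted figures
    print(f"{node} nm node: SOI thickness {t_soi} nm vs fin width {w_fin} nm "
          f"-> {margin:.1f}x looser critical dimension for the FinFET")
```

The point the commenter is making: at each node the FinFET's critical dimension (fin width) is roughly 3.5x larger than the SOI film thickness FDSOI would require, so the FinFET is the easier structure to manufacture as scaling continues.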