This seems to be based on reports from SuVolta. While the claims are interesting, I wondered: What is the reaction from others in the industry? Is DDC broadly perceived by more impartial observers as a significant advance?
Operating the DDC technology at 0.425 volts was discussed in a paper from Fujitsu engineers at IEDM 2011.
That is for an SRAM block, which is less tolerant of low voltage than logic.
I think the recent benchmarking exercise was deliberately done at 0.9 and 1.2 volts (on the same 65 nm process as the earlier paper) so that comparisons between conventional and DDC CMOS could be made more easily.
I suspect that if the comparisons were made at 0.6 volts they would favor DDC even more markedly.
What would really be interesting would be comparisons between DDC and FDSOI. Perhaps Globalfoundries and ARM could facilitate that. More likely they have already done, or are doing, that work and are keeping the results to themselves for competitive advantage.
DDC is certainly a lot more "plain vanilla" than FinFET, and is not aiming at the same market -- FinFET targets the high-performance (where high-performance can mean low-power) higher-cost, high-NRE market, while DDC targets the lower-cost, lower-NRE market.
The problem with FinFET is that the NRE (design and mask) costs are very high and the cost per gate is the same as or higher than 28nm bulk, so many products will simply never move to FinFET -- only those with deep pockets, where the absolute lowest power or highest speed is worth paying for.
In terms of minimum operating voltage, which is driven by device variation, both DDC and FinFET are better than bulk because the channel doping is much lower, but there is still some doping due to leakage from the deep implant used under the DDC channel and below the fin.
FDSOI variation is lower still because the channel is undoped and so should be able to run at even lower voltage and have even lower power, certainly for devices which can use more parallelism to optimise power running at a lower clock rate.
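The parallelism argument above can be sketched with the classic first-order dynamic-power relation P = C·V²·f: halving the clock and doubling the cores keeps throughput constant, and if the lower frequency lets the supply voltage drop (which is exactly where a low-variation technology like FDSOI helps), the quadratic voltage term wins. All numbers below are invented for illustration, not foundry data.

```python
# Illustrative sketch (made-up numbers, not measured silicon):
# why a design that can exploit parallelism benefits from a
# technology whose low variation permits a lower minimum Vdd.

def dynamic_power(c_eff, vdd, freq):
    """Classic switching power: P_dyn = C_eff * Vdd^2 * f."""
    return c_eff * vdd ** 2 * freq

# Baseline: one core at nominal voltage/frequency (arbitrary units).
C_EFF = 1.0            # effective switched capacitance per core
V_NOM, F_NOM = 1.0, 1.0
p_single = dynamic_power(C_EFF, V_NOM, F_NOM)

# Alternative: two cores at half the clock for the same throughput,
# assuming the slower clock allows Vdd = 0.7 -- reachable only if
# device variation is low enough (the FDSOI argument).
V_LOW, F_LOW = 0.7, 0.5
p_parallel = 2 * dynamic_power(C_EFF, V_LOW, F_LOW)

print(f"single core : {p_single:.2f}")    # 1.00
print(f"2x parallel : {p_parallel:.2f}")  # 0.49 -- same throughput, ~half the power
```

The same throughput for roughly half the dynamic power -- but only if the technology's Vmin actually allows the lower supply, which is the variation argument in a nutshell.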
It's possible that all three could coexist or that one (or more) could disappear, depending on market share and acceptance. It's clear that the three technologies have different advantages and disadvantages, especially regarding cost and power consumption.
The issue I see for DDC is that cost-driven applications won't adopt it unless it is cheaper than 28nm bulk, which it won't be unless it is widely adopted by big foundries who will drive the cost down -- and also unless the wide range of low-cost IP needed for these applications is available, which won't be developed unless the market volume justifies it. Developing another type of process (and the IP) just on the basis of lower cost is risky: all TSMC have to do is drop their wafer costs and give existing IP away more cheaply, and the DDC cost advantage is gone and all the low-cost applications stay with 28nm bulk.
For specific applications which can use its advantages over FinFET (even lower Vdd and power due to lower Vth variation, plus bulk biasing to tune Vth), FDSOI has a definite niche, at least for the next couple of generations down to 10nm -- and beyond this there's no clear path to mass production right now even for FinFET, given EUV and e-beam delays, and the costs become stupidly high. Lower and more consistent power over process variation compared to FinFET (and maybe lower cost, depending on economics) is a fundamental feature of FDSOI with bulk biasing, and applications where power is paramount may go with FDSOI for this reason in spite of the ubiquity of FinFET.
FinFET is bound to continue, though it's not obvious that it is really the best choice for a lot of applications given the cost and design/production difficulties -- unfortunately the industry was bounced into adopting it by Intel and the "we must keep up" panic, forgetting that Intel can sell both their slow/low-power and fast/leaky/high-power CPUs at a premium, which isn't the case for most chips, which just have to meet a specification. FinFET is faster than FDSOI (great for CPUs) but also has more capacitance and process variation, which is not so good for low power -- meaning power efficiency during operation, not just low leakage for mobile.
It's going to be very interesting for the next few years... :-)
As you say, an impartial comparison would be great, but we might never see one because most foundries strongly favour one particular solution (their own!) and do comparisons to show they're much better than bulk CMOS (which everyone knows anyway), or maybe a (biased?) comparison to show they're better than one particular competitor in one particular use case -- and they certainly won't let really fair, unbiased numbers out to anyone who's allowed to publish them :-(
So Intel and TSMC say how bulk FinFET is best, ST say how planar FDSOI is best, IBM say how FDSOI FinFET is best, UMC say how DDC is best. Globalfoundries were saying how bulk FinFET is best when they launched their 14XM process, but now seem to be saying that 20nm FDSOI (now "14nm" in marketing-speak) has the same performance but is cheaper -- so either there are different points of view inside GF, or they're starting to go cold on FinFET, or both. At least, this is the only direct comparison I can find of two technologies from within one company who say they will support both:
In fact since GF have also publicly said they plan to support DDC too, we might even get comparative figures for this as well -- you could say GF are being open about supporting all technologies because they don't have an axe to grind, or don't know which will win, or don't have enough customers in any one technology to bet the farm on that one only, or any other reason you can come up with based on smoke and mirrors.
Either way, I suspect we're more likely to get a true comparison from GF than the other foundries who are all pretty much firmly committed to one camp only.
(I don't work for GF or have any commercial link with them, but have been researching and benchmarking "next-generation" technologies for quite some time, especially looking at minimising total active power consumption in high-speed networking devices)
Agreed that SoC benchmarking is difficult -- but a simple comparison of some reasonable logic cell benchmark (e.g. delay/power for an extracted, EM-clean, 9-grid NAND2, FO=4, 200-grid metal tracking, over process/temperature corners, with ASV -- i.e. not a lightly-loaded inverter ring oscillator under typical/nominal conditions!) would be a good start.
This would bring out into the open some things which are often brushed over, for example the fact that the often-quoted high drive current and small area of FinFETs comes with a big device capacitance penalty, especially with interconnect added, and even more so if the interconnect has to be beefed up to meet the more severe EM requirements (higher current per unit area) -- the net result of which is higher dynamic power, which is one of the things that everyone is trying to reduce...
Which might make DDC and FDSOI more attractive compared to the 800lb FinFET gorilla :-)