Agreed that SoC benchmarking is difficult -- but a simple comparison of some reasonable logic cell benchmark (e.g. delay/power for extracted EM-clean 9 grid NAND2, FO=4, 200 grid metal tracking, over process/temperature corners, with ASV i.e. not a lightly-loaded inverter ring oscillator under typical/nominal conditions!) would be a good start.
This would bring out into the open some things which are often glossed over -- for example, the fact that the often-quoted high drive current and small area of FinFETs comes with a big device capacitance penalty. This gets worse with interconnect added, and worse still if the interconnect has to be beefed up to meet the more severe EM requirements (higher current per unit area). The net result is higher dynamic power, which is one of the things that everyone is trying to reduce...
Which might make DDC and FDSOI more attractive compared to the 800lb FinFET gorilla :-)
As you say, an impartial comparison would be great, but we might never see one because most foundries strongly favour one particular solution (their own!) and do comparisons to show they're much better than bulk CMOS (which everyone knows anyway), or maybe a (biased?) comparison to show they're better than one particular competitor in a particular use case -- and they sure won't let really fair unbiased numbers out to anyone who's allowed to publish them :-(
So Intel and TSMC say how bulk FinFET is best, ST say how planar FDSOI is best, IBM say how FDSOI FinFET is best, UMC say how DDC is best. Global were saying how bulk FinFET is best when they launched their 14XM process, but now seem to be saying that 20nm FDSOI (now 14nm in marketing-speak) has the same performance but is cheaper -- so either there are different points of view inside GF, or they're starting to go cold on FinFET, or both. At least, this is the only direct comparison I can find of two technologies from within one company who say they will support both.
In fact since GF have also publicly said they plan to support DDC too, we might even get comparative figures for this as well -- you could say GF are being open about supporting all technologies because they don't have an axe to grind, or don't know which will win, or don't have enough customers in any one technology to bet the farm on that one only, or any other reason you can come up with based on smoke and mirrors.
Either way, I suspect we're more likely to get a true comparison from GF than the other foundries who are all pretty much firmly committed to one camp only.
(I don't work for GF or have any commercial link with them, but have been researching and benchmarking "next-generation" technologies for quite some time, especially looking at minimising total active power consumption in high-speed networking devices)
It's possible that all three could coexist or that one (or more) could disappear, depending on market share and acceptance. It's clear that the three technologies have different advantages and disadvantages, especially regarding cost and power consumption.
The issue I see for DDC is that cost-driven applications won't adopt it unless it is cheaper than 28nm bulk, which it won't be unless it is widely adopted by big foundries who will drive the cost down -- and also unless the wide range of low-cost IPs needed for these applications is available, and these won't be developed unless the market volume justifies it. Developing another type of process (and the IP) just on the basis of lower cost is risky: all TSMC have to do is drop their wafer costs and give existing IP away more cheaply, and the DDC cost advantage is gone and all the low-cost applications stay with 28nm bulk.
For specific applications which can use its advantages over FinFET (even lower Vdd and power due to lower Vth variation and bulk biasing to tune Vth) FDSOI has a definite niche, at least for the next couple of generations down to 10nm -- and beyond this there's no clear path to mass production right now even for FinFET given EUV and ebeam delays, and the costs become stupidly high. Lower and more consistent power over process variation compared to FinFET (and maybe lower cost, depending on economics) is a fundamental feature of FDSOI with bulk biasing, and applications where power is paramount may go with FDSOI for this reason in spite of the ubiquity of FinFET.
FinFET is bound to continue, though it's not obvious that this is really the best choice for a lot of applications given the cost and design/production difficulties -- unfortunately the industry was bounced into adopting it by Intel and the "we must keep up" panic, forgetting that Intel can sell both their slow/low-power and fast/leaky/high-power CPUs for more than the typical ones, which isn't the case for most chips which just have to meet a specification. It is faster than FDSOI (great for CPUs) but also has more capacitance and process variation, which is not so good for low power -- meaning power efficiency during operation, not just low leakage for mobile.
It's going to be very interesting for the next few years... :-)