DDC is certainly a lot more "plain vanilla" than FinFET, and it is not aiming at the same market: FinFET is aimed at the high-performance (where high-performance can mean low-power), higher-cost, high-NRE market, while DDC is aimed at the lower-cost, lower-NRE market.
The problem with FinFET is that the NRE (design and mask) costs are very high and the cost per gate is the same as or higher than 28nm bulk, so many products will simply never move to FinFET; only those with deep pockets, where absolute lowest power or highest speed is worth paying for, will make the jump.
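To make the economics concrete, here is a back-of-the-envelope sketch in Python; the NRE and per-die figures are purely illustrative assumptions, not actual foundry numbers:

    # Rough product-cost model: total cost = NRE + per-die cost * volume.
    # All figures are illustrative assumptions, not real foundry data.
    def total_cost(nre, die_cost, volume):
        return nre + die_cost * volume

    for volume in (1e6, 10e6, 100e6):
        bulk_28nm = total_cost(nre=5e6, die_cost=2.00, volume=volume)
        finfet = total_cost(nre=25e6, die_cost=2.00, volume=volume)  # same cost/gate
        print(f"{volume:12,.0f} units: 28nm bulk ${bulk_28nm / 1e6:6.1f}M "
              f"vs FinFET ${finfet / 1e6:6.1f}M")
    # With equal (or higher) die cost the two lines never cross: FinFET
    # cannot win on cost alone, only on the value of lower power or speed.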
In terms of minimum operating voltage, which is driven by device variation, both DDC and FinFET are better than bulk because the channel doping is much lower; some residual doping remains, though, because dopants diffuse up from the deep implant used under the DDC channel and below the fin.
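As a rough illustration of why channel doping sets the voltage floor: threshold-voltage mismatch follows Pelgrom's model, sigma_VT = A_VT / sqrt(W*L), and the A_VT coefficient grows with channel doping. The coefficients below are illustrative assumptions for a 65nm-class device, not measured values:

    import math

    # Pelgrom mismatch model: sigma_VT = A_VT / sqrt(W * L).
    # A_VT values (mV*um) are illustrative assumptions: the heavily doped
    # bulk channel sits highest, the lightly doped DDC/FinFET channel
    # lower, and the undoped FDSOI channel lower still.
    A_VT = {"bulk": 3.5, "DDC/FinFET": 2.0, "FDSOI": 1.2}

    W, L = 0.12, 0.06  # um, a small 65nm-class transistor
    for tech, a_vt in A_VT.items():
        sigma = a_vt / math.sqrt(W * L)  # mV
        # An SRAM cell must tolerate roughly 5-6 sigma of VT variation,
        # so this margin feeds almost directly into Vmin.
        print(f"{tech:>11}: sigma_VT = {sigma:4.1f} mV, "
              f"6-sigma margin = {6 * sigma:5.0f} mV")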
FDSOI variation is lower still because the channel is undoped, so it should be able to run at an even lower voltage and at even lower power, certainly for devices that can use more parallelism to optimise power by running at a lower clock rate.
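The parallelism point follows from dynamic power scaling, P proportional to N * C * V^2 * f for N parallel units: trading clock rate for extra units at a lower supply cuts power at constant throughput. A minimal sketch, with operating points that are illustrative rather than measured DDC/FDSOI figures:

    # Dynamic power P ~ N * C * V^2 * f, with C normalised to 1.
    # Supply voltages and clock rates below are illustrative assumptions.
    def rel_power(n_units, vdd, freq):
        return n_units * vdd ** 2 * freq

    baseline = rel_power(1, 1.0, 1.0)  # one unit, nominal supply, full clock
    parallel = rel_power(2, 0.6, 0.5)  # two units, 0.6 V supply, half clock
    # Throughput (units * clock) is unchanged, but power drops sharply:
    print(f"parallel design burns {parallel / baseline:.0%} of baseline power")

The catch is that the 0.6 V operating point is only usable if variation is low enough, which is exactly where the undoped channel helps.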
Operating the DDC technology at 0.425 volts was discussed in a paper from Fujitsu engineers at IEDM 2011.
That is for an SRAM block, which is less tolerant of low voltage than logic.
I think the recent benchmarking exercise was deliberately done at 0.9 and 1.2 volts (on the same 65 nm process as the earlier paper) so that comparisons between conventional and DDC CMOS could be made more easily.
I suspect that if the comparisons were made at 0.6 volts they would favor DDC even more markedly.
What would really be interesting would be comparisons between DDC and FDSOI. Perhaps GlobalFoundries and ARM could facilitate that. More likely they have already done, or are doing, that work and are keeping the results to themselves for competitive advantage.
This seems to be based on reports from SuVolta. While the claims are interesting, I wondered: What is the reaction from others in the industry? Is DDC broadly perceived by more impartial observers as a significant advance?