It would depend on the design, but 28nm FDSOI should be pretty comparable to 22nm bulk planar (maybe even better?) from a power vs. performance standpoint. Die size will be bigger, which normally means higher cost, but in this case that isn't so clear. Based on the releases so far, the argument in favor of 28nm FDSOI is that for medium-to-low TAM products 28nm cost is a sweet spot, and there is minimal re-design/re-optimization cost to go to FDSOI, so that offsets the additional substrate cost. The process flow might be a little simpler with FDSOI than with 28nm bulk planar, and certainly simpler than 22nm, further offsetting the cost. I haven't seen any indication yet that they will introduce additional body biasing techniques for this 28nm node, but they could, and that would further reduce power for some products. Keep in mind Intel has high TAM products that require very high performance; FinFET/TriGate has an advantage there.
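The cost trade-off described above (bigger die vs. cheaper wafer processing) comes down to simple per-die arithmetic. Here is a minimal sketch using the standard dies-per-wafer approximation; every number below is a made-up placeholder for illustration, not an actual foundry figure:

```python
from math import pi, sqrt

def dies_per_wafer(wafer_diam_mm, die_area_mm2):
    # Standard approximation: gross dies from wafer area,
    # minus an edge-loss term proportional to the circumference.
    r = wafer_diam_mm / 2
    return int(pi * r**2 / die_area_mm2
               - pi * wafer_diam_mm / sqrt(2 * die_area_mm2))

def cost_per_die(wafer_cost, wafer_diam_mm, die_area_mm2, yield_frac):
    # Per-die cost = wafer cost spread over the good dies.
    return wafer_cost / (dies_per_wafer(wafer_diam_mm, die_area_mm2) * yield_frac)

# Hypothetical numbers: the 28nm FDSOI wafer carries a substrate
# premium partly offset by a simpler flow, but the same design
# yields a larger die than on 22nm bulk.
bulk_22 = cost_per_die(wafer_cost=4000, wafer_diam_mm=300,
                       die_area_mm2=80, yield_frac=0.85)
fdsoi_28 = cost_per_die(wafer_cost=3800, wafer_diam_mm=300,
                        die_area_mm2=100, yield_frac=0.85)
print(f"22nm bulk: ${bulk_22:.2f}/die, 28nm FDSOI: ${fdsoi_28:.2f}/die")
```

Depending on where the substrate premium, process simplification, and die-size penalty land, either node can come out ahead, which is why the comparison "isn't so clear."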
In principle it is possible, but it comes at the cost of reduced design flexibility. The gate and metal pitch at 28nm allows bidirectional poly and metal, whereas Intel's 22nm is unidirectional. A bidirectional M1 is worth almost two layers of unidirectional metal for most designs.
Did Intel fabricate a bulk 22nm to prove FinFET was only a 2-3% cost adder? Did they run FDSOI to see that it is a 10% cost adder? No, it was all PowerPoint. Same as the famous chart that claimed a 37% performance advantage coming from FinFET with no silicon data to back it up -- yes, they actually showed ring oscillator data at VLSI to support that claim, but I am sure they wish they hadn't.
Strangely enough, Intel (and TSMC) think the opposite about FD-SOI.
On Intel's 22nm, FinFET adoption added only 2-3% to cost; FD-SOI was not used because it would have meant a steep 10% premium over bulk.
The real story is that Samsung does not have a good track record in processes for CPUs, GPUs, or SoCs. Samsung has never developed anything exciting in this segment; its SoC processes are licensed from the Common Platform (mainly IBM). At this moment Samsung is in crisis because IBM is out of the game and GloFo has no money to develop anything.
The easiest path to gain a bit of power reduction is to license (again) a process from another company outside the Common Platform...
Samsung is late on 20nm bulk and likely VERY late on FinFET, so an expensive FD-SOI could be an interim solution for its SoCs. Too bad Samsung is missing out on the shrink, which will raise costs even more. Too bad money is not enough to prove yourself in silicon; it takes people and their experience, and Samsung does not have them.
I can see only two companies able to gain a lot of momentum in the silicon industry in the near future: Intel and TSMC. All the others lack the experience to face the very difficult upcoming silicon nodes.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Are the design challenges the same as with embedded systems, but with some developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.