The demo only proves that Intel has a 14 nm process ready. It does not yet show that the process avoids the highlighted cost of an increased amount of double patterning. We would need disclosure of how many layers require double patterning, for example; it is likely more than at 22 nm. If it is not much more, then perhaps the process can be called an advantage for Intel.
My understanding is that a SerDes is a tiny circuit of less than 1 mm² per lane (a 4-channel, 28 Gb/s design at 28 nm was 3.34 mm²; see LSI's paper at ISSCC 2014). While this is a good demo, it says nothing about the yield and performance of microprocessor chips, which are two orders of magnitude larger.
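As a back-of-envelope check on those figures (the ~300 mm² die area below is my own assumption for a typical large microprocessor; the comment does not give one):

\[
\frac{3.34\,\mathrm{mm}^2}{4\ \mathrm{lanes}} \approx 0.84\,\mathrm{mm}^2\ \text{per lane},
\qquad
\frac{300\,\mathrm{mm}^2}{3.34\,\mathrm{mm}^2} \approx 90 \approx 10^{2},
\]

i.e. the per-lane area is indeed under 1 mm², and a large CPU die is on the order of a hundred times the size of the whole SerDes test circuit.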
I have not read the McKinsey report, but according to this article, dropping below 20 nm "requires updates in fabrication facilities that could cost more than $10 billion." Yet Intel once said that about 80-90% of its 20 nm equipment is reusable at the 14 nm node. Why is the cost in the McKinsey report so high?
Good to read that Intel is still pushing the boundaries for others to follow. The only concern is Intel's inability to break into the ranks of the likes of ARM, Qualcomm, etc. in mobile and low-power devices. I think the main reason behind all the problems is Intel's stubbornness about sharing its IP/platform with others. Intel must create an ecosystem for others to join and flourish.