It's the tone of the announcement that could be misleading. They could have more simply announced reaching 40W, which is consistent with 30 WPH or 600 WPD. It seems the cost of running real resist wafers was too much, which is actually another big concern.
Got a citation for that? I'm curious why 10% would not be good (as the 6 or 7 sigma limit, not as sigma itself). Seems like it leaves plenty of margin to distinguish between exposed or non-exposed resist. If you have pointers to a discussion of the finer points and gotchas of a 10% spread, that would be much appreciated!
@resistion: 6 sigma corresponds to about 2 expected faults per billion features. Allowing for empty space, an advanced SoC (2 to 4 sq cm) will have about 1e11 features at 20 nm, so at 6 sigma you would expect on the order of 200 faults per chip and would never make a correct one. 7 sigma is about 2.5 faults per trillion, so probably good enough for 20 nm on a moderate to advanced chip. 8 sigma, at 1.25 faults per 10^15, I concede to be more than is needed.
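For reference, those per-feature fault rates can be checked from the two-sided normal tail; a quick sketch (the 1e11 feature count is the rough estimate from the comment, not a measured value):

```python
import math

def two_sided_tail(sigma):
    """Fraction of a normal distribution lying outside +/- sigma."""
    return math.erfc(sigma / math.sqrt(2))

features = 1e11  # rough feature count for an advanced SoC at 20 nm (assumed, per the comment)
for s in (6, 7, 8):
    p = two_sided_tail(s)
    print(f"{s} sigma: {p:.3g} per feature, ~{p * features:.3g} expected faults per chip")
```

This reproduces the ~2e-9, ~2.5e-12, and ~1.25e-15 figures, and shows roughly 200 expected faults per chip at 6 sigma versus a fraction of a fault at 7 sigma.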
Published claims that 6 sigma is good enough may be blurring the issue, because the threshold between 6 and 7 is fuzzy. If you assume the rest of the system is also a little overspec'd, then the ~2 features per billion that fall outside 6 sigma stand a good chance of being rescued by other parts of the system usually being well within their spec limits.
Your point about doubling power density as you halve the feature area makes some sense, but only once you reach the statistical limit. At 5,000 photons per feature, the 7 sigma limit is about 10%. Not difficult to design for that. At 1,000 the 7 sigma spread is about 21%. Tricky, but not impossible. At 250, the range is 43%, which is probably a practical limit. So at 5,000 we are not yet bounded by shot noise.
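Those percentages are consistent with Poisson shot noise, where the relative sigma of a photon count N is 1/sqrt(N); a minimal check:

```python
import math

# For Poisson photon arrivals, sigma = sqrt(N), so the relative
# 7-sigma window around a mean count N is 7 / sqrt(N).
for n in (5000, 1000, 250):
    print(f"N={n}: 7-sigma spread ~ {7 / math.sqrt(n):.0%}")
```

This gives roughly 10%, 22%, and 44%, matching the quoted ~10%, ~21%, and ~43% to within rounding.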
However, there is another problem in the direction you point, and that is resist resolution. Just as in conventional photography, finer detail requires a slower exposure. Going to 10 nm will probably require a less sensitive resist to support the finer features. I am not sure whether sensitivity scales inverse-linearly with feature size, or better, or worse.
Also note that today's EUV is 13.5 nm, and it seems unlikely it will resolve below 14 nm. It might make better 14 nm than 193 nm can, with fewer restrictions on feasible geometry, but it is not clear the optics can ever support sub-wavelength resolution. So it will be back to the drawing board for a whole new generation of EUV at a more extreme wavelength (7 nm?) if we want to keep scaling down with that technology. And that would double the photon energy, halving the number of photons for a given dose. The resists will probably also be less sensitive.
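The photon-energy claim follows directly from E = hc/λ; a quick sketch, taking 6.7 nm as one candidate beyond-EUV wavelength (the comment's "7" is itself a guess):

```python
HC_EV_NM = 1239.84  # h*c in eV*nm

for wavelength_nm in (13.5, 6.7):
    # Photon energy in eV for a given wavelength in nm
    print(f"{wavelength_nm} nm -> {HC_EV_NM / wavelength_nm:.0f} eV per photon")
```

Roughly 92 eV versus 185 eV, i.e. about double the photon energy, so about half the photons at a fixed dose.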
IBM has been trying to use the NXE:3300B, and it may be true that they got it going, but it is hard to judge whether the claim is genuine. Analysts, forecasters, astrologers: a similar category. You cannot take their words as serious or accurate when the technical details are missing.
Dose has to double each successive node (e.g., from 20 to 40 mJ/cm2 going from 28 nm to 20 nm), so source power must double at the same time. ASML's roadmap unfortunately shows flat power levels, and it is already behind. Note that the resist may also have to change each node to accommodate the different dose levels.
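The doubling follows from keeping photons per feature constant while feature area shrinks as the square of the node; a hedged sketch (the 28 nm / 20 mJ/cm2 reference point is taken from the comment above):

```python
def dose_for_node(dose_ref_mj, node_ref_nm, node_nm):
    """Scale dose inversely with feature area (~node^2) to keep photons per feature fixed."""
    return dose_ref_mj * (node_ref_nm / node_nm) ** 2

# 28 nm at 20 mJ/cm2 -> 20 nm needs roughly double the dose
print(f"{dose_for_node(20, 28, 20):.1f} mJ/cm2")
```

This gives about 39 mJ/cm2, consistent with the ~40 mJ/cm2 quoted for 20 nm.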