Intel's Mark Bohr shows cost per transistor continuing to scale for Intel down to 10nm. However, fabless vendors like Nvidia and Broadcom have been complaining that cost per transistor is no longer scaling for them.
It's commonly known in the industry that TSMC is the only option at 16nm/20nm for most fabless companies, since it is the only foundry with 16nm/20nm yielding and capacity available. If TSMC has no competition, it can charge whatever it wants... do you think that is a reason for the disconnect between cost-per-transistor scaling at TSMC and at Intel?
Or do you think the cost difference is because Intel scaled its BEOL between 22nm and 14nm, unlike TSMC?
Or do you think Intel is able to scale better and yield better due to more regular layouts (which use cheaper litho steps)?
From what I'm hearing, the lack of competition for TSMC is a major reason cost per transistor isn't scaling for fabless companies. That problem won't be solved with FD-SOI.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of them need to be smart enough to act on data in real time, 24/7. Are the design challenges the same as with embedded systems, but with some developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.