I am not sure what you mean by FDSOI hitting the wall after 14nm. If you are referring to arguments based on the channel thickness required for a given gate length, those are not technology drivers. For many generations the gate length has had nothing to do with the node name: 14nm FinFET uses a gate length in the range of 30-50nm depending on the leakage requirement, while FDSOI uses 25nm. Going forward to 10nm, all you need is to add self-aligned contacts - a card Intel already played at 22nm, but the foundries have not yet. I agree the amount of learning on FinFET is much larger, simply because much more money goes into it. But what I have heard so far is not encouraging. Here is one example: 6 months ago, a reputable foundry claimed they had done 4 test chip tape-outs on 14nm FinFET with one of their partners, including a tiny ARM core. When the audience asked multiple questions about the power and area advantage over 20nm planar, the answer was "we did not optimize for power or area". I sincerely hope they have gathered much better data since, but 4 tape-outs of a tiny core - not a full SoC - should have been more than enough.
Not many companies are pushing FDSOI. There are clear advantages and disadvantages to both FinFET and FDSOI, but I think the bigger companies are going down the FinFET path to get early learning on FinFET, since FDSOI will hit a wall after 14nm.
The main driver of cost is technology limitations, so instead of only pushing the technology we should look for the right design and technology combination. Right now a designer has no visibility in advance on whether a particular design is feasible in an advanced technology, so rethinking and innovation are the way forward.
A bulk Si wafer is ~$150 today, and an FDSOI wafer will realistically cost $750 or so in the first few years of the technology (if SOITEC solves the yield issues I hear about due to non-uniform Si thickness). That's $600 more... sure, maybe some of that cost could be recouped through saved mask steps... Why should FDSOI be much cheaper than FinFETs? Companies like Intel have already gone the bulk FinFET route and have gotten it to yield (finally!).
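To make the trade-off above concrete, here is a rough back-of-the-envelope sketch of the per-die cost impact. The $150 and $750 wafer prices come from the comment; the dies-per-wafer count and the dollar value of saved mask steps are purely illustrative assumptions, not real foundry data.

```python
# Back-of-the-envelope: per-die substrate cost penalty of FDSOI vs. bulk.
# Wafer prices are from the comment above; DIES_PER_WAFER and
# SAVED_MASK_VALUE are hypothetical placeholders for illustration only.

BULK_WAFER_COST = 150.0    # $ per bulk Si wafer (from the comment)
FDSOI_WAFER_COST = 750.0   # $ per FDSOI wafer (from the comment)
DIES_PER_WAFER = 500       # assumption: mid-size die on a 300 mm wafer
SAVED_MASK_VALUE = 300.0   # assumption: $/wafer recouped from fewer mask steps

raw_delta = FDSOI_WAFER_COST - BULK_WAFER_COST   # substrate premium: $600
net_delta = raw_delta - SAVED_MASK_VALUE         # premium after saved masks
per_die_penalty = net_delta / DIES_PER_WAFER     # spread across good dies

print(f"Substrate premium per wafer:   ${raw_delta:.0f}")
print(f"Net premium after saved masks: ${net_delta:.0f}")
print(f"Added cost per die:            ${per_die_penalty:.2f}")
```

Even under this generous assumption for mask savings, the substrate premium does not vanish, which is the commenter's point: the FDSOI cost argument depends heavily on how much of the $600 gap the simpler process can claw back.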
The GPU market could grow a lot with virtual reality. It is uncertain how much compute power will be required to achieve a high-end virtual reality experience, but there is most likely a lot of growth in that market, as shown by Facebook acquiring a VR company, Oculus (maker of the Rift), for $2 billion.
Quoting Mark Bohr is bad enough, but your "commonly known information" is incorrect. Samsung has 14nm and they are ahead of TSMC, so to say TSMC has no competition at 14nm is wrong. And Samsung's wafer pricing is very aggressive, so expect per-transistor cost scaling to improve.
Which is what the fabless semiconductor ecosystem is all about: competition.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of them need to be smart enough to act upon data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.