There is plenty of spin and patent-dance-sounding talk, but the core idea seems to be multi-threading, i.e., time-slicing/serializing the logic. That can save routing and LUTs, but it cannot save config memory, and it will ADD the cost of the config-mux logic between the slice storage, plus the power impact that comes with it.
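To make that tradeoff concrete, here is a minimal back-of-the-envelope sketch (not any vendor's actual architecture; the function names, LUT size, and counts are illustrative assumptions) of time-multiplexing K logic contexts onto each physical LUT:

```python
# Hedged sketch: estimate resources when `num_functions` logic functions
# are time-sliced across physical 4-input LUTs, `contexts` per LUT.
# All names and numbers here are illustrative assumptions.

LUT_INPUTS = 4
CONFIG_BITS_PER_FUNCTION = 2 ** LUT_INPUTS  # 16 truth-table bits each

def resources(num_functions: int, contexts: int) -> dict:
    physical_luts = -(-num_functions // contexts)  # ceiling division
    return {
        # Physical LUTs (and their routing) shrink as contexts go up...
        "physical_luts": physical_luts,
        # ...but every function still needs its own truth table stored
        # somewhere, so total config memory does not shrink:
        "config_bits": num_functions * CONFIG_BITS_PER_FUNCTION,
        # ...and a context-select mux is ADDED per physical LUT:
        "context_muxes": physical_luts if contexts > 1 else 0,
    }

baseline = resources(1024, contexts=1)   # plain FPGA mapping
sliced = resources(1024, contexts=8)     # 8-way time-sliced mapping
```

Under these assumptions, `sliced` uses 1/8 the LUTs of `baseline`, but the config-bit count is identical and 128 context muxes appear that the baseline did not have.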
The real test will come down to the SW: whether it can do the 'slice and dice' needed on the code, and whether the result will be debuggable.
10 years ago there were reconfigurable FPGAs. Is there something fundamentally different about these?
I also question this phrase (spin?):
"We don't ask customers to do anything fundamentally different than they've done before,"
Please define "fundamentally different". Just coding for configurability is fundamentally different.