The common wisdom these days says that the semiconductor industry is heading for a cliff. Some even say we are like the cartoon character who has already run off the edge, legs still churning, not yet realizing there is no ground underneath.
The increase in design costs, the growth in the number of IP blocks, and rising mask costs are combining to make any but the highest-volume chips seem economically unfeasible.
The pundits see a world in which microprocessors and a few consumer SoCs are the remaining dinosaurs, while all other designs head into a cul-de-sac with only a narrow passage toward the FPGA world for low-volume, price- and power-insensitive applications.
The trends are thus obvious and the outcome inevitable. Game Over.
Not so. The designers of that game have forgotten a few important parameters. The pressure for low-power solutions is now pervasive well beyond mobile applications.
Executing a given task in software on a generic processor architecture, rather than in dedicated hardware, has repeatedly been shown to be two orders of magnitude worse in cost, power, and performance.
With all due respect to embedded cores, they would have to run at impossible speeds to swallow a 5 Gbps stream, perform live video transcoding, or aim a beam-shaping antenna array.
Some would argue that multi-core is the solution, especially with dedicated cores for specific applications.
But the multi-core programming problem remains stubbornly unsolved beyond simple threading on identical cores. Nobody seems able to master the unbounded complexity of heterogeneous processing engines with different characteristics communicating over ill-characterized buses and networks-on-chip.
At the same time, the number of software engineers continues to grow while the number of hardware engineers shrinks, at least in relative terms. Yet in most cases semiconductor companies give the software away as a necessary component of a platform, hardly treating it as the valuable differentiator it is claimed to be.
While these disputes rage on, the FPGA world has been undergoing its own silent revolution. No longer seen as mere gobs of glue logic, FPGAs have emerged as an interesting implementation alternative, with power, price, and performance that let them make their way into consumer and even mobile products.
At the same time, they have also emerged as a fascinating distributed compute fabric: a regular architecture of computational elements and memories. They suddenly represent a quasi-systolic-array alternative to von Neumann processors, with a much more attractive performance, cost, and power tradeoff.
That is, if one finds a way to program the beast rather than attempting the hardware equivalent of assembly coding, i.e. RTL-level design.
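To make the "RTL as assembly" analogy concrete, consider a minimal sketch: a 16-tap FIR filter written as plain C that a high-level synthesis (HLS) tool could map onto the FPGA's regular fabric of multipliers and memories. The function name and coefficients are illustrative, and the pragma directives follow Vivado/Vitis HLS conventions as an assumption; other tools use different directives.

/* A sketch of programming the fabric above the RTL level: a 16-tap
 * FIR filter in plain C. An HLS tool can turn the two loop nests
 * below into the pipelined, systolic-style datapath that would
 * otherwise take pages of hand-written RTL. A normal C compiler
 * simply ignores the pragmas, so the same source runs on a host
 * for verification. */
#include <stdio.h>

#define TAPS 16

static const int coeff[TAPS] = {
    1, 2, 3, 4, 5, 6, 7, 8, 8, 7, 6, 5, 4, 3, 2, 1
};

/* One output sample per call; the shift register feeding a
 * multiply-accumulate chain is exactly the structure a systolic
 * FIR array implements in hardware. */
int fir(int sample)
{
    static int shift_reg[TAPS];  /* becomes a register chain in hardware */
    int acc = 0;

#pragma HLS ARRAY_PARTITION variable=shift_reg complete
#pragma HLS PIPELINE II=1        /* accept a new sample every clock cycle */

    for (int i = TAPS - 1; i > 0; i--)
        shift_reg[i] = shift_reg[i - 1];
    shift_reg[0] = sample;

    for (int i = 0; i < TAPS; i++)
        acc += shift_reg[i] * coeff[i];  /* unrolls into TAPS parallel MACs */

    return acc;
}

/* Host-side check: feed an impulse and the coefficients come back out. */
int main(void)
{
    for (int n = 0; n < TAPS; n++)
        printf("%d ", fir(n == 0 ? 1 : 0));
    printf("\n");
    return 0;
}

The point of the sketch is the ratio: two loops and two directives standing in for the clocked processes, handshakes, and state machines an RTL designer would otherwise write by hand, much as a compiler stands in for hand-scheduled assembly.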