Nvidia has a compute model to push. of course they'll claim it's better than the alternatives, but is there any reason we have to keep seeing these claims parroted in the media without support?
the GPU model is inherently data-parallel: ideal for certain kinds of computation, but clearly not all. the key question is how much independent computation a thread performs before it needs to interact with other threads, or to operate on general-purpose memory. if threads run in near-lockstep, each on its own slice of the data, you'll be happy with GPUs. if not - if your dataflow is more complicated, your computation more conditional, or your problem so large it spans many machines - then GPUs simply won't suit, and you'll use MPI. whether the MPI runs on BG-Q, the K machine or Intel MIC doesn't matter that much.
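to make the lockstep-vs-dependent distinction concrete, here's a minimal sketch in plain python (function names are my own, purely illustrative - plain lists stand in for GPU threads):

```python
# GPU-friendly work: every "thread" i reads and writes only its own
# element, in lockstep, with no cross-thread communication. each index
# is independent, so this maps directly onto GPU threads.
def saxpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

# GPU-hostile work: each step depends on the previous result, so the
# computation is a serial dependency chain. splitting it across threads
# or machines forces synchronization and communication - the pattern
# that pushes you toward MPI-style message passing instead.
def running_sum(x):
    out, acc = [], 0
    for xi in x:  # out[i] needs out[i-1]; no independent lockstep work
        acc += xi
        out.append(acc)
    return out
```

the first function is trivially parallel at every index; the second has essentially zero independent work per element, which is the axis the paragraph above is describing.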
there is no future in which GPUs totally win, since what defines them is their restrictive data/compute model. the interesting question is whether such a restrictive model is necessary to achieve power efficiency (MIC is nothing less than Intel's bet against that proposition).