Nvidia has a compute model to push. of course they'll claim it's better than the alternatives, but is there any reason we have to keep seeing these claims parroted in the media without support?
the GPU model is inherently data-parallel. it's ideal for certain kinds of computation, but clearly not all. the key question is how much independent computation a thread performs before it needs to interact with other threads, or to operate on general-purpose memory. if threads run basically in lockstep, each operating on its own state, you will be happy with GPUs. if not - if your dataflow is more complicated, if your computation is more conditional, if your problem is so large it needs many machines - then GPUs simply won't fit, and you'll use MPI. whether the MPI runs on BG-Q, the K computer or Intel MIC doesn't matter that much.
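to make that concrete, here's a minimal CUDA sketch (the kernel names and toy data are mine, purely illustrative): the first kernel is the lockstep-happy case, every thread doing the same arithmetic on its own element; the second has a data-dependent branch, the kind of conditional computation that makes warps diverge.

    // minimal sketch of the lockstep vs. divergent contrast; illustrative only
    #include <cstdio>
    #include <cuda_runtime.h>

    // lockstep-friendly: each thread does identical arithmetic on its own
    // element, with no inter-thread communication. this is the case where
    // "you will be happy with GPUs."
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    // divergence-heavy: threads in the same warp take different branches
    // depending on their data, so the hardware serializes the two paths
    // and part of the warp idles on each. the more conditional the
    // computation, the worse the utilization.
    __global__ void divergent(int n, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (x[i] > 0.0f)          // data-dependent branch: the warp splits here
            y[i] = expf(x[i]);    // some lanes run this path...
        else
            y[i] = 0.0f;          // ...while the rest wait, then they swap
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = (i % 2) ? 1.0f : -1.0f; y[i] = 1.0f; }

        int threads = 256, blocks = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(n, 2.0f, x, y);   // maps cleanly onto the hardware
        divergent<<<blocks, threads>>>(n, x, y);     // pays a divergence penalty
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);
        cudaFree(x); cudaFree(y);
        return 0;
    }

the divergent kernel still runs correctly, it just wastes lanes while the two branch paths execute one after the other; scale that branching up and the GPU's advantage evaporates, which is the point.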
there is no future in which GPUs totally win, since what defines them is their restrictive data/compute model. an interesting question is whether such a restrictive model is necessary to achieve power efficiency (and MIC is nothing less than Intel's bet against that proposition).
Drones are, in essence, flying autonomous vehicles. The pros and cons being argued over drones today may well foreshadow the debate over the development of self-driving cars. In the context of a strongly regulated aviation industry, "self-flying" drones pose a fresh challenge. How safe is it to fly drones in different environments? Should drone operators be required to maintain visual line of sight, as pilots of manned aircraft are? Join EE Times' Junko Yoshida as she moderates a panel of drone experts.