I’d also like to see a paper describing a benchmark that can clearly compare the performance per watt of today’s varied mobile graphics chips running the equally varied array of mobile graphics APIs.
Imagination Technologies holds sway in this market, but ARM’s Mali is gaining traction, and Nvidia and Qualcomm have strong positions, as do a couple of other merchant providers.
The architectures are so diverse that experts warn you cannot simply count cores to determine which is best. But it’s the APIs that really muddy the water here.
There’s OpenCL from Khronos, CUDA from Nvidia, C++ AMP from Microsoft, River Trail from Intel, Aparapi from AMD, RenderScript from Google, and OpenACC, a directive-based relative of OpenMP. Too many ports in a storm for me!
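To make the fragmentation concrete, here’s the same trivial vector-add written twice, once in OpenCL C and once in CUDA C. This is a sketch for comparison only; the kernel name `vadd` is my own, and nothing here is drawn from any vendor’s benchmark suite.

```python
# The same three-line vector-add, expressed in two of the competing APIs.
# The kernel sources are held as strings; actually compiling them requires
# the respective vendor toolchains, which this sketch does not assume.

OPENCL_VADD = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int i = get_global_id(0);   /* work-item index */
    c[i] = a[i] + b[i];
}
"""

CUDA_VADD = """
__global__ void vadd(const float *a, const float *b, float *c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* thread index */
    c[i] = a[i] + b[i];
}
"""
```

Even for three lines of arithmetic, the qualifiers, index calculation, and launch model all differ, which is exactly what makes a fair cross-API, cross-chip benchmark so hard to build.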
Along a similar line, I’d love to see a paper that provides metrics for the different approaches to dynamic voltage and frequency scaling. Intel CPUs have been doing this forever. Nvidia was the first to show it in the ARM camp with its Tegra 3. More recently, ARM rolled out its big.LITTLE approach for doing it across pairs of big and little cores.
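On Linux, the kernel’s cpufreq subsystem exposes the current DVFS operating point through sysfs, which is one way such a paper could sample frequency over time. The helper below is a minimal sketch, assuming a kernel with cpufreq enabled; the sysfs path is the standard cpufreq node, but the function names are mine.

```python
from pathlib import Path

def parse_khz(text: str) -> int:
    """cpufreq sysfs nodes report frequency in kHz as plain decimal text."""
    return int(text.strip())

def current_freq_mhz(cpu: int = 0) -> float:
    """Read the current DVFS frequency of one core, in MHz."""
    node = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_cur_freq")
    return parse_khz(node.read_text()) / 1000.0
```

Polling that node while running a fixed workload, alongside a power rail measurement, would give the raw data for comparing governors across chips.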
I’d love to see a paper comparing the MHz per joule of Samsung’s Exynos 5 Octa, Tegra 3, and, say, Intel’s upcoming 22nm Bay Trail SoC with the Silvermont Atom core. That would indeed be a cutting-edge analysis.
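Taking “MHz per joule” loosely as cycles delivered per joule of energy, the figure of merit is just frequency divided by power draw, since both are per-second rates. The numbers below are made up for illustration and are not measurements of any of these chips.

```python
def cycles_per_joule(freq_mhz: float, power_watts: float) -> float:
    """Cycles per joule: (cycles/second) / (joules/second)."""
    return freq_mhz * 1e6 / power_watts

# Hypothetical operating points, NOT measured data for any real SoC:
# a core at 1600 MHz drawing 2 W delivers 8e8 cycles per joule.
print(cycles_per_joule(1600, 2.0))  # 800000000.0
```

Of course, cycles are not work: the whole point of such a paper would be that an ARM cycle, a MIPS cycle, and an Atom cycle retire different amounts of useful computation.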
MIPS has been shipping 64-bit parts (multi-core and so on) for years, so why will ARM be successful when no silicon is on the horizon for two years? AMCC is about a year later than initially promised. With a 64-bit SoC, power is important, but why would you need 64 bits for mobile? So much other stuff is eating power that the core might account for less than 10%, so why choose ARM? Freescale has a 64-bit multi-core PowerPC SoC and is looking at ARM for the A50 series. There are too many architectural differences within the ARM (and MIPS) camps, and architectural details still aren't readily available either.
I'm interested in Steve Furber's SpiNNaker neural simulation project. I guess they're at the hardware ramp-up stage, so they haven't yet done massive neural-net experimentation.
It's all blue-sky, "who knows where it might lead" stuff, but it could be the first steps toward "your AI buddy in your pocket".