There are only 3 ways to perform a computation faster*:
1) faster technology/clock rate,
2) do independent parts in parallel and combine,
3) use a better algorithm.
Assume you have a computation with parts that can be done in parallel (e.g., the "map" of a map-reduce) and sequential parts that can't (e.g., the final "reduce" step of a map-reduce). Let's say 1/10 of the total time is spent in the serial part. Even if you have an infinite number of processors available and can drive the parallel parts to zero time, you can't get more than a 10X speedup; this is Amdahl's law. You also need a way to speed up the sequential part.
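That ceiling is easy to see numerically. A minimal sketch in Python (the function name is mine):

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Overall speedup when only the parallel part scales (Amdahl's law)."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# With 1/10 of the time in the serial part:
print(amdahl_speedup(0.1, 10))      # ~5.26
print(amdahl_speedup(0.1, 1000))    # ~9.91
print(amdahl_speedup(0.1, 10**9))   # approaches, but never reaches, 10X
```

Note how quickly the returns diminish: going from 10 processors to 1,000 doesn't even double the speedup.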
Assume you have a program that *might* be parallelizable, but nobody has yet gotten around to parallelizing it, or doing so would be hard enough that the programmer time spent would not justify the computing time saved over the number of runs made. Even with infinite CPUs available, such a program still runs 1X as fast as on a uniprocessor.
An IBM POWER8 or Oracle SPARC M6 CPU is supposed to run sequential code faster than competing CPUs can run it. Sometimes that's important enough to spend whatever it costs to get even a few percentage points of improvement over the next-best thing.
* Per Professor David Kuck of the University of Illinois, more recently at Intel, a pioneer in parallel processing. He probably still offers a bounty for a valid fourth method that isn't a combination of the other three.
It is becoming more interesting to watch end products like Google Glass and other new gadgets. Even though the fundamental technologies are so important, the end device is much cooler. I'm looking for a cool-devices conference.