SUNNYVALE, Calif. The accelerating transition to multicore processors poses the next major challenge to systems developers, according to Justin Rattner, an Intel senior fellow and chief technology officer.
In a keynote presentation at this week's IEEE Hot Chips Conference at Stanford University, Rattner noted that designers must deal with complex memory hierarchies and sophisticated on-chip interconnect fabrics to ensure the cores are not data-starved. At the same time, the processor must provide explicit thread support and handle time-critical functions, as well as include fixed-function accelerators.
Rattner said there is no quantitative benchmark for measuring multicore processor performance in areas such as data mining, recognition and synthesis, or in attributes such as scalability and energy efficiency. As one illustration of performance versus scalability, he compared the system throughput of the Gaston and gSpan data mining algorithms as the number of cores increased.
On a four-core system, the Gaston algorithm delivers five times the throughput of the gSpan algorithm, but its performance falls off as the number of cores grows beyond four. The gSpan algorithm is more scalable and outperforms Gaston at higher core counts.
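The trade-off Rattner described can be sketched with a simple Amdahl's-law model: an algorithm that is faster per core but has a larger serial fraction wins at low core counts, while a slower but more parallel algorithm overtakes it as cores are added. The parameters below are hypothetical illustrations, not Intel's measured data, and a pure Amdahl model shows only diminishing returns rather than the absolute slowdown Rattner reported for Gaston.

```python
def throughput(base, parallel_fraction, cores):
    """Amdahl's-law throughput: single-core rate scaled by achievable speedup.

    base              -- hypothetical single-core throughput (arbitrary units)
    parallel_fraction -- fraction of the work that parallelizes across cores
    """
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return base * speedup

# Assumed, illustrative parameters: "fast_serial" is quicker per core but
# only 70% parallel; "slow_parallel" is slower per core but 99% parallel.
for cores in (1, 4, 16, 64):
    fast_serial = throughput(base=5.0, parallel_fraction=0.70, cores=cores)
    slow_parallel = throughput(base=1.0, parallel_fraction=0.99, cores=cores)
    print(f"{cores:3d} cores: fast_serial={fast_serial:6.2f} "
          f"slow_parallel={slow_parallel:6.2f}")
```

At four cores the less-parallel algorithm still leads on raw throughput, but by 64 cores the more-parallel one has overtaken it, mirroring the qualitative crossover between Gaston and gSpan.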
But algorithms that can leverage the cores and hardware threading are only the starting point, Rattner noted. Improving the system's cache architecture can boost throughput by a further factor of two, he added, and tuning the instruction set enables designers to improve throughput still more.
Rattner expects workloads such as recognition, mining and synthesis to drive multicore benchmarks. He noted that Intel is working with organizations such as Princeton University, the University of Pittsburgh, the University of California at Berkeley, and Stanford to create a public RMS (recognition, mining and synthesis) suite that can be used to benchmark multicore systems.