Once upon a time in the CPU world, a fierce, intractable debate emerged between two camps of processor architects. The issue was a simple division problem: Instructions per second equals clock frequency divided by average clocks per instruction.
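The division the column describes can be sketched in a few lines. This is a minimal illustration, not anything from either camp's design docs; the clock frequencies and CPI figures below are made-up numbers chosen only to show how the two strategies trade off.

```python
def instructions_per_second(clock_hz: float, avg_clocks_per_instruction: float) -> float:
    """The column's division: throughput = clock frequency / average CPI."""
    return clock_hz / avg_clocks_per_instruction

# Illustrative, invented numbers: the "fast clock" camp vs. the "low CPI" camp.
fast_clock  = instructions_per_second(600e6, 1.5)  # 600 MHz, CPI 1.5
low_cpi     = instructions_per_second(300e6, 0.5)  # 300 MHz superscalar, CPI 0.5
print(f"{fast_clock / 1e6:.0f} MIPS vs. {low_cpi / 1e6:.0f} MIPS")
```

Either raising the numerator or shrinking the denominator improves throughput; the argument was over which lever costs less to pull.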
One camp, led valiantly by the Alpha processor designers at the late, lamented Digital Equipment Corp., insisted that the way to increase throughput was to keep the processor as simple as possible and raise the clock frequency. To that end, they built elegantly simple RISC engines with clock circuits that suspiciously resembled arc welders.
The other camp, including such powerhouses as Intel, argued that the way to increase performance was to decrease the average number of cycles per instruction. At first, that meant getting the ratio as close to one instruction per clock as possible. Later, as the argument matured, it meant aggressive superscalar designs that dispatched and retired multiple instructions per clock.
History records that the latter camp won. Digital Equipment is no more, and the Alpha is no more. High-end CPUs these days are virtually all multiple-dispatch monsters with tons of execution units and elaborate hardware for resource allocation, speculative execution and branch prediction. Even embedded CPUs, the stepchildren of processor architecture, are adopting these features, within the limitations imposed by strict power budgets and the fact that most of them have to be synthesizable.
But maybe the argument isn't over. Recently, Intel shot its most advanced complex-CPU design efforts in the head. The reason? Energy consumption. Reportedly, the company concluded it could go no further in driving both clock frequency and complexity continually higher, so it decided instead to pursue multiple, simpler processors on a die.
One problem was the dynamic power consumption of hugely complex CPUs. Another was timing closure on the many critical paths in the control logic for a vast superscalar core. In the presence of process variations and large swings in die temperature, due to that huge power consumption, Intel's task may have become impossible.
Another issue is leakage current on critical paths; it rises with the number of transistors, independent of activity. In the future, "fast" once again may mean "simple."
Ron Wilson covers microprocessors, programmable and reconfigurable logic, and the chip design process. He can be reached at firstname.lastname@example.org.