Moore's Law dates to 1965, when Gordon Moore, who went on to co-found Intel Corp., observed that die geometries shrink at a pace that doubles the number of transistors available on a given die size roughly every two years. A corollary is that clock speeds rise and performance gains compound. With the advent of the microprocessor in the early 1970s, followed by the arrival of the digital signal processor, chip designers had to work with an interesting design constraint: they could put only a small, limited number of gates on a die. Their task, therefore, was to develop architectures that used few gates efficiently, and this they did.
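The two-year doubling described above is a simple exponential, which can be sketched as follows. The baseline figures (the 1971 Intel 4004 and its roughly 2,300 transistors) are illustrative assumptions, not numbers from this article:

```python
def transistor_count(year, base_year=1971, base_count=2300):
    """Projected transistor count under a two-year doubling period.

    base_year and base_count default to the Intel 4004 (1971,
    ~2,300 transistors) purely for illustration.
    """
    return base_count * 2 ** ((year - base_year) / 2)

# Two years after the baseline, the count has doubled.
print(transistor_count(1973))  # -> 4600.0
```

Nothing in the sketch depends on the particular baseline; any starting year and count give the same doubling behavior.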
Over time, more and more gates became available to these traditional chip designers. Armed with the extra gates, they enhanced their microprocessor and DSP designs with larger on-chip caches, more registers, and wider data and instruction buses. At the same time, more complex instruction encoding was used to reduce execution to a single clock cycle per instruction, and then to increase the number of instructions executed simultaneously.
These chip engineers designed architectures (microprocessor, DSP and the like) that mated well with the limited number of gates available at the time. From the outset, however, this architectural approach has carried a perennial trade-off: system designers continue to use processor architectures that were originally devised to make efficient use of scarce silicon.
The basis of this penalty is that, at any given time, only a small percentage of a chip's total gates are actually working on the problem at hand. Generally, only about 5 percent of the gates in a processor are used to perform a given task; the rest are overhead needed to keep that small number of "working" gates operational.