TAIPEI — Nvidia CEO Jensen Huang has become the first head of a major semiconductor company to say what academics have been suggesting for some time: Moore’s Law is dead.
Moore’s Law, named after Intel cofounder Gordon Moore, reflects his 1965 observation that transistors were shrinking so fast that the number fitting on the same area of a semiconductor doubled every year. In 1975, Moore revised the pace to a doubling every two years.
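As a back-of-the-envelope illustration (not a figure from the article), a two-year doubling cadence compounds quickly; the starting count below is a hypothetical round number:

```python
# Illustrative only: transistor-count growth under a two-year doubling cadence.
# The starting count of 1,000 is a hypothetical value, not an article figure.
def transistors_after(years, start=1_000, doubling_period=2):
    """Project a transistor count forward under Moore's Law-style doubling."""
    return start * 2 ** (years / doubling_period)

# After 20 years of doubling every two years, density has grown 2**10 = 1024x.
print(transistors_after(20))  # 1024000.0
```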
The techniques that enabled an architectural advance each generation — deeper pipelines, superscalar designs, and speculative execution — can no longer keep pace with the expected 50 percent annual increase in transistor density, Huang told a gathering of reporters and analysts at the Computex show in Taipei.
“Microprocessors no longer scale at the level of performance they used to — the end of what you would call Moore’s Law,” Huang said. “Semiconductor physics prevents us from taking Dennard scaling any further.”
Nvidia CEO Jensen Huang notes the divergence between semiconductor technology and microprocessor performance at Taipei’s Computex show.
Dennard scaling, also known as MOSFET scaling, is based on a 1974 paper co-authored by Robert H. Dennard, for whom it is named. It states, roughly, that as transistors get smaller their power density stays constant, so power use remains proportional to chip area.
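A minimal sketch of why power density stays constant under classic Dennard scaling (an illustration of the textbook relation, not data from the article): shrinking linear dimensions, voltage, and current by the same factor k cuts both area and per-transistor power by k², leaving power per unit area unchanged.

```python
# Illustrative sketch of classic Dennard scaling (textbook relation, not article data).
# All starting values are normalized to 1.0; k is the linear scaling factor (k < 1).
def dennard_scale(k, area=1.0, voltage=1.0, current=1.0):
    """Return (area, power, power_density) after scaling dimensions, V, and I by k."""
    new_area = area * k ** 2                    # both chip dimensions shrink by k
    new_power = (voltage * k) * (current * k)   # P = V * I, each scaled by k
    return new_area, new_power, new_power / new_area

area, power, density = dennard_scale(0.7)
# Power falls exactly as fast as area does, so density is unchanged.
print(density)  # 1.0
```

The breakdown Huang refers to came when threshold voltages stopped shrinking with transistor dimensions, so power no longer fell in step with area.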
The diminishing returns from Moore’s Law and Dennard scaling have seen the semiconductor industry enter a mature stage in which just a handful of chipmakers can afford the multibillion-dollar investments required to push process technology forward. Today, only a few have the deep pockets to fabricate silicon at the 16nm and 14nm nodes, design rules whose distinction has become increasingly blurred.
That technological stagnation has also driven rapid industry consolidation in recent years, resulting in a flurry of multibillion-dollar mergers and acquisitions.
Nvidia’s Huang predicts further advances to come from GPU computing.
Even so, Huang suggested a modus vivendi for the semiconductor industry that plays to graphics processors, the products Nvidia expects will enable continuing advances for years to come. Deep learning will harness the processing power of Nvidia’s GPUs as part of a new architecture that takes the company into artificial intelligence, beyond the computer gaming business it has dominated, Huang said.
The semiconductor industry is exploring a number of pathways beyond Moore’s Law. Some upstart Chinese chipmakers are taking a stake in fully depleted silicon-on-insulator (FD-SOI) technology. Others see a future in moving beyond planar designs to three-dimensional chips.
Nvidia’s bet on artificial intelligence to take the silicon industry forward is bullish, according to Randy Abrams, an analyst with Credit Suisse in Taipei.
Nvidia has highlighted its Volta GPU, built on a 12nm process with an 815mm² die, the same surface area as seven iPhone processors, connected to 16GB of high-bandwidth memory using Taiwan Semiconductor Manufacturing Co.’s (TSMC) silicon interposer technology. A configuration of eight of these chips in Nvidia’s DGX-1 deep learning and high-performance computing machine sells for $149,000.
Nvidia’s data center business has grown 186 percent year over year to a $1.7 billion run rate in the most recent quarter, Abrams said. That chunk of business is worth $500 million to TSMC, or about 1.5 percent of its total revenue. AI will need more time to replace mobile as a major driver, Abrams said.
—Alan Patterson covers the semiconductor industry for EE Times. He is based in Taiwan.