Amid an unprecedented proliferation of process nodes, the industry needs good public benchmarks to compare semiconductor process technologies.
For some time now, foundries have named their latest process nodes based on their desired market positioning more than any transparent benchmark. It’s time the shenanigans stop.
Intel recently proposed a simple but somewhat self-serving density metric. The response from rival foundries was a deafening silence. I suspect that Intel has an edge in transistor density, something its competitors don’t want to admit.
Intel deserves praise for its recent decision to reveal metrics such as fin pitches and heights and minimum metal and gate pitches on its 10-nm node, which has not yet started production. These are the kinds of basic details that all foundries should supply when they first announce a new node.
However, such metrics and the transistor density that can be derived from them are only part of the story. If you can’t deliver transistors that support significantly higher speeds or lower power consumption, it doesn’t matter how many of them you can deliver.
Back in 2009, ARM’s chief technologist, Mike Muller, coined the term dark silicon. Engineers are packing more transistors on a die but lack the power budget to turn many of them on, he observed.
Intel’s density metric is good as far as it goes, but without related formulas for performance and power, it fails to tell the whole story. Over the last few years, the industry has embraced the idea of measuring a process node by PPA — performance, power, and area.
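Intel’s proposed metric is a weighted transistor density: the transistor count of a small NAND2 cell divided by its area, blended with the same ratio for a larger scan flip-flop cell, with a 0.6/0.4 weighting. The sketch below illustrates the arithmetic; the cell counts and areas in the example are illustrative placeholders, not real process data.

```python
def intel_density_mtr_per_mm2(nand2_transistors, nand2_area_um2,
                              sff_transistors, sff_area_um2):
    """Weighted transistor density per Intel's 2017 proposal.

    Each term is transistors per square micron, which is numerically
    equal to millions of transistors per square millimeter (MTr/mm^2),
    since 1 mm^2 = 1e6 um^2.
    """
    nand2_density = nand2_transistors / nand2_area_um2  # small, dense cell
    sff_density = sff_transistors / sff_area_um2        # large, complex cell
    # 0.6/0.4 weighting of the two cell types, per the proposal
    return 0.6 * nand2_density + 0.4 * sff_density

# Illustrative numbers only (not any foundry's actual cells):
# a 4-transistor NAND2 at 0.05 um^2 and a 20-transistor scan
# flip-flop at 0.4 um^2.
density = intel_density_mtr_per_mm2(4, 0.05, 20, 0.4)  # MTr/mm^2
```

The weighting is the point of the formula: it rewards dense logic cells without letting a single tiny cell, atypical of real designs, dominate the score.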
The consulting firm International Business Strategies (IBS) suggests using a metric of “effective gates … [that] takes into account available gates, [what Intel’s metric reports, as well as] gate utilization and yields,” said IBS chief executive Handel Jones. “While available gates are useful … it is only the tip of the iceberg.”
The trouble is, “companies are very secretive regarding yields … but cost per gate … is the critical metric and is impacted by D0 as well as parametric and systemic yields” and how long it takes for customers to get chips, he added.
Several EE Times readers provided their thoughts on a metric in comments on our story on Intel’s proposal.
One reader called for a measure of RC time delay in nanoseconds per mm² “if we use capacitance and resistance per unit length.” Another reader suggested that Intel’s metric is not useful because it does not include information on standard cell tracks.
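The RC-delay suggestion above can be sketched with the standard Elmore approximation for a distributed wire: delay ≈ 0.5 × R × C, where both total resistance and total capacitance grow with wire length, so delay grows with the square of length. The per-length values in the example are illustrative placeholders, not figures for any real process.

```python
def wire_rc_delay_s(r_per_m, c_per_m, length_m):
    """Elmore delay (seconds) of a distributed RC wire.

    r_per_m: resistance per unit length (ohms/m)
    c_per_m: capacitance per unit length (farads/m)
    Total R and total C each scale with length, so delay ~ length^2.
    """
    total_r = r_per_m * length_m
    total_c = c_per_m * length_m
    return 0.5 * total_r * total_c

# Illustrative only: doubling the wire length quadruples the delay,
# which is why shrinking nodes with worse per-length RC can lose
# the density gains to interconnect delay.
short = wire_rc_delay_s(1e5, 2e-10, 1e-3)  # 1 mm wire
long = wire_rc_delay_s(1e5, 2e-10, 2e-3)   # 2 mm wire
```

This quadratic scaling is the reason per-unit-length R and C, and not just cell density, matter in comparing nodes.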
Another reader said, “The real measure is how well the process plus the libraries work together across a range of design types [and] performance criteria. I would like to see Intel libraries and process vs. ARM/TSMC libraries [and process] compared on real synthesizable designs like a large ARM core complex/SoC and a large x86 core, targeted at the same clock rates.”
Several readers agreed that the final proof of the pudding arrives long after the node itself, in the competitive stats and sales of chips made in the process. They also observed that TSMC and Samsung are ramping chips in what they call 7-nm processes this year as Intel ramps what it calls its 10-nm node.
Analyst Linley Gwennap said that Intel’s density metric “must be combined with SRAM cell size to get a full picture of SoC size, particularly for processors that include large amounts of memory” and called on foundries to provide more data on their process nodes.
IBS suggests that chip designers need to consider a table of metrics. (Source: IBS)