
Foundries Need Clear Benchmarks

It's been 17 years and counting...
srikhi   4/18/2017 2:18:06 PM

It has been seventeen years since the industry crossed the 100nm line and landed in the two-digit-nm nodes. We are now poised to leave that two-digit era and enter a single-digit-nm phase. During this time, the industry has been fascinated by two questions: "When is Moore's Law going to hit the wall?" and "Who is ahead?"

These two questions have not been answered to anyone's satisfaction in all this time despite the millions of words devoted to this topic by thought leaders year after year and node after node.

These questions are not answerable. 

The industry would do well to stop asking (and pretending to answer) such questions. Instead of "When is Moore's Law going to hit the wall?", a better question is "What are the current and predicted challenges in scaling?". Instead of "Who is ahead?", a better question is "Who are the strong players in different applications of the technology?".

Here is one explanation of the deafening silence you talk about. There is a wide range of optimizations and tradeoffs in each technology offering. The variables are many, and they are interdependent. Design teams in fabless companies have honed their methods for evaluating how good a technology offering is for their applications. They also negotiate with their foundry partners to optimize for their product application. The foundry gives them all the data they need to do such an evaluation. Why should, or would, the foundry or the fabless company share all that publicly?


Re: A list of key stats rather than a metric?
IanD   4/12/2017 5:18:41 AM
The problem with a single metric for performance and power is that both are a strong function of supply voltage as well as the chosen library (size and Vth). An example of this is that when foundries quote a PPA improvement from one node to the next, they often include a drop in nominal Vdd and a reduction in library size (number of tracks) -- is this really a "process" power saving or not?

If you're using the standard operating voltage and library then it is as far as the end user is concerned; if you're already using a shrunk library and lower voltage to optimise power then it isn't, and you won't get the headline power savings.
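To put rough numbers on that point: dynamic power scales roughly as C·Vdd²·f, so a modest supply drop alone accounts for a large slice of a headline saving. A first-order sketch in Python (the voltages and the 35% headline figure are made-up illustrations, not any foundry's data):

```python
# Sketch (assumed first-order model): how much of a quoted power saving
# comes from the Vdd drop alone, versus the raw process.
# Dynamic power scales roughly as C * Vdd^2 * f.

def dynamic_power_ratio(vdd_new: float, vdd_old: float) -> float:
    """Power ratio attributable purely to a supply-voltage change,
    holding switched capacitance and frequency constant."""
    return (vdd_new / vdd_old) ** 2

# Illustrative numbers: a node transition quoted with Vdd dropping
# from 0.80 V to 0.70 V.
ratio = dynamic_power_ratio(0.70, 0.80)   # ~0.77
voltage_only_saving = 1.0 - ratio         # ~23% saving from Vdd alone

# If the headline claim is "35% lower power", the raw-process share is
# what remains after removing the voltage contribution:
headline_ratio = 1.0 - 0.35
process_only_ratio = headline_ratio / ratio   # ~0.85, i.e. only ~15%
```

If you are already running at the lower voltage, only the ~15% process share materializes, which is exactly why the headline number is misleading.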

The only way round this would be for foundries to separate out PPA improvements due to raw process and operating voltage and libraries, but they don't want to do this, partly because it would make the raw process change (and higher price) look much less attractive and lead to awkward customers asking "Why can't I have the new smaller lower power libraries in the old cheaper process?" -- which is what happens in reality with the "improved" processes like 16FFC, 12FFC and so on.

Why do you think that so many of the publicity graphs are missing things like axis labels, and the PPA comparisons ("65% power saving!!!") don't mention operating voltage and library size changes?

The detailed data to do "fair" process/library comparisons exists and could be made available, but doing so simply isn't in the interests of either the foundries or their biggest customers -- a cynical view maybe, but the way the industry works...

Pure media issue
sciing   4/12/2017 2:45:27 AM
As a designer you download the PDK and get all the information you need to benchmark the processes. It is a long, difficult story, and it will not end in the single number the media wants. The media could not even provide some basic metrics in the last year to give hints for a fair comparison. You cannot fold SRAM sizes, CPP, Mx pitch, track height, and simple design rules like forbidden SDB or preferred metal directions into a single number. That will always fail. After all, keep in mind that there has been no litho progress since the 45nm technology, so every nanometer of pitch reduction is paid for with additional masks and process steps. In the end only cost matters, an aspect the media leaves completely uncovered.

A list of key stats rather than a metric?
rick merritt   4/11/2017 5:44:37 PM
Thanks for these thoughtful responses...and keep them coming!

Clearly no single benchmark will cover the waterfront.

What I wonder is:

1) Is there a suite of, say, three benchmarks, each focused on giving a clear sense of one key care-about: one for performance, one for power, and one for area?

2) Is there a minimum list of key measurable features foundries could reasonably be expected to deliver, so that potential users could plug them into their own calculations and get a clear sense of the node's relative strengths and weaknesses for their intended design?
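On question 2, such a minimum list could be as simple as a handful of published numbers per node. A sketch of what that record might look like (the field names and all values are hypothetical placeholders, not real foundry data):

```python
# Sketch: a minimal, machine-readable "key stats" record a foundry
# could publish per node. All values below are invented placeholders.

from dataclasses import dataclass

@dataclass
class NodeStats:
    name: str
    cpp_nm: float           # contacted poly (gate) pitch
    mmp_nm: float           # minimum metal pitch
    track_height: float     # standard-cell height in metal tracks
    sram_bitcell_um2: float # high-density SRAM bitcell area
    nominal_vdd: float      # nominal supply voltage

    def cell_footprint_nm2(self) -> float:
        """Naive area proxy: CPP x (tracks x MMP). A real evaluation
        would also need library, Vth, and routing data."""
        return self.cpp_nm * (self.track_height * self.mmp_nm)

# Two hypothetical offerings a design team might compare:
dense = NodeStats("fabA-7", cpp_nm=57, mmp_nm=40, track_height=6.0,
                  sram_bitcell_um2=0.027, nominal_vdd=0.75)
fast = NodeStats("fabB-7", cpp_nm=64, mmp_nm=44, track_height=7.5,
                 sram_bitcell_um2=0.031, nominal_vdd=0.80)

# Which node "wins" depends on which proxy you compute from these
# fields -- which is why a fixed list beats a single opaque metric.
```

Design teams already build exactly this kind of record from the PDK; the question is only whether foundries would publish the short version.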



apaDAV   4/11/2017 2:01:03 PM
Certainly the various pitches have no meaning, as the processes are too different; the comparison must therefore be based on outcomes:
price per gate, price per bit, power vs. clock at extrapolated full utilization, and maximum clock rate at a specified utilization and twice the (specified) maximum power density.
And just for kicks and marketing I would like to see the ring oscillator / inverter chain frequency, even though it would melt the chip.
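For reference, that ring-oscillator figure follows from first principles: an odd-length inverter ring oscillates at f = 1/(2·N·t_pd). A quick sketch (the 5 ps per-stage delay is an assumed illustrative value, not a measured one):

```python
# Sketch: ring-oscillator frequency from first principles. An N-stage
# (N odd) inverter ring oscillates at f = 1 / (2 * N * t_pd), where
# t_pd is the propagation delay of one inverter.

def ring_osc_freq_hz(n_stages: int, t_pd_s: float) -> float:
    if n_stages % 2 == 0:
        raise ValueError("a ring oscillator needs an odd stage count")
    return 1.0 / (2 * n_stages * t_pd_s)

# Illustrative: 101 stages at an assumed 5 ps per-stage delay.
f = ring_osc_freq_hz(101, 5e-12)   # ~0.99 GHz
```

The per-stage delay is exactly the number a "fair" process comparison would want, which is why marketing prefers to quote it only under the most favorable voltage and load.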

Re: Metrics, schmetrics
IanD   4/11/2017 6:07:03 AM
The problem is that what matters to end users is not the raw process but what it delivers for their design, which is also driven by the cell libraries -- and both are strongly affected by what the target market is for which everything (process and libraries) has been optimised.

A headline-busting transistor (or gate) density reached by squashing MMP (minimum metal pitch) and especially CPP (contacted poly pitch) down to the minimum with a small-height library can mean gates that are difficult to connect to, have routing congestion problems, and have high access resistance and parasitic capacitance, which slows them down and increases dynamic power. But for lower-speed chips where die size/cost is the biggest driver, this works.

Where speed or power efficiency matters more than density/die cost it's better to have a taller less congested library, and not push CPP to the minimum to give lower access resistance and capacitance, even though the headline gate density is then lower -- this is the choice that Intel made with 14++. For higher-speed chips where speed/power matters more than die cost (higher margins) this works.

On top of this there is the effect of multiple Vth options, choice of supply voltage, 1D/2D routing strategy, MEOL strategy to help with transistor access, self-aligned contacts or contacts over active gate to allow a smaller usable CPP, and number of metal layers -- all of which may improve performance and/or reduce die size but increase wafer cost...

In the end this makes it very difficult to say which is the "best" process (including libraries), especially if one is optimised for low-cost high-density (e.g. Samsung), one for high performance (e.g. Intel), and one for a good tradeoff of both (e.g. TSMC).

You could say that for "most advanced" process MMP*CPP is the basic process geometry benchmark which shows how difficult it will be to make -- which is fine until this is pushed too far too soon and leads to yield problems, which is what seems to have happened with Intel's 14nm process. In this case you might not want to be the guinea-pig who is first to use the "most advanced" process...
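That MMP×CPP yardstick is easy to compute from published pitch figures. A sketch using approximate, widely reported numbers (treat them as illustrative, not authoritative):

```python
# Sketch: the MMP*CPP geometry benchmark described above. Pitch values
# are approximate, widely reported figures -- illustrative only.

nodes = {
    "Intel 14nm": {"cpp_nm": 70, "mmp_nm": 52},
    "TSMC 16FF":  {"cpp_nm": 90, "mmp_nm": 64},
    "Intel 10nm": {"cpp_nm": 54, "mmp_nm": 36},
    "TSMC 7nm":   {"cpp_nm": 57, "mmp_nm": 40},
}

def geometry_metric(cpp_nm: float, mmp_nm: float) -> float:
    """Smaller MMP*CPP = tighter geometry = harder to manufacture."""
    return cpp_nm * mmp_nm

# Tightest geometry first: "most advanced" by this single yardstick.
ranked = sorted(nodes, key=lambda n: geometry_metric(**nodes[n]))
```

Note what the metric does and does not capture: it ranks manufacturing difficulty, but says nothing about yield risk, library quality, or wafer cost -- the guinea-pig problem remains.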

Metrics, schmetrics
Left5   4/11/2017 5:38:58 AM
You can have all the fancy formulas that you want, but at the end of the day if the foundry decides not to share the required information, then we are back to using node names as before.

Intel made a big deal about their transistor density metric, but then doesn't give figures for node variants when it doesn't suit them. For their 14nm++ node, gate pitch is 20% larger than on regular 14nm. Density is clearly affected, but Intel does not give a transistor density figure for 14nm++.
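For context, Intel's published metric (Mark Bohr, 2017) is a weighted sum over two standard cells: 60% weight on a 2-input NAND cell's transistor density and 40% on a scan flip-flop's. A sketch (the cell areas and transistor counts below are hypothetical placeholders, not Intel's actual cells):

```python
# Sketch of Intel's weighted-cell transistor-density metric:
# density = 0.6 * (NAND2 transistors / NAND2 area)
#         + 0.4 * (scan-FF transistors / scan-FF area)

def intel_density_mtr_per_mm2(nand2_xtors: int, nand2_area_um2: float,
                              sff_xtors: int, sff_area_um2: float) -> float:
    """Weighted transistor density in MTr/mm^2.
    Note: 1 transistor per um^2 equals 1 million per mm^2, so the
    per-um^2 figure is numerically the MTr/mm^2 figure."""
    nand2_density = nand2_xtors / nand2_area_um2
    sff_density = sff_xtors / sff_area_um2
    return 0.6 * nand2_density + 0.4 * sff_density

# Hypothetical cells: a 4-transistor NAND2 in 0.1 um^2 and a
# 30-transistor scan flip-flop in 1.0 um^2.
density = intel_density_mtr_per_mm2(4, 0.1, 30, 1.0)   # 36.0 MTr/mm^2
```

The point stands either way: a metric like this is only useful if the vendor reports it consistently, including for the "+" and "++" variants where gate pitch was relaxed.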

realjjj   4/10/2017 1:51:15 PM
Since design costs are prohibitive on these nodes, it becomes an important knob for foundries.
