At TSMC's 28nm process and beyond, as nodes continue to scale, achieving target yields can be extremely challenging. Nowhere is this truer than for memory circuits, which aggressively adopt bleeding-edge process nodes to meet ever-tighter performance specifications and higher levels of integration.
This article reviews the challenges raised by process variation, particularly for memory with its high-sigma components. It then discusses an approach that addresses variation with accurate statistical MOS modeling, plus the ability to analyze billions of Monte Carlo samples in minutes. This solution is now in place and rapidly gaining adoption.

Yield and performance risks with variation in advanced process nodes

The core reason for poor yield in memory is process variation. The chips that roll out of manufacturing do not perform like the ideal, nominal versions simulated during design, and chips that miss parametric yield targets cannot be used. Process variation comes in many forms, such as random dopant fluctuations, variation in gate oxide thickness, and line-edge roughness. But the effect is the same: these random physical variations translate into variation in electrical device characteristics such as threshold voltage and transconductance. Device-level variation in turn translates into variation in circuit performance, such as power consumption, read current in a bitcell, or voltage offset in a sense amp. Circuit performance variation then becomes chip performance variation, causing yield loss.
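To make that chain concrete, here is a minimal Monte Carlo sketch of the propagation from a random threshold-voltage shift to a bitcell read-current failure. The square-law device model and every parameter value are invented purely for illustration; a real flow would use foundry statistical MOS models and SPICE-level simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model only: propagate random threshold-voltage (Vt) variation
# to bitcell read current. All parameter values are invented for
# illustration, not taken from any real process.
N = 10_000_000          # Monte Carlo samples
VT_NOM = 0.45           # nominal threshold voltage (V)
VT_SIGMA = 0.03         # Vt std. dev., e.g. from dopant fluctuation (V)
VDD = 1.0               # supply voltage (V)
K = 500e-6              # square-law current factor (A/V^2)
I_READ_MIN = 100e-6     # minimum read current the sense amp resolves (A)

vt = rng.normal(VT_NOM, VT_SIGMA, N)          # device-level variation
i_read = K * np.maximum(VDD - vt, 0.0) ** 2   # circuit-level effect
fail_rate = np.count_nonzero(i_read < I_READ_MIN) / N

print(f"estimated bitcell failure rate: {fail_rate:.1e}")
```

Even in this toy version the pattern is visible: a few tens of millivolts of random Vt spread produces a small but nonzero tail of bitcells whose read current falls below what the sense amp needs.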
The reason that variation is such an issue at 28nm and below is that device dimensions are approaching the same order of magnitude as the atoms themselves. We used to have Avogadro-scale counts of atoms in a device; now those counts are in the thousands. The gate oxide is down to just a few atoms thick, so even one or a few atoms out of place can cause performance variation of 20% to 50% or more.
Memory market drivers

There is extremely strong market pressure for memories with higher capacity and higher speed that do not consume more power or volume. Memory scaling has been so aggressive and successful that new applications have emerged and consumer expectations are high. The ideal smartphone or tablet would store an entire library of movies and have giant caches for mobile web browsing and navigation.
Consumers are replacing “old school” hard drives in laptops and ultrabooks with memory-based drives for their improved speed, weight, power consumption, and reliability; but they still want the capacity of hard drives. Server farms are also moving from hard drives to memory to reduce power consumption, but need the capacity to justify the move. Microprocessors are gaining more levels of cache, giving better tradeoffs between memory size and memory access time, which in turn improves microprocessor performance. The consumer and business worlds have insatiable appetites for memory with higher density, higher speed, and lower power.
To meet these extreme performance aims, memory chipmakers must adopt bleeding-edge process nodes. Smaller transistor sizes directly lead to increased bit density. In fact, aggressive memory design is a key driver of the newest process node technologies.
These extreme performance aims also mean engineers must design their memory systems with extremely tight margins, on the edge between performance and yield. For reasonable system-level yields, a memory bitcell can only fail extremely rarely – perhaps once in a million to once in a billion operations.
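In statistical terms, a failure probability between one in a million and one in a billion corresponds to roughly a 4.75-sigma to 6-sigma design target. The short sketch below, a back-of-the-envelope illustration assuming a standard normal distribution, makes that conversion and shows why brute-force sampling struggles at these levels.

```python
from scipy.stats import norm

# Convert a target bitcell failure probability into the equivalent
# one-sided sigma level (standard normal assumption), and estimate
# how many naive Monte Carlo samples are needed just to observe
# ~100 failures at that rate.
for p_fail in (1e-6, 1e-9):
    sigma = norm.isf(p_fail)    # inverse survival function of N(0, 1)
    n_mc = 100 / p_fail         # samples for ~100 expected failures
    print(f"p_fail = {p_fail:.0e}: {sigma:.2f} sigma, ~{n_mc:.0e} MC samples")
```

At one failure per billion, a naive Monte Carlo run needs on the order of 10^11 samples to observe a statistically meaningful number of failures, which is why the ability to analyze billions of samples in minutes matters so much for high-sigma memory verification.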