Previous methods to improve memory design yield
Despite the tight relation between memory performance and yield, memory designers have
traditionally had poor support for estimating yield during design
via simulation. Only the largest companies,
with high volumes and cheap access to silicon, could afford to do rapid
silicon respins against different memory designs; the payoff of better
memories outweighed the cost and time of respins. But even this tactic
is breaking down. Masks are more expensive, and more design iterations
are needed to handle extreme process variation.
For companies without the luxury of such “silicon-in-the-loop” memory
design, the simulation-based alternatives have been poor: often big
foundries weren’t shipping sufficiently-accurate statistical MOS models,
and simulating the 1 billion or so Monte Carlo samples would take an
infeasibly long time. One option was to run FF/SS corners and pretend memory
circuits’ variation somehow followed that of digital circuits’ global
process variation. Another option was to use known-to-be-inaccurate
statistical models of variation (like Pelgrom’s model) and to depart from
Monte Carlo sampling into harder-to-trust and less scalable analysis
approaches like importance sampling.
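
To see why the sample counts balloon like that, here is a minimal back-of-the-envelope sketch in Python. The 10 Mb array size and 99% chip-yield target are illustrative assumptions, not figures from this article: with millions of identical bitcells per chip, each cell’s failure probability must be so small that plain Monte Carlo needs on the order of a billion simulations just to expect a single failure.

```python
# Illustrative sketch (assumed numbers, not from the article) of why naive
# Monte Carlo is infeasible for bitcell yield estimation.
from scipy.stats import norm

n_cells = 10e6            # assumption: 10 Mb array of identical bitcells
chip_yield_target = 0.99  # assumption: 99% of chips must have no failing cell

# Per-cell failure probability that keeps (1 - p_fail)^n_cells >= target.
p_fail_max = 1.0 - chip_yield_target ** (1.0 / n_cells)

# Equivalent one-sided "sigma" specification for the bitcell.
sigma_level = norm.isf(p_fail_max)

# Plain Monte Carlo needs roughly 1/p_fail samples to expect even one
# failure, and 10-100x more to estimate p_fail with useful confidence.
samples_for_one_failure = 1.0 / p_fail_max

print(f"max per-cell failure prob : {p_fail_max:.2e}")   # ~1e-9
print(f"equivalent sigma level    : {sigma_level:.2f}")  # ~6 sigma
print(f"MC runs to see ~1 failure : {samples_for_one_failure:.2e}")
```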
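
For a flavor of the importance-sampling alternative mentioned above, here is a hedged toy sketch. The one-dimensional Gaussian “bitcell” and the 5.5-sigma failure threshold are made-up stand-ins, not a real bitcell model: samples are drawn from a distribution shifted toward the failure region and reweighted by the likelihood ratio, recovering a rare failure probability that plain Monte Carlo at the same sample count would almost never observe.

```python
# Toy importance-sampling sketch (assumed model, not the article's method).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def fails(vth_shift):
    """Assumed failure criterion: the cell fails past +5.5 sigma."""
    return vth_shift > 5.5

n = 100_000
shift = 5.5                                # center proposal near the failure boundary
x = rng.normal(loc=shift, size=n)          # proposal distribution: N(shift, 1)
weights = norm.pdf(x) / norm.pdf(x, loc=shift)  # likelihood ratio vs. N(0, 1)
p_fail_est = np.mean(fails(x) * weights)

print(f"importance-sampled failure prob: {p_fail_est:.2e}")
# Exact tail probability is norm.sf(5.5) ~= 1.9e-8; plain MC with only
# 1e5 samples would almost surely see zero failures.
```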
Doing memory design was
like driving in a Canadian blizzard, where 10 feet ahead is a wall of
snow that you can’t see past. If you’re driving on a smooth highway (lower
process variation) and you’re driving slowly (compromised performance),
you can probably get away with it. But if you’re driving on a rocky
mountain road (higher process variation) and you want to stay fast (not
compromise performance), then you will need better visibility.