When working to improve the power or speed of a memory, it is important to consider its access profile. If the target clock frequency is not high, several implementation options can reduce the memory's power consumption. For example, if reads and writes are localized in time to a certain region of the memory, address-based leakage reduction is particularly effective. By source biasing the core array of a large memory to just above the bitcell's retention voltage, memory leakage can be reduced dramatically.
To avoid negatively affecting performance and area in smaller memories, the bitcells are source biased only when the memory is not being accessed. A large memory, however, can be source biased effectively without affecting performance even while it is being accessed. For example, with an 8-Mb SRAM divided into four 2-Mb regions, three of the four regions can be placed into a low-leakage state while the fourth is being accessed. Efficiently implemented, source biasing can reduce memory leakage by up to 60%, which translates to 45% lower leakage even when the memory is being functionally accessed. To keep the memory interface uncomplicated, memory designers should keep the timing interface transparent to the SoC designer.
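The arithmetic behind the 60%-to-45% figures above can be sketched as follows. This is an illustrative model, not a vendor formula: it assumes the one actively accessed region leaks at its full rate while each of the other regions leaks at the reduced, source-biased rate.

```python
def effective_leakage_reduction(num_regions: int, biased_reduction: float) -> float:
    """Fraction of total array leakage saved while one region is actively
    accessed (unbiased) and the remaining regions sit source biased.
    Assumes all regions are equal in size and leak equally when unbiased."""
    biased_regions = num_regions - 1
    total_saved = biased_regions * biased_reduction  # savings across biased regions
    return total_saved / num_regions                 # averaged over the whole array

# 8-Mb SRAM split into four 2-Mb regions, 60% leakage cut per biased region:
print(effective_leakage_reduction(4, 0.60))  # ≈ 0.45, i.e. 45% lower leakage
```

With three of four regions biased at a 60% reduction each, the array-wide savings during an access works out to 0.75 × 60% = 45%, matching the figures quoted above.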
In addition to providing power savings during operation, large memory designs should implement power gating to reduce energy dissipation. A very useful non-functional power mode shuts down power to the periphery while keeping the bitcells powered just above the retention voltage. Additionally, a dual power rail with an integrated level shifter enables even lower power dissipation. This, coupled with dynamic voltage and frequency scaling (DVFS), is a well-proven technique for reducing overall energy use: the memory can drop the voltage on its periphery circuitry by 20% while maintaining the array voltage above Vmin, yielding much lower dynamic and leakage power.
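A rough sense of the dynamic-power benefit of that 20% periphery voltage drop can be sketched from the standard CMOS relation P ≈ C·V²·f. The model below is a simplification (it ignores the exponential leakage dependence on voltage and any frequency change under DVFS), offered only to show the scale of the savings.

```python
def dynamic_power_ratio(v_scale: float, f_scale: float = 1.0) -> float:
    """Dynamic power scales as V^2 * f (from P ≈ C·V²·f).
    Returns the scaled power as a fraction of the original."""
    return v_scale ** 2 * f_scale

# Dropping periphery voltage by 20% at the same frequency:
print(dynamic_power_ratio(0.8))  # ≈ 0.64, i.e. roughly 36% less dynamic power
```

If DVFS also lowers the clock frequency during the low-voltage mode, the `f_scale` factor compounds the savings further.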
Although some of the power reduction techniques above may result in slower access times and cycle times, there are memory design techniques that can enable higher system performance despite having slower access times.
Again, it is important to consider the way the memory is accessed. Many applications access large memories in a very regular pattern. In such cases, the data can be written in a layout that enables faster reads. Consider an 8-Mb SRAM divided into four virtual banks (see figure 3). The four virtual banks can be accessed in rapid succession, producing a four-word burst mode, and system designers can benefit from this improved memory performance. If the effective clock frequency of a single virtual bank is 500 MHz, a four-virtual-bank implementation can effectively support a 2-GHz clock frequency, as long as consecutive accesses go to different virtual banks.
Figure 3: With an 8-Mb SRAM divided into four virtual banks, each bank can be accessed in rapid succession resulting in a four-word burst mode.
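The interleaving idea behind figure 3 can be sketched as below. The bank-select scheme (low-order address bits) and the helper names are illustrative assumptions, not the article's actual implementation; the point is that the burst rate holds only when consecutive accesses land in different virtual banks.

```python
NUM_BANKS = 4       # four virtual banks, per the 8-Mb example above
BANK_FREQ_MHZ = 500 # effective clock frequency of one virtual bank

def bank_of(addr: int) -> int:
    """Assume low-order address bits select the virtual bank, so
    sequential word addresses fall into different banks."""
    return addr % NUM_BANKS

def effective_freq_mhz(addresses) -> int:
    """The burst achieves the full interleaved rate only if no two
    consecutive accesses hit the same bank; otherwise they serialize
    to a single bank's rate (a deliberately simplified model)."""
    banks = [bank_of(a) for a in addresses]
    conflict = any(b1 == b2 for b1, b2 in zip(banks, banks[1:]))
    return BANK_FREQ_MHZ if conflict else BANK_FREQ_MHZ * NUM_BANKS

print(effective_freq_mhz([0, 1, 2, 3]))   # 2000 → a 2-GHz four-word burst
print(effective_freq_mhz([0, 4, 8, 12]))  # 500 → every access hits bank 0
```

Writing the data so that logically sequential words map to different banks is what makes the regular access pattern pay off.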
To ensure optimal manufacturing yield, it is very important to use good test and repair techniques when building memories, and even more so for large-capacity memories. Built-in self-test (BIST) and repair should be considered mandatory. The choice of column redundancy versus row redundancy, and the amount of each, should be based on analysis of key data inputs: defect densities and failure-mechanism Pareto charts must be weighed when choosing redundancy options to achieve the optimal net good die per wafer.
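To make the row-versus-column trade-off concrete, here is a hedged sketch of a redundancy-allocation pass over a BIST fail bitmap. Production repair analysis is considerably more sophisticated; this greedy model (all names and parameters are illustrative) simply shows why a clustered column defect consumes a column spare while a scattered single-bit fail can go to either.

```python
from collections import Counter

def allocate_repairs(fails, spare_rows, spare_cols):
    """fails: set of (row, col) failing bitcells from BIST.
    Greedy sketch: repeatedly spend the spare (row or column) that
    covers the most remaining fails. Returns (repairs, fully_repaired)."""
    remaining = set(fails)
    repairs = []
    while remaining and (spare_rows or spare_cols):
        row_hits = Counter(r for r, _ in remaining)
        col_hits = Counter(c for _, c in remaining)
        best_row = row_hits.most_common(1)[0] if spare_rows else (None, 0)
        best_col = col_hits.most_common(1)[0] if spare_cols else (None, 0)
        if best_row[1] >= best_col[1]:
            repairs.append(("row", best_row[0]))
            remaining = {f for f in remaining if f[0] != best_row[0]}
            spare_rows -= 1
        else:
            repairs.append(("col", best_col[0]))
            remaining = {f for f in remaining if f[1] != best_col[0]}
            spare_cols -= 1
    return repairs, not remaining

# A clustered defect down column 5 plus a lone single-bit fail:
fails = {(0, 5), (1, 5), (2, 5), (7, 3)}
print(allocate_repairs(fails, spare_rows=1, spare_cols=1))
# The column spare absorbs the cluster; the row spare catches the stray bit.
```

The defect-density and Pareto data mentioned above determine how many of each spare type to budget so that bitmaps like this one remain repairable.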
With the increasing amount of memory and its impact on power, performance, and area in advanced-node SoCs, it is important to select the right approach for integrating large-capacity memory. SRAM can provide an important alternative to most eDRAM-based designs. Implemented using proper techniques, large-capacity SRAM can deliver compact, high-performance, low-power SoCs designed to succeed in today's competitive marketplace.
About the author
Prasad Saggurti is the product marketing manager for Embedded Memory IP at Synopsys. Prior to Synopsys, Prasad held senior engineering and marketing roles at MoSys, ARM, National Semiconductor and Sun Microsystems. Prasad has an MSEE from the University of Wisconsin-Madison and an MBA from the University of California-Berkeley.