Memory plays an essential role in the functioning of embedded systems. Indeed, embedded memories in system-on-chip (SoC) devices can account for 50% or more of chip area. Implemented using aggressive design rules, embedded memories tend to be more prone to manufacturing defects and field reliability problems than any other core on the chip. To improve yield and reliability in embedded devices, manufacturers need solutions that simplify fault detection, process improvement, and repair at the manufacturing level and in the field while minimizing cost and impact on functionality.
New systematic defects often manifest as yield-limiting faults resulting from known factors such as reduced feature sizes. The use of high-density integration and packaging technologies such as 3D-IC, which require complex manufacturing processes and impose physical access limitations, further compounds the problem. These new and emerging challenges make it critical that embedded memory test and repair solutions keep pace with technology advances in order to consistently provide superior test quality and yield optimization. This article describes embedded memory test solutions that address today's design yield and reliability needs, including fault detection in very deep submicron technologies, repair at the manufacturing level, and diagnosis for process improvement and field repair.

Keeping up with complexity
Today’s demanding applications require SoCs that are bigger and faster, yet more area-, timing-, and power-sensitive, than ever before, resulting in a shift from the logic-dominant chips of the past to memory-dominant ones. Figure 1 shows embedded memory projections from Semico Research Corporation. In 2008, embedded memories accounted for more than half of the die area in a typical SoC, and the share of the die they occupy is predicted to keep growing, reaching up to 70% by 2017.
Figure 1: Embedded memories account for half the die area of a typical SoC today. Predictions are that this will increase to 70% of the die area by 2017.
Applications that require lots of memory are served by designs that embed large numbers of memory bits per chip, creating more powerful SoCs, but this has the associated problems of increased die size and poor yield. As design applications require more memory, it is essential to implement a comprehensive embedded memory test, repair, and diagnostic solution to help achieve high yield.
In addition to handling increasing numbers of on-chip memory instances, an embedded memory test solution must also accommodate aggressive design specifications for hierarchy, size, performance, area, and power consumption. Because embedded test logic often degrades performance, high-performance cores should be built with carefully planned memory BIST MUX logic and an integrated memory test bus to minimize these effects. The test bus handles the pipelining, latency, and setup of the memory signals for all the memory instances in the core, eliminating the need for a traditional BIST wrapper around each memory.
An embedded memory test solution will interact with this test bus and add the required logic to use the test bus interface for embedded memory test. Additionally, an effective embedded memory test solution will understand the functional pipelining with predetermined controllability and observability logic on the timing-sensitive datapaths, and it will have the flexibility to add the required number of pipeline stages on the memory BIST paths to meet the performance targets of the design. To further address design complexity, a test and repair solution that is integrated with embedded memories, with most of the timing-critical BIST wrapper logic hardened in the memories, makes it possible to achieve faster design closure while meeting the performance, power and area characteristics of the design.
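To make the pipeline-stage planning described above concrete, here is a minimal Python sketch under a hypothetical timing model: the combinational BIST path delay is assumed to split evenly across stages, and each inserted register is assumed to cost a fixed setup plus clock-to-q overhead. The function name and numbers are illustrative, not taken from any particular tool.

```python
import math

def required_pipeline_stages(path_delay_ns, clock_period_ns,
                             register_overhead_ns=0.1):
    """Estimate how many pipeline registers to insert on a memory BIST
    datapath so that no stage exceeds the target clock period.

    Hypothetical model: delay splits evenly across stages; each register
    adds a fixed setup + clock-to-q overhead.
    """
    if path_delay_ns <= clock_period_ns:
        return 0  # path already meets timing; no extra stages needed
    usable_ns = clock_period_ns - register_overhead_ns
    # number of segments the path must be cut into
    segments = math.ceil(path_delay_ns / usable_ns)
    return segments - 1  # one register per cut between segments

# A 2.5 ns BIST datapath targeting a 1.0 ns clock (1 GHz)
print(required_pipeline_stages(2.5, 1.0))  # -> 2
```

In practice the tool would derive the path delay from static timing analysis; the point here is only that the stage count is a simple function of path delay versus the clock period budget.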
An embedded memory test solution with a comprehensive set of test algorithms, optimized to provide out-of-the-box fault coverage for embedded memories at each advanced node, helps shorten yield ramp-up time while minimizing the required test time. Embedded test solutions developed for 90-nm technology nodes will not deliver the same level of test quality at 28 nm, because memory defects and failure mechanisms change as process technologies shrink. The following three-step flow can be used to develop test algorithms covering the wide range of faults associated with advanced technology nodes:
- Memory layout to electrical circuit extraction using memory scrambling information
- Electrical circuit to fault modeling extraction using SPICE simulations. A comprehensive set of faults can be injected on electrical circuits including the memory array, address decoder, sense amplifier, and write driver to validate the coverage of test algorithms.
- Fault modeling to test algorithm extraction using test algorithm generator tools that can generate minimal March test algorithms for detection of a given set of faults.
With the overall yield of an SoC being largely dependent on memory yield, it is important to implement techniques to improve it. Although the yield of native memory may be inadequate, embedded memory yield can be improved through the use of redundancy or spare elements.
In Figure 2, the purple lines represent memory yield as a function of the aggregate memory bit-count in an SoC. In this example, the yield for 24 Mbit of embedded memory is close to 20% for new processes, represented by the longer purple line, assuming chip dimensions of 12 mm x 12 mm with a memory defect density of 0.8 and a logic defect density of 0.4. Through redundancy, the yield can be improved, but determining the type and quantity of redundant elements needed for a given memory requires both memory design knowledge and failure history information for the process node under consideration. Yet simply providing the right redundant elements is not sufficient. Both the ability to detect and locate defects in the memory and an understanding of how to allocate the redundant elements require manufacturing knowledge of defect distributions.
In order to achieve the optimized yield solution represented by the orange line in Figure 2, test and repair algorithms that contain these capabilities must be utilized.
Figure 2: Using redundant elements to improve yield of embedded memory
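The relationship between memory area, defect density, and yield shown in Figure 2 can be illustrated with the classic Poisson yield model, Y = exp(-A·D). The numbers below are illustrative only and are not taken from the figure; the `repairable_fraction` parameter is a simplifying assumption that redundancy removes that share of memory defects.

```python
import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Classic Poisson yield model: Y = exp(-A * D)."""
    return math.exp(-area_cm2 * defects_per_cm2)

def repaired_yield(area_cm2, defects_per_cm2, repairable_fraction):
    """Redundancy repairs a fraction of memory defects, which lowers the
    effective defect density seen by the shipped product (a simplifying
    assumption, not a full repair model)."""
    return math.exp(-area_cm2 * defects_per_cm2 * (1 - repairable_fraction))

# Illustrative: memory occupying 0.72 cm^2 at D = 0.8 defects/cm^2
memory_yield = poisson_yield(0.72, 0.8)
# If spare rows/columns can repair 80% of memory defects:
print(repaired_yield(0.72, 0.8, 0.8) > memory_yield)  # -> True
```

The model makes the trade-off in the figure explicit: as aggregate bit-count (and therefore memory area) grows, unrepaired yield falls exponentially, while repairable defects pull the effective defect density back down.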
Conventional memory test algorithms detect memory failures to determine whether or not a chip is defective. For repairable memories, however, fault detection is not enough: repairable memories need fault localization to determine which cells must be replaced. The greater the fault localization coverage, the higher the repair efficiency and, therefore, the yield obtained. Localizing the exact coordinates of failing bits and classifying the associated faults is also very helpful for understanding the root cause of failures.
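As a sketch of how localized fail coordinates feed repair, the following Python applies the common "must-repair" rule (a row with more failing cells than there are spare columns can only be fixed by a spare row, and symmetrically for columns), then covers the remaining fails greedily. This is a simplified heuristic with hypothetical names, not a production redundancy allocator.

```python
from collections import Counter

def allocate_repairs(fail_bits, spare_rows, spare_cols):
    """Map a set of failing (row, col) coordinates to spare rows/columns.
    Returns (rows_used, cols_used, repairable) -- a greedy heuristic,
    not an optimal solver."""
    fails = set(fail_bits)
    rows_used, cols_used = set(), set()
    # Must-repair pass: too many fails in one line forces that spare type.
    for r, n in Counter(r for r, _ in fails).items():
        if n > spare_cols:
            rows_used.add(r)
    for c, n in Counter(c for _, c in fails).items():
        if n > spare_rows:
            cols_used.add(c)
    fails = {(r, c) for r, c in fails
             if r not in rows_used and c not in cols_used}
    # Greedy pass: cover remaining fails with whatever spares are left.
    for r, c in sorted(fails):
        if r in rows_used or c in cols_used:
            continue  # already covered by an earlier allocation
        if len(rows_used) < spare_rows:
            rows_used.add(r)
        elif len(cols_used) < spare_cols:
            cols_used.add(c)
    remaining = {(r, c) for r, c in fail_bits
                 if r not in rows_used and c not in cols_used}
    ok = (not remaining and len(rows_used) <= spare_rows
          and len(cols_used) <= spare_cols)
    return rows_used, cols_used, ok

# Three fails clustered in row 0 force the spare row; the lone fail at
# (3, 5) is then covered by the spare column.
print(allocate_repairs({(0, 0), (0, 1), (0, 2), (3, 5)}, 1, 1))
```

This is also why fault classification matters: knowing whether a failure is a single cell, a whole row, or a column defect determines which spare element is the right one to spend.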