Increasing data throughput requirements continue to boost demand for more memory in highly integrated products. In applications ranging from DVD players to cellular phones and personal ID cards, designers are required to integrate greater numbers of larger embedded memory arrays to support diverse data and code storage needs.
Driven by these applications, embedded memory will account for about 70 percent of SoC content by 2005, according to Dataquest. As embedded memory's role expands in highly competitive markets, however, designers face growing pressure to achieve working first silicon more quickly and with less effort. Caught between growing device complexity and continued price pressure, leading semiconductor companies are turning to enhanced verification methods to avoid costly respins and speed delivery of SoCs for memory-intensive applications.
In the past, memory designers could safely rely on the assumption that individual memory arrays faced similar operating conditions, permitting them to apply verification results from simulation of only one instance to all other instances. As memory usage absorbs an increased share of SoC area, however, individual memory blocks are becoming more diverse in size, organization and performance.
Advanced SoCs can comprise dozens, even hundreds, of memory arrays, each occupying a specific location and orientation with specific routing and power connections that place it in a unique dynamic operating environment. As a result, each instance in such a design needs to be characterized separately and accurately across a range of process, voltage, and temperature conditions to ensure correct performance.
At the same time, characterization and analysis have become decidedly more involved as the industry moves to nanometer technologies at 130 nm and below. At nanometer geometries, effects like capacitive coupling of densely routed interconnect dramatically impact signal timing. As a result, semiconductor manufacturers find that designs that seem to pass sign-off using traditional analysis tools fail in silicon, dictating a need for more detailed circuit-level analysis to uncover nanometer timing problems.
However, with today's large, complex design blocks, traditional circuit simulation tools have reached the limits of their speed and capacity, forcing designers to extrapolate overall design performance from a limited analysis of individual subnets or critical paths. In nanometer designs, cross-coupling interactions among different nets dramatically impact timing, and critical-path analysis methods prove inadequate in practice.
The result is chip timing problems that reduce operating performance or even cause outright failure of the design. To compensate, engineers incorporate larger timing margins to try to guarantee functionality, resulting in expensive over-design and delayed production and profit.
To address these growing verification challenges, leading design organizations are turning to more advanced methods, including hierarchical verification, analog behavioral modeling and mixed-level co-simulation. Hierarchical simulation methods exploit the regular structure of memory arrays, verifying repeated cells just once while still yielding instance-specific results, which provides significantly faster run times with no loss in accuracy.
In a recent embedded memory design, verification runs that required 3-4 hours to complete for a flat representation of a key block needed only 40 minutes to complete using a hierarchical representation of the same block. With this rapid turnaround time, engineers were able to run simulations and edit their design several times within a single day rather than deal with more protracted, day-long edit cycles. Similarly, post-layout verification runs dropped from days to just hours, permitting practical analysis of post-layout crosstalk, IR voltage drop and ground-bounce effects despite the large amounts of parasitic data associated with these designs.
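The core idea behind the hierarchical approach can be illustrated with a toy Python sketch. This is not how any commercial simulator is implemented; the `simulate_cell` function, its delay formula and the instance names below are all invented stand-ins for an expensive transistor-level run. The point is simply that identical cells in identical operating environments need to be evaluated only once, while each instance still receives its own result.

```python
def simulate_cell(cell_type, env):
    """Hypothetical stand-in for an expensive transistor-level run.
    The delay formula is invented purely for illustration."""
    return {"delay_ps": 100.0 * env["load_ff"] / env["vdd"]}

def hierarchical_verify(instances):
    """Simulate each distinct (cell type, operating environment) pair once,
    then fan the cached result out to every instance that shares it."""
    cache = {}
    results = {}
    for name, cell_type, env in instances:
        key = (cell_type, tuple(sorted(env.items())))
        if key not in cache:
            cache[key] = simulate_cell(cell_type, env)  # run only once per key
        results[name] = cache[key]  # instance-specific lookup
    return results, len(cache)

instances = [
    ("bitcell_0", "sram6t", {"vdd": 1.2, "load_ff": 2.0}),
    ("bitcell_1", "sram6t", {"vdd": 1.2, "load_ff": 2.0}),  # same environment
    ("bitcell_2", "sram6t", {"vdd": 1.2, "load_ff": 3.5}),  # unique environment
]
results, runs = hierarchical_verify(instances)
print(runs)  # → 2  (two simulator runs cover three instances)
```

In a real array the ratio is far more favorable: millions of bit cells collapse into a handful of distinct cell-plus-environment cases, which is where the observed run-time reductions come from.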
Increasing integration of digital and analog/mixed-signal circuitry requires even more sophisticated methods for analyzing performance of the complete design. Using analog behavioral models written in Verilog-A, designers can create test devices or checkers that monitor signal conditions such as frequency, timing and voltage level to ensure they stay within desired operating conditions.
If incorrect circuit behavior is detected, the simulation can be stopped and the event logged for later debug analysis. Halting a run as soon as it produces erroneous results saves valuable verification resources. By combining analog behavioral models with detailed circuit-level analysis, engineers can more easily identify potential design problems that would be difficult or impossible to detect with conventional verification methods.
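The checker concept can be sketched in a few lines of Python. In practice such a monitor would be written in Verilog-A and attached to circuit nodes inside the simulator; the class name, the voltage window and the waveform below are hypothetical, chosen only to show the monitor-log-halt pattern.

```python
class LevelChecker:
    """Toy analogue of a Verilog-A level checker: watch a node voltage
    and flag the first sample outside the allowed operating window."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.violation = None

    def sample(self, t, v):
        if self.violation is None and not (self.lo <= v <= self.hi):
            self.violation = (t, v)  # log the event for later debug
        return self.violation is None  # False => stop the simulation

def run(waveform, checker):
    """Feed (time, voltage) samples to the checker, halting on violation."""
    for t, v in waveform:
        if not checker.sample(t, v):
            break
    return checker.violation

wave = [(0, 1.20), (1, 1.18), (2, 0.85), (3, 1.19)]  # supply dip at t=2
chk = LevelChecker(lo=1.08, hi=1.32)  # hypothetical ±10% window around 1.2 V
print(run(wave, chk))  # → (2, 0.85)
```

The same pattern extends to frequency and timing checks: the monitor observes a signal continuously, and the first out-of-range event both stops the run and pinpoints where debugging should begin.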
Digital co-simulation useful
Similarly, digital co-simulation methods have become more important as designers combine memory and digital logic in integrated designs. With co-simulation, a digital simulator analyzes the digital portion of a design while a circuit-level simulator provides more detailed analysis of selected portions. As a result, designers can simulate larger circuits more quickly than is possible with pure circuit-level simulation methods. Co-simulation does trade overall accuracy for speed, limiting circuit-level analysis to specific blocks of interest while using more abstract digital representations to speed simulation of the others. Nevertheless, this approach provides an effective alternative for very large designs that are dominated by digital logic but still require circuit-level accuracy for some blocks.
In a typical co-simulation flow for an embedded memory design, engineers can selectively simulate the decoder, memory core or control logic at the transistor level, while simulating the remainder of the design as digital blocks in Verilog. In a recent SRAM design, for example, designers simulated the decoder at the circuit level using Nassda's HSIM circuit simulator, while simulating the memory core and control logic in a Verilog simulator. In this case, designers simulated several thousand read and write operations on the complete circuit of about 1.7 million MOSFETs.
When this SRAM design was simulated entirely at the circuit level with HSIM, the run took 24.6 hours. In a co-simulation run, the design team used circuit-level HSIM simulation for only the decoder and simulated the remainder of the circuit in Verilog; the co-simulation of the complete design took 4.76 hours.
In providing results roughly 5x faster than pure circuit-level simulation, co-simulation helps designers focus more quickly on potential problems in complex designs. Although co-simulation cannot match the overall accuracy of pure circuit-level simulation, it does provide full accuracy for the particular blocks of interest, such as the address decoder in the SRAM example. Circuit-level effects such as IR voltage drop in the power network, to which nanometer designs are particularly sensitive, would still require full-chip post-layout analysis for final verification.
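The partitioning idea can be sketched as a toy lockstep loop in Python. Real co-simulation couples two engines through a shared event interface; here the two decoder functions, the delay formula and the addresses are invented stand-ins that only illustrate how one block runs in detailed "circuit" form while the rest of the design stays at the abstract digital level.

```python
def digital_decoder(addr):
    """Abstract RTL-style decoder: correct one-hot logic, no timing detail."""
    return 1 << addr

def circuit_decoder(addr):
    """Hypothetical stand-in for the transistor-level view: same logic,
    plus a per-output delay estimate the digital model cannot provide."""
    select = 1 << addr
    delay_ps = 80 + 5 * addr  # invented wire-length dependence
    return select, delay_ps

def cosimulate(addresses, detailed=True):
    """Lockstep loop: the 'digital simulator' drives stimulus; only the
    selected block runs in the 'circuit simulator' when detailed=True."""
    worst_delay = 0
    for addr in addresses:
        if detailed:
            select, delay = circuit_decoder(addr)  # circuit-level block
            worst_delay = max(worst_delay, delay)
        else:
            select = digital_decoder(addr)  # purely digital, no timing
        # ...the memory core and control logic would consume `select`
        # digitally here, as in the Verilog portion of the real flow
    return worst_delay

print(cosimulate(range(16)))  # → 155  (worst-case decoder delay observed)
```

The speed advantage follows directly from the split: only the one block of interest pays the cost of detailed evaluation, while everything else advances at digital-simulation speed.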
As designers combine memory, digital logic and analog circuitry in more complex devices, advanced verification methods play an increasingly vital role in achieving early silicon success. Already in use at leading semiconductor companies, methods like hierarchical verification, analog behavioral modeling and digital co-simulation are becoming more accessible throughout the engineering community thanks to the growing availability of more sophisticated verification tools. Using these more advanced verification tools, engineers can identify the impact of nanometer effects on timing and accurately characterize the performance of diverse embedded memory blocks in larger integrated devices.