Welcome to the era of "big data." Those of us in EDA (electronic design automation) are starting to experience this trend, as Gary Smith suggested at this year's DAC, in the form of giga-scale design challenges. These stem largely from increased design size and complexity, coupled with the design and fabrication uncertainties that process variation introduces at the small device geometries of advanced nodes.
Giga-scale design challenges come in various forms. For example, designs are becoming larger -- it's not uncommon today to see multibillion-transistor SoCs. The transistor count of logic processing chips roughly doubles every two years, and the amount of on-die embedded SRAM is growing rapidly.
As designs grow more complex, advanced SoCs integrate many different types of IP to deliver more functionality. Multimode, multisupply operation poses additional challenges for SoC design and verification. On top of that, designers face significant uncertainties induced by process variation and complications from double patterning.
Efficient, accurate simulation and verification tools that can handle giga-scale designs become essential as fabrication risk and cost rise at smaller nodes. EDA tools that can solve these giga-scale simulation and verification challenges fit squarely into the big-data category.
To meet the big-data criteria, a tool must have a powerful database that can store and process the huge volumes of data the design process generates, along with high-performance parallel or distributed computing capabilities to analyze that data and provide design improvement guidance for target applications.
For example, simulating a 100-million-element SRAM circuit for a typical power or timing analysis can easily consume tens or hundreds of gigabytes of memory and take days to finish on a server with many CPU cores. Running full statistical analysis of such a circuit -- hundreds or even thousands of Monte Carlo samples -- would go beyond the capabilities of most current tools. It is also impractical with the computing facilities available at most companies, since almost all EDA tools today run on local servers or private compute farms, and very few companies can afford supercomputer-class farms for this purpose.
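A rough back-of-envelope calculation shows the scale of the problem. Every figure below is an illustrative assumption for the sake of arithmetic, not a measurement from any particular tool or design:

```python
# Back-of-envelope cost of a full Monte Carlo analysis on a giga-scale
# circuit. All inputs are illustrative assumptions, not measured data.

hours_per_sim = 48       # assume one full-chip run takes ~2 days
mc_samples = 1000        # a modest Monte Carlo sample count
servers = 20             # assume a slice of a typical private farm
sims_per_server = 1      # one giga-scale run saturates a whole server

total_server_hours = hours_per_sim * mc_samples
wall_clock_days = total_server_hours / (servers * sims_per_server) / 24

print(f"Total server-hours: {total_server_hours:,}")       # 48,000
print(f"Wall-clock time:    {wall_clock_days:.0f} days")   # ~100 days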
Instead of running full statistical, full-chip analysis, designers compromise: they run selected process, voltage, and temperature (PVT) corner analyses on the larger circuit blocks and Monte Carlo analyses with reduced sample counts on the smaller blocks. The computing power limitation could easily be removed if designers adopted EDA tools running on a public cloud, where statistical circuit simulations can run in parallel on thousands of CPUs; however, "EDA in the cloud" has yet to gain momentum.
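Monte Carlo analysis is embarrassingly parallel, which is exactly what makes the cloud attractive for it. The sketch below shows the dispatch pattern in Python; the run_sample stub is purely hypothetical and stands in for a real SPICE job that would run on its own machine:

```python
# Minimal sketch of embarrassingly parallel Monte Carlo dispatch.
# In a real cloud deployment each worker would launch a SPICE job on a
# separate machine; here a stub stands in for the simulator so the
# script is self-contained. "run_sample" and its return value are
# illustrative, not any vendor's actual API.

import random
from multiprocessing import Pool

def run_sample(seed: int) -> float:
    """Stand-in for one SPICE run at one statistical sample point.

    A real worker would perturb device parameters per the foundry's
    variation model, run the simulator, and parse the measured
    delay or power from its output.
    """
    rng = random.Random(seed)
    nominal_delay_ps = 120.0
    return nominal_delay_ps + rng.gauss(0.0, 5.0)  # fake variation

if __name__ == "__main__":
    n_samples = 1000
    with Pool() as pool:  # on a cloud cluster: thousands of workers
        delays = pool.map(run_sample, range(n_samples))
    mean = sum(delays) / n_samples
    print(f"mean delay {mean:.1f} ps, worst sample {max(delays):.1f} ps")
```

Because the samples are statistically independent, throughput scales almost linearly with worker count, so a thousand cloud CPUs really can cut a thousand-sample run from weeks to hours.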
Shrinking process technologies, such as FinFET, further complicate giga-scale design challenges as 16/14 nm design starts accelerate. FinFET slows simulation performance and significantly increases netlist size, since the 3D transistor structure comes with many more parasitics. Furthermore, for advanced low-power mobile applications, the combination of low supply voltages and unavoidable process variation leads to smaller operational regions and smaller design margins. These applications therefore demand high accuracy from circuit simulation and verification tools, on top of the need to handle giga-scale capacity with high simulation performance.
The demand for simulation accuracy in giga-scale designs has pushed traditional FastSPICE circuit simulation tools into a corner, since FastSPICE sacrifices accuracy for simulation performance and capacity. Memory designers, for instance, may see underestimated timing or power during memory characterization or verification with FastSPICE, and consequently may not have full confidence in the FastSPICE-simulated results.
One viable solution is the emerging class of giga-scale SPICE simulators, which can efficiently simulate giga-scale element counts with SPICE-level accuracy. Giga-scale SPICE needs specially designed databases and matrix solvers to handle the big data of giga-scale simulation as efficiently as a FastSPICE simulator. Driven by this combined demand for high accuracy and large capacity, giga-scale designs are migrating from traditional FastSPICE applications to the new giga-scale SPICE simulators.
If a circuit designer is performing very large-scale memory characterization or full-chip memory verification and cares about accuracy, the best advice is to use giga-scale SPICE instead of FastSPICE. Yes, FastSPICE applications are shrinking, and the call for pure SPICE tools is growing, provided SPICE can keep improving performance and capacity while maintaining accuracy. Unfortunately, most SPICE tools available today were developed long before this trend and cannot address such big-data needs.
Another part of the solution comes from an adjacent tool category -- Design for Yield (DFY) -- which offers accurate and efficient statistical yield analysis and optimization to trade off yield against power, performance, and area (PPA). DFY tools can handle the big data generated by large statistical circuit simulations, from hundreds or thousands of samples for regular Monte Carlo up to giga-scale sampling for high-sigma design requirements. As a result, designers gain better control over process effects and the performance-yield tradeoff.
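A textbook tail-probability estimate shows why high-sigma design is out of reach for plain Monte Carlo, and hence why DFY tools need smarter sampling. For a one-sided n-sigma failure criterion under a standard normal variation model:

```latex
% One-sided failure probability at an n-sigma design target
p_{\mathrm{fail}} = \Phi(-n), \qquad \Phi(-6) \approx 9.87 \times 10^{-10}

% To observe on the order of 100 failures, plain Monte Carlo needs roughly
N \approx \frac{100}{p_{\mathrm{fail}}} \approx 10^{11} \text{ samples}
```

That is the "giga-scale sampling" problem in a nutshell: high-sigma techniques such as importance sampling aim to deliver equivalent statistical confidence from thousands of simulator runs rather than tens of billions.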
The era of big data and giga-scale design challenges need not be daunting. EDA vendors are beginning to provide suites of giga-scale simulation and DFY tools that help designers analyze the impact of giga-scale design challenges and optimize designs for better yield and performance. Memory designers are the first class of users to adopt these emerging tools, especially those designing memories with hundreds of millions of elements or more. These designers are facing the challenges, experiencing the big-data trend, and discovering the value of giga-scale SPICE simulators and DFY tools. Why don't you join them?
— Dr. Lianfeng Yang currently serves as the Vice President of Marketing at ProPlus Design Solutions Inc. Prior to co-founding ProPlus, he was a senior product engineer at Cadence Design Systems, leading the product engineering and technical support effort for the modeling product line in Asia. Dr. Yang has more than 40 publications and holds a PhD in electrical engineering from the University of Glasgow in the UK.