Part 1 of this series, FMEA made automated and easy, demonstrated that physical prototyping alone is insufficient for determining all error conditions in the system.
Ensuring that today’s complex automotive systems meet specific quality, robustness, lifespan, and safety goals requires validating that the system implementation matches the specification. An important step in verification is gaining an accurate understanding of the system’s worst-case behavior.
Assumptions are frequently made about the worst case instead of precise calculations. Without advanced simulation techniques, however, it is typically not possible to determine the worst case—particularly during the specification phase. Part 2 of this series describes these verification techniques using the Worst-Case Analysis (WCA) tool introduced in the 2009.12 release of the Synopsys Saber product line.
The driving question—"What is the worst-case behavior of our implementation?"—is a difficult one to answer. It tends to gnaw at developers, and too often it is answered incorrectly, or not at all.
Here’s the problem: manufacturing tolerances and other external influences (e.g., temperature) subject an electrical circuit to fluctuations in behavior. A primary aim of worst-case analysis is to determine the maximum effect of these fluctuations. Before this can be determined, however, the developer must complete several steps. The first is to identify which conditions affect system behavior. This step is often skipped—with assumptions taking the place of calculations—leading to incorrect results. A common false assumption is that the peripheral areas of the tolerance range can be excluded.
In setting up the variation analysis, the developer defines design paths (nets) with nodes that represent the parameter configurations to be simulated. The question is whether the correct nets have been selected so that the worst case will be identified. This net selection requires a significant time investment, particularly with larger systems. Script-based approaches are faster, but produce unclear results because many loops must be coupled. A variation analysis is also likely to focus on the most primitive variant as the basis for finding the worst case—which can likewise lead to the wrong conclusion.
Corner case analysis
Corner-case analysis (also known as boundary-condition or extreme-value analysis) has been in practical use for several decades and evaluates the extreme edges of the tolerance ranges affecting design parameters. Its simplicity, and the fact that it requires only two simulation runs, make it a popular approach. However, the assumption that the worst case always appears at the tolerance extremes is dangerous when safety must be assured in a complex design, because the actual worst case may lie somewhere between the edge values. For safety-critical systems, the time savings may be painfully undermined by an erroneous calculation of the worst case.
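The pitfall can be sketched in a few lines. The response function below is a hypothetical, resonance-like stand-in for a simulated measurement (it is not a Saber model); its peak deliberately sits inside the tolerance box, so checking only the extreme corners misses the true worst case that a finer search finds.

```python
import itertools

# Toy "measurement": a resonance-like response whose peak lies inside the
# tolerance box, not at its corners (hypothetical function for illustration).
def response(r, c):
    return 1.0 / (0.05 + (r - 1.0) ** 2 + (c - 1.0) ** 2)

# Tolerance ranges: nominal 1.0 with +/-10% on both parameters.
r_range = (0.9, 1.1)
c_range = (0.9, 1.1)

# Corner-case analysis: evaluate only the 2^n extreme combinations.
corner_worst = max(response(r, c)
                   for r, c in itertools.product(r_range, c_range))

# A fine grid sweep (standing in for a full search) finds the interior peak.
steps = [0.9 + i / 100 for i in range(21)]
grid_worst = max(response(r, c) for r in steps for c in steps)

print(corner_worst)  # underestimates: all four corners sit off the peak
print(grid_worst)    # true worst case near the nominal point (r = c = 1.0)
```

Any non-monotonic dependence of the measured quantity on its parameters (resonance, saturation, thresholds) can produce this situation.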
Root sum square analysis
The root-sum-square (RSS) method transforms the statistically distributed tolerances of the defined design parameters into individual Gaussian (normal) distribution curves that represent the influence of the tolerances to be evaluated. The worst case is identified at three standard deviations (3σ). Linearity between measured variables and parameters is a fundamental assumption of the RSS method, but it also contributes to its limitations; for example, saturation effects cannot be modeled.
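A minimal numeric sketch of the RSS calculation, using a resistive divider as the hypothetical measurement (the circuit, nominal values, and 1σ tolerances are all assumptions for illustration): each parameter's sensitivity is estimated by finite difference, scaled by its 1σ tolerance, combined as the square root of the sum of squares, and multiplied by three for the 3σ worst case.

```python
import math

# Hypothetical measurement: output of a resistive divider (locally linear,
# so the RSS assumption holds reasonably well here).
def vout(r1, r2, vin=10.0):
    return vin * r2 / (r1 + r2)

nominal = {"r1": 1000.0, "r2": 2000.0}
sigma = {"r1": 10.0, "r2": 20.0}   # assumed 1-sigma tolerances in ohms

# Sensitivities by finite difference (+1 ohm) around the nominal point,
# each scaled to a 1-sigma output deviation.
base = vout(**nominal)
contrib = []
for name in nominal:
    p = dict(nominal)
    p[name] += 1.0
    contrib.append((vout(**p) - base) * sigma[name])

# RSS combines the individual deviations; 3x gives the 3-sigma "worst case"
# under the linearity and normality assumptions.
rss_3sigma = 3.0 * math.sqrt(sum(c * c for c in contrib))
print(base, rss_3sigma)
```

Note that nothing in this calculation can capture a nonlinearity such as saturation: the finite-difference step sees only the local slope.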
Monte Carlo analysis
Monte Carlo analysis, perhaps the best-known method for evaluating design tolerances and worst-case conditions, is frequently treated as a universal remedy. Monte Carlo is a purely statistical method—a function of the specified parameter tolerances and their associated distributions (e.g., normal or uniform)—in which the tolerance values are varied by a random algorithm over a number of simulation runs.
An advantage of Monte Carlo analysis over the previously noted procedures is that parameter values can lie anywhere within the parameter space to be evaluated, removing restrictions on the resulting calculations. For validating statistical behavior, it is probably the most important method commonly available to the developer. It is often impractical for determining worst-case behavior, however, because of the very large number of simulation runs required; with complex designs, the computation can last for weeks.
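The structure of a Monte Carlo run can be sketched as follows, again using the hypothetical resistive divider in place of a real simulation (the distributions and tolerance values are assumptions): every run draws each toleranced parameter from its distribution and the extremes observed across all runs serve as the worst-case estimate—which remains a statistical estimate, not a guaranteed bound.

```python
import random

# Hypothetical divider measurement standing in for one simulation run.
def vout(r1, r2, vin=10.0):
    return vin * r2 / (r1 + r2)

random.seed(42)                       # fixed seed for reproducibility
runs = 5000
worst_low = worst_high = vout(1000.0, 2000.0)

# Each run draws every toleranced parameter from its assumed normal
# distribution (1% of nominal as 1 sigma) and records the extremes seen.
for _ in range(runs):
    r1 = random.gauss(1000.0, 10.0)
    r2 = random.gauss(2000.0, 20.0)
    v = vout(r1, r2)
    worst_low = min(worst_low, v)
    worst_high = max(worst_high, v)

print(worst_low, worst_high)  # estimate only; may still miss the true extremes
```

The weakness is visible in the loop itself: the extremes sharpen only slowly as `runs` grows, and each iteration is a full simulation—hence the week-long computations on complex designs.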
Each of the above methods has the advantage of being simple to implement, and some may reach conclusions close to the worst-case condition. All share the disadvantage that they do not directly seek the worst case and may miss it entirely. The new Worst-Case Analysis tool in Saber addresses these shortcomings.
After two years of research and development, along with collaboration with customers, the Saber WCA tool was made available. It finds worst-case behavior quickly and accurately. The significant difference between the WCA tool and the aforementioned methods is its use of search algorithms, which seek out the worst case automatically. An overview of the steps in this process is presented in Fig. 1.
Fig 1. Worst-Case Analysis tool workflow
The developer specifies the design under test (DUT), including the (tolerance-afflicted) parameter definitions and the portions of the design to be evaluated. From this information, the tool determines the worst-case behavior and the individual parameter values that produce it.
Here is a simplified example of the algorithm’s functional steps for searching out and determining the worst-case behavior:
1. Start the simulation with an initial parameter configuration (e.g., the nominal case)
2. Evaluate the simulation results and pass them to the search algorithm
3. Determine a new parameter configuration for the next simulation
4. Repeat steps 2 and 3 until the worst case is determined
5. Report the worst-case search results back to the user
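The loop above can be sketched in code. Saber's actual algorithm library is not public, so a simple bounded hill-climb stands in for the search step, and the same hypothetical resonance-like response serves as the "simulation"; all names and values here are illustrative assumptions.

```python
# Hypothetical "simulation" to be maximized (largest value = worst case).
def simulate(params):
    r, c = params["r"], params["c"]
    return 1.0 / (0.05 + (r - 1.0) ** 2 + (c - 1.0) ** 2)

bounds = {"r": (0.9, 1.1), "c": (0.9, 1.1)}
params = {"r": 0.95, "c": 1.05}        # initial (off-nominal) configuration
worst = simulate(params)               # evaluate and feed the search
step = 0.05
while step > 1e-4:
    improved = False
    for name in params:                # propose neighboring configurations
        for delta in (-step, +step):
            trial = dict(params)
            lo, hi = bounds[name]
            trial[name] = min(hi, max(lo, trial[name] + delta))
            value = simulate(trial)
            if value > worst:          # keep the configuration if it is worse-case
                params, worst = trial, value
                improved = True
    if not improved:
        step /= 2                      # refine the search locally
print(params, worst)                   # report the worst-case configuration
```

A local search like this can stall at a local extremum, which is presumably why the tool combines global and local algorithms, as described next.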
To optimize the worst-case search process, Saber provides a library of global and local search algorithms along with a flexible method of calibrating and combining them. To ensure the tool delivers results as quickly as possible, several search algorithms are concatenated in an early, automated optimization step. The simple example on the next page illustrates the function of this new WCA solution.