It is more imperative than ever for automotive OEMs to ensure that their automotive systems and components operate safely and without errors. However, isolating error cases in today's complex automotive electronics systems is an increasingly difficult challenge.
Traditional test methods based primarily on physical prototypes are generally insufficient. New test methods based on Failure Mode and Effects Analysis (FMEA), on the other hand, provide effective options for isolating error cases.
This two-part series focuses on a completely automated test procedure that uses the Testify tool within the Synopsys Saber development environment to enable a simulation-based FMEA methodology.
By intentionally introducing errors and analyzing their effects, a developer can examine the operation of an automotive system for the potential loss or failure of various components. If there is a small number of errors, this can be accomplished using physical prototypes. As the number of tests increases, however, the task quickly becomes unfeasible. This is particularly true if the prototypes under test are destroyed, making comprehensive follow-on tests of an introduced error difficult, if not impossible.
Using a simulation-based strategy from the start of development provides significant advantages because the tests are completely reproducible, and the effects of an introduced error can be reconstructed. Furthermore, these tests can be automated, providing a time advantage throughout the testing process. These gains apply both to the introduction of errors and to the evaluation of their effects. This level of automation can be achieved by implementing an advanced multi-domain simulator, such as Saber, which can deliver a fully automated test run via its Testify failure modes analysis tool.
Figure 1 – Automatic test run
Figure 1 represents the macroscopic operational sequence of the entire process. The first challenge lies in modeling the system to be tested. The modeling decisions made here determine the quality of the results and the feasibility of the simulation. Apart from modeling the physical behavior of the system and components, it is important that models are characterized to support error introduction and measurement. Saber provides libraries of pre-built models, as well as templates and tools to simplify this effort.
Models with varying levels of detail can be characterized using the integrated modeling languages MAST and VHDL-AMS. After the system is modeled, test criteria are defined, including test cases and boundaries. (Details of this process are described under "Defining the test run," below.) Once the test criteria have been defined, the developer can run the system simulation and execute the defined test cases. The process of evaluating and plotting the data generated by the test runs can be completely automated.
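The automated evaluation step can be pictured as a small post-processing loop: apply a measurement to each test run's waveform and compare the result against defined limits. The sketch below is illustrative only; the measurement function and limit values are assumptions, not Testify's actual API.

```python
# Sketch of automated post-processing of test-run data (illustrative only).
def rise_time(t, v, lo_frac=0.1, hi_frac=0.9):
    """Time for v to rise from lo_frac to hi_frac of its final value."""
    v_final = v[-1]
    t_lo = next(ti for ti, vi in zip(t, v) if vi >= lo_frac * v_final)
    t_hi = next(ti for ti, vi in zip(t, v) if vi >= hi_frac * v_final)
    return t_hi - t_lo

def evaluate(runs, criterion, lower, upper):
    """Apply a measurement criterion to every run and flag limit violations."""
    report = []
    for name, (t, v) in runs.items():
        value = criterion(t, v)
        report.append((name, value, lower <= value <= upper))
    return report
```

Because the evaluation is just a function over simulation data, it runs identically for every injected error, which is what makes the fully automated report generation possible.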
Modeling and introducing errors
With error modeling included in the physical model characterization, failure cases can be examined for their effects. Because the effects of failures on the system depend on the application and the timing of the occurrence, it is important to automate error insertion. A primary goal here is to investigate not an individual error, but hundreds or thousands of errors. The task of characterizing each individual error and then adjusting the system model would be laborious and would not fit within a practical project schedule. Rather, the goal is to allow the defined errors to flow automatically into the simulation without having to manually change the system model for each individual error.
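The idea of letting defined errors flow into the simulation automatically can be sketched as a fault-injection loop: a fault list drives repeated simulator invocations, with no manual edits to the system model in between. Here `simulate` is a hypothetical stand-in for the actual simulator call, and the fault entries are illustrative.

```python
# Sketch of an automated fault-injection campaign (names are illustrative).
FAULTS = [
    {"component": "R2", "mode": "open"},
    {"component": "R2", "mode": "short"},
    {"component": "node_error", "mode": "open"},
]

def run_campaign(simulate, faults):
    """Run one nominal simulation plus one simulation per injected fault."""
    results = {"nominal": simulate(fault=None)}
    for f in faults:
        key = f"{f['component']}:{f['mode']}"
        results[key] = simulate(fault=f)
    return results
```

Scaling from one fault to thousands then only changes the length of the fault list, not the model or the harness.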
Figure 2 – Modeling and error introduction
Figure 2 shows the operation of a voltage divider, one component of an overall system that is subjected to a safety analysis. This test case examines the effects of a signal interruption on the voltage divider. The original circuit would require manual insertion of an interruption at the "Error" node in order to simulate the error. Instead of manually setting the net connection to open or closed over and over again, the simulator can be set up at the beginning of the test process to insert the error automatically. In this simple example, an ideal switch is placed on the schematic and is opened or closed, depending on the error selection.
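Numerically, the ideal switch can be approximated as a tiny series resistance when closed and a huge one when open. The following sketch shows the effect of the interruption on the divider output; the component values are illustrative, not taken from Figure 2.

```python
# Numerical sketch of the ideal-switch fault model (values are illustrative).
def divider_vout(vin, r1, r2, fault_open=False):
    """Divider output with an ideal series switch at the error node.

    Closed switch: ~0 ohm (nominal operation).
    Open switch: ~1e12 ohm (injected signal interruption)."""
    r_switch = 1e12 if fault_open else 1e-3
    return vin * r2 / (r1 + r_switch + r2)

v_ok = divider_vout(12.0, r1=1e3, r2=1e3)                       # ~6 V nominal
v_fault = divider_vout(12.0, r1=1e3, r2=1e3, fault_open=True)   # ~0 V, interrupted
```

Because the fault is just a parameter, the same model serves both the nominal run and every error run without schematic edits.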
Defining the test run
After defining which errors are to be simulated, the next step is to further define the test run (see Figure 3, below), using the following steps:
• Error selection
• Analysis selection (e.g. time or frequency response)
• Test criteria definition (e.g. rise time, signal delay)
• Value limit settings (e.g. upper and lower threshold values)
Figure 3 – Test definition
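The four definition steps above can be captured in a simple data structure that the automated run consumes. The field names below are illustrative assumptions, not Testify's actual schema.

```python
# Sketch of a test-run definition mirroring the four steps (names illustrative).
test_run = {
    "errors": ["R2:open", "R2:short"],      # error selection
    "analysis": "transient",                # analysis selection (time domain)
    "criteria": [
        {"name": "rise_time", "signal": "v_out",
         "lower": 0.0, "upper": 1e-3},      # test criterion + value limits
    ],
}

def check(value, criterion):
    """True if a measured value lies within the criterion's limits."""
    return criterion["lower"] <= value <= criterion["upper"]
```

Keeping the definition declarative means the same simulation harness can execute any combination of errors, analyses, and limits without code changes.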