Analyzing and ensuring model coverage
The model provides an opportunity early in the design process, prior to implementation, to perform the types of tests that might be done later with source-code-based testing. Engineers can fully stress the controller to verify its design integrity and to detect problems, such as dead sections of the design that would later appear as dead code.
Often, to confirm that a design works properly and is robust, engineers run exhaustive model simulations. However, those simulations are only as useful as the scenarios they are running.
Stress testing the model by running simulations using minimum and maximum numerical values helps ensure that overflow conditions will not occur. It is also important to ensure that the simulations exercise all parts of the design, and all modes and logic branches of the design behavior.
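As a concrete illustration (not drawn from any particular tool), consider a minimal C sketch of this kind of stress test; the controller_step function, its scaling constant, and the int16_t word size are all hypothetical. Widening the intermediate to 32 bits and saturating the result keeps the numerical extremes from wrapping silently:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical controller step: scales a sensor reading and adds a bias.
   Widening to 32 bits before multiplying keeps the intermediate from
   wrapping; the result is then saturated back to the int16_t range. */
static int16_t controller_step(int16_t sensor, int16_t bias)
{
    int32_t scaled = (int32_t)sensor * 3 + bias;

    if (scaled > INT16_MAX) { scaled = INT16_MAX; }
    if (scaled < INT16_MIN) { scaled = INT16_MIN; }
    return (int16_t)scaled;
}

int main(void)
{
    /* Stress the design at the numerical extremes, as the text suggests. */
    const int16_t extremes[] = { INT16_MIN, -1, 0, 1, INT16_MAX };
    const size_t n = sizeof extremes / sizeof extremes[0];

    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < n; j++) {
            printf("step(%6d, %6d) = %6d\n",
                   (int)extremes[i], (int)extremes[j],
                   (int)controller_step(extremes[i], extremes[j]));
        }
    }
    return 0;
}
```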
Model coverage analysis assesses the cumulative results of a test suite to determine which blocks or states were executed during a simulation, and which were not. Certain coverage analyses are well established in source code languages, such as C, C++, and Ada, but these types of analyses have not been available at the model level until recently.
Figure 2. Requirements-based test coverage analysis.
Modified Condition/Decision Coverage (MC/DC) is regarded by the FAA as the most stringent coverage level required for safety-critical systems. This coverage analysis, among others, is now available within Model-Based Design and is computed from simulation runs. When performed within the simulation tools, MC/DC analysis enables the automatic logging and reporting of coverage metrics for the model (Figure 2). As a result, the test engineer can assess the completeness of the test scenarios in terms of the design structure. The challenge then becomes defining sets of tests that will efficiently achieve complete coverage.
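To make the MC/DC criterion concrete, here is a minimal C sketch; the engage decision and its three conditions are hypothetical, chosen only to show that n+1 test vectors can cover n conditions:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical decision from a controller: engage the actuator only if the
   system is armed AND (pressure is high OR an override is active). */
static bool engage(bool armed, bool high_pressure, bool override)
{
    return armed && (high_pressure || override);
}

int main(void)
{
    /* A minimal MC/DC set: each condition independently flips the outcome
       while the other conditions are held fixed.
       armed  high_pressure  override  -> engage
       T      T              F         -> T   (baseline)
       F      T              F         -> F   (armed flips the outcome)
       T      F              F         -> F   (high_pressure flips it)
       T      F              T         -> T   (override flips it)
       Four vectors achieve MC/DC for three conditions. */
    struct { bool a, p, o; } tests[] = {
        { true,  true,  false },
        { false, true,  false },
        { true,  false, false },
        { true,  false, true  }
    };

    for (int i = 0; i < 4; i++) {
        printf("engage(%d,%d,%d) = %d\n",
               tests[i].a, tests[i].p, tests[i].o,
               engage(tests[i].a, tests[i].p, tests[i].o));
    }
    return 0;
}
```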
Automatic test generation
A new set of tools for Model-Based Design can automatically generate test patterns that satisfy specified coverage objectives, such as MC/DC, by mathematically analyzing the structure of the model and applying formal methods. This structural analysis also identifies any portion of the model that can never execute, which may indicate that something was missed during specification, implementation, or test creation; a small example of such dead logic appears below.
These test patterns can then be combined with other test scenarios derived from requirements, testbench data, Monte Carlo scenarios, and plant/environment models to exhaustively test the model through simulations, as well as the actual implementation later on.
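To illustrate the dead-logic detection mentioned above, here is a minimal C sketch in which an upstream clamp makes a downstream branch unreachable; the clamp_speed and select_mode functions and their thresholds are hypothetical:

```c
#include <stdio.h>

/* Hypothetical upstream saturation: speed is clamped to [0, 200]. */
static int clamp_speed(int speed)
{
    if (speed < 0)   { speed = 0; }
    if (speed > 200) { speed = 200; }
    return speed;
}

/* Hypothetical mode logic.  Because of the clamp, the first branch can
   never execute; structural analysis would flag it as unreachable. */
static const char *select_mode(int raw_speed)
{
    int speed = clamp_speed(raw_speed);

    if (speed > 250) {          /* dead: clamp guarantees speed <= 200 */
        return "LIMP_HOME";
    } else if (speed > 100) {
        return "HIGHWAY";
    } else {
        return "CITY";
    }
}

int main(void)
{
    /* Prints HIGHWAY; no input can ever reach LIMP_HOME. */
    printf("%s\n", select_mode(300));
    return 0;
}
```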
Code compliance checking
The Motor Industry Software Reliability Association (MISRA) has published "Guidelines for the Use of the C Language in Vehicle Based Software." This set of guidelines, informally known as MISRA-C, has been adopted by a growing number of automotive manufacturers and suppliers. A significant amount of checking for MISRA-C compliance can be done by examining what the automatic code generation tool systematically produces, rather than by checking all of the generated code, as one must when the code is written by hand.
Checking the code generation tool does not address every scenario, however, such as imported handwritten legacy code. Open model interfaces can be used to check for these scenarios automatically. With this approach, the checks are performed after model creation and prior to code generation to ensure that the imported or generated code passes them. Another option is to incorporate a MISRA-C code checker or static analysis tool into the code generation and build process.
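As a rough illustration of what such a checker reports, the following C sketch contrasts a non-compliant fragment with a compliant rewrite; the scale_raw functions are hypothetical, and the exact MISRA rule numbers are left to the checker:

```c
#include <stdint.h>
#include <stdio.h>

/* Non-compliant sketch: an implicit narrowing conversion and an unbraced
   if body, both of the kind MISRA-C guidelines flag. */
static uint8_t scale_raw(uint16_t raw)
{
    uint8_t out = raw / 4;     /* implicit narrowing from int to uint8_t */
    if (out > 200u)
        out = 200u;            /* body is not a compound statement */
    return out;
}

/* Compliant rewrite: the conversion is explicit and control flow is braced. */
static uint8_t scale_raw_misra(uint16_t raw)
{
    uint8_t out = (uint8_t)(raw / 4u);
    if (out > 200u)
    {
        out = 200u;
    }
    return out;
}

int main(void)
{
    printf("%u %u\n",
           (unsigned)scale_raw(900u),
           (unsigned)scale_raw_misra(900u));
    return 0;
}
```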
Detecting runtime errors
Run-time errors can be particularly difficult to detect at the model level or during simulation, and they can cause significant problems during software development and testing. Run-time errors are latent faults that often surface only under specific combinations of data values, which makes them expensive to find by dynamic testing. In fact, they are generally revealed by their consequences on functional behavior, including unexpected commands sent to actuators, math co-processor halts, and unexplained, hard-to-reproduce software failures. In these cases, lengthy debugging is necessary to trace the problem to its source.
Static analysis is one approach to addressing run-time errors. In recent years, static verification tools have been introduced that apply advanced static analysis techniques and reduce the number of "false positive" results that require manual inspection or testing. Such tools perform static and some dynamic analysis of C code, regardless of whether the code is handwritten or automatically generated.
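The following C sketch shows the kind of latent run-time errors these tools target: a division by zero and an out-of-bounds table index that surface only for specific input values, alongside a defensive version that a static verifier could prove safe for all inputs. The filter functions and the gain table are hypothetical:

```c
#include <stdio.h>

static int gain_table[4] = { 1, 2, 4, 8 };

/* Hypothetical filter with two latent run-time errors that only surface
   for particular input values, which is why dynamic testing may miss them. */
static int filter(int input, int divisor, int mode)
{
    int ratio = input / divisor;       /* fails only when divisor == 0    */
    return ratio * gain_table[mode];   /* fails only when mode < 0 or > 3 */
}

/* Defensive rewrite: a static verifier could prove this version free of
   division by zero and out-of-bounds access for all possible inputs. */
static int filter_safe(int input, int divisor, int mode)
{
    int ratio = (divisor != 0) ? (input / divisor) : 0;
    int idx   = (mode < 0) ? 0 : ((mode > 3) ? 3 : mode);
    return ratio * gain_table[idx];
}

int main(void)
{
    printf("%d\n", filter(100, 5, 2));       /* fine on nominal inputs  */
    printf("%d\n", filter_safe(100, 0, 7));  /* safe on the bad inputs  */
    return 0;
}
```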
The integration of these verification tools with tools for Model-Based Design provides significant improvements to the workflow. By connecting the analyzed code with the model from which it was automatically generated, the static verification tool can present its results in both the source code and the model. Being able to navigate from the code to the model, make a change, and then automatically regenerate and recheck the code provides a powerful way to analyze, debug, and modify algorithms from both high-level and detailed perspectives. It encourages a development process in which changes are made to the model rather than directly in the code, which contributes to the longevity and reusability of the models from project to project.
Best practices
The key philosophies described in this article represent three best practices for leveraging Model-Based Design for test and verification.
First, you should reuse models as a testbench for the implementation. From running the model simulations, to running the implementation software linked against models, to running on a host computer or on the target processor, to running the full embedded system on a testbench, you can accumulate knowledge, test data, and other information in a way that can be reused later in the development and testing process.
Second, you should test as early as possible. Test in simulation before testing in real time, test in real time on the bench before moving to the real world, and test the model before the code. Testing early is often easier because of the higher abstraction level, and the cost benefits of catching errors early are well documented.
Third, you should take advantage of all available techniques. Simulation and formal methods should leverage each other; model-based and code-based techniques for test and verification complement each other; and all of them are available to use with Model-Based Design from a range of tool vendors.
Many automotive original equipment manufacturers and suppliers are using Model-Based Design to generate executable specifications, simulate their performance, and automatically generate code. Many of these companies are beginning to take the next step by leveraging their existing models for test and verification purposes. This article surveys a wide range of current methods to illustrate practical approaches that can be used to improve the quality and reduce the cost of automotive embedded software.
- Automotive Engineering International, March 2005.
- B. Aldrich, "Using model coverage analysis to improve the controls development process," AIAA 2002.
- C. Hote, "Advanced Software Static Analysis Techniques that Provide New Opportunities for Reducing Debugging Costs and for Streamlining Functional Tests," prepublication.
- MISRA-C:2004, "Guidelines for the use of the C language in critical systems," ISBN 0-9524156-2-3, www.misra-c2.com.
About the Author:
Jim Tung is a MathWorks Fellow. He held the positions of vice president of marketing and vice president of business development before assuming his current role, which focuses on business and technology strategy and analysis. Jim received a bachelor's degree from Harvard University.