The use of executable code generated from a Unified Modeling Language (UML) design model enables early design verification and serves as an effective technology to shorten development time and reduce project risk for embedded systems.
Why verify embedded designs in the early stages of development? Why not follow the traditional waterfall approach? In that approach, analysis is followed by design, then implementation and, finally, test. It is now commonly accepted that this traditional approach is not the most efficient pattern for software development, and even less so for embedded systems.
Software is hard to build; embedded software is harder. And real-time, deterministic embedded-control systems are extremely complex. They often include time-based functionality and may have other mathematical or algorithmic aspects that are difficult to statically analyze and design.
Furthermore, it is difficult to clearly describe the desired behavior of such systems in typical specifications. This makes the starting point for the software definition process complex, as well as prone to being incomplete and ambiguous. The use of a standard, unambiguous visual language such as the UML to construct embedded-systems specifications is, however, a topic for another discussion.
Building the wrong system (meaning the requirements are wrong) is as prevalent as, and a much more expensive problem than, building the system wrong (meaning the implementation is wrong). Determining whether the design is correct is a difficult challenge, and one that is exacerbated if verification of the design does not take place until the entire system is built and tested.
It is very typical for an embedded system to require that either a portion or the entire hardware platform be custom-developed for the project at hand. When the traditional waterfall process is used, the start of the testing process is delayed even further, following a lengthy wait for the custom hardware.
The waterfall process makes verification of the system much more complex. When system problems are discovered, one does not have the benefit of having separately verified the software, so the problems are harder to isolate. Testing systems only at the end of the development process is a complex and therefore time-consuming and expensive undertaking.
Typically, the market window and development time line are very short for embedded systems. The only way to identify problems in a completed design is to know about and specifically look for them, either with testing or with formal mathematical analysis. The discovery of such problems late in the process may entail a large amount of rework or redevelopment, which can mean entirely missing the market window.
By contrast, early verification enables you to ensure up front that the software does what the system specification describes. It makes it possible to check and change the core behavior of the software before the hardware is available. Later, the hardware can be verified with tests that use known-good software behavior.
Early verification shifts the correction of design errors from the test phase back to the design phase, where they are much less costly to fix. After all, design errors should be corrected in the design phase, not at the end of the development process.
In order for verification of a design of any reasonable scale to be effective, it must be possible to execute the design at all stages of development, starting from the earliest stages. It must be possible to test small pieces individually and build them into larger assemblies for larger tests. It's not feasible to require that the entire system, or even large pieces, be built before any verification can be done.
Verification must be done beginning with the first few fragments of the design model. The idea is to construct a large-scale system from smaller pieces that are known and demonstrably error-free.
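The build-up from verified fragments to larger assemblies can be sketched in code. The sketch below uses Python for brevity, and the `SpeedSensor` and `CruiseController` classes are hypothetical stand-ins for generated model fragments, not output of any real code generator:

```python
# Hypothetical model fragments: verify each piece alone, then the assembly.

class SpeedSensor:
    """Small model fragment: converts raw wheel ticks to a speed."""
    def speed(self, ticks: int, interval_s: float) -> float:
        return ticks / interval_s  # ticks per second

class CruiseController:
    """Larger assembly built on an already-verified fragment."""
    def __init__(self, sensor: SpeedSensor, target: float):
        self.sensor = sensor
        self.target = target
    def throttle(self, ticks: int, interval_s: float) -> str:
        current = self.sensor.speed(ticks, interval_s)
        return "accelerate" if current < self.target else "hold"

# Verify the small piece first...
sensor = SpeedSensor()
assert sensor.speed(100, 2.0) == 50.0

# ...then verify the assembly, reusing the known-good fragment.
ctrl = CruiseController(sensor, target=60.0)
assert ctrl.throttle(100, 2.0) == "accelerate"
assert ctrl.throttle(200, 2.0) == "hold"
```

Because the small fragment was demonstrably correct before composition, any failure at the assembly level points to the new interaction rather than to the reused piece.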
To understand if the execution of the design matches the intention, it must be possible to compare the execution of the design with the design itself. The ramifications of this are twofold.
First, the executable form must come directly from the design model. Manually creating a prototype to emulate the design is not satisfactory, as traceability back to the design is lost. Testing a manually created prototype merely tests how well the prototype was coded. It does not directly test the design.
In addition, generated code itself must be directly and closely related to the design. Schemes requiring engines or virtual machines to execute the design are less useful than directly converting the design into code, because the virtual-machine technique introduces many additional questions about the source of behavioral problems. For example, is the observed problem due to the design or due to a problem in the virtual machine?
Second, and far more important, it must be possible to associate the states of the execution directly back to the design. The accepted technique for doing that, known as design-level debugging, is to provide feedback highlighting the design model as the executable is tested.
This is analogous to a source-level debugger highlighting textual source statements while executing the compiled binary. Embedded systems are highly behavioral and commonly use state machines, or, in the case of the most common design language, the UML, statecharts and activity diagrams. It is imperative for the designer to be able to execute the design and watch the state transitions in order to understand the behavior of the system.
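What a design-level debugger makes visible is the transition-by-transition trace of the statechart. A minimal sketch, with a hypothetical state table and events chosen purely for illustration:

```python
# Minimal sketch of executing a state machine and logging the transitions,
# the same information a design-level debugger highlights on the statechart.
# States and events here are hypothetical.

TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def run(events, state="idle"):
    trace = [state]
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)  # undefined events are ignored
        trace.append(state)
    return trace

# Executing the design makes its behavior observable step by step.
assert run(["start", "pause", "start", "stop"]) == [
    "idle", "running", "paused", "running", "idle"
]
```

Watching such a trace against the statechart diagram is what lets the designer judge whether the executed behavior matches the intended behavior.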
Another important behavioral diagram in the UML is the sequence diagram, which shows the ordering and interaction of interobject messaging across time. A valuable verification technique is to use a model-level debugging tool to capture the actual messages sent during a verification pass and compare them against a sequence diagram showing the intended message sequence, which serves as the design specification.
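The comparison itself amounts to diffing two ordered message traces. A sketch, with hypothetical message names standing in for the interobject messages a tool would capture:

```python
# Sketch of diffing a captured message trace against the intended
# sequence-diagram ordering. The message names are hypothetical.

def first_divergence(intended, captured):
    """Return the index of the first mismatch, or None if the traces agree."""
    for i, (want, got) in enumerate(zip(intended, captured)):
        if want != got:
            return i
    if len(intended) != len(captured):
        return min(len(intended), len(captured))
    return None

intended = ["open", "read", "process", "close"]   # the design specification
captured = ["open", "process", "read", "close"]   # what the execution sent

i = first_divergence(intended, captured)
assert i == 1  # design expected "read", but the execution sent "process"
```

The point of divergence localizes the behavioral error to a specific message in the design, rather than leaving the developer to infer it from end-of-line system symptoms.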
Many embedded systems reuse legacy code in order to leverage existing, known-good (tested) designs. In order to enable efficient reuse, the verification must be able to take advantage of the reused design and code without impinging on the validity of the model-code relationship.
If the reused design and code are not verified with the new portions of the design, the interaction between the new and the reused pieces of the system would become a future risk. Such a situation might require expensive redesign at the back end of the development process. The only way to avoid that is to incorporate the legacy design/code directly into the design being developed, and generate code incorporating both. Assembling the legacy code into the generated code later raises questions as to whether the integration introduced any problems not present in the design itself.
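Incorporating legacy code directly means the generated model elements call the legacy routines, so both are exercised by the same executable. A sketch of that pattern, where both the legacy function and the model wrapper are hypothetical:

```python
# Sketch: fold a legacy routine directly into the model so the generated
# code exercises new and reused pieces together. Names are hypothetical.

def legacy_checksum(data: bytes) -> int:
    """Existing, already-tested legacy routine (left unchanged)."""
    return sum(data) & 0xFF

class FrameBuilder:
    """New model element that delegates to the legacy code, so the
    new/legacy interaction is verified in the same early executable."""
    def build(self, payload: bytes) -> bytes:
        return payload + bytes([legacy_checksum(payload)])

frame = FrameBuilder().build(b"\x01\x02")
assert frame == b"\x01\x02\x03"  # checksum of 0x01 + 0x02 is 0x03
```

Because the interaction is verified as part of the design model, a later integration step cannot quietly introduce problems that were absent from the design itself.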
When a design is tested during the design phase, the design often must be executed before the target hardware is available. This means that the verification must be executable on the development host, whether it's based on Windows, Unix, Linux or on a wire-wrap hardware platform.
Early verification of a design also requires comparing the software behavior on the development platform with the behavior on the prototype platform and with that on the target platform. Retargeting and redeploying the executable design must also be part of the verification procedure.
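Host execution before the target exists is usually achieved by isolating platform dependencies behind a thin abstraction, so the design logic is identical on host and target and only the platform layer is swapped. A sketch under that assumption, with hypothetical class names:

```python
# Sketch of isolating platform dependencies behind a small abstraction so
# the same design logic runs on the development host now and on the target
# later. The interfaces shown are hypothetical.

class HostTimer:
    """Host-side stand-in for the target's hardware timer."""
    def __init__(self):
        self.now_ms = 0
    def tick(self, ms: int):
        self.now_ms += ms

class Watchdog:
    """Design logic: identical on host and target; only the timer differs."""
    def __init__(self, timer, timeout_ms: int):
        self.timer, self.timeout_ms = timer, timeout_ms
        self.last_kick = timer.now_ms
    def kick(self):
        self.last_kick = self.timer.now_ms
    def expired(self) -> bool:
        return self.timer.now_ms - self.last_kick > self.timeout_ms

timer = HostTimer()
wd = Watchdog(timer, timeout_ms=100)
timer.tick(50)
assert not wd.expired()
timer.tick(100)
assert wd.expired()  # the same behavior is expected later on the target timer
```

Retargeting then means substituting the target's timer implementation and rerunning the same verification suite, making host/target behavioral comparison straightforward.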
A visual application development platform like Rhapsody from I-Logix, which can automatically generate production-quality source code from a UML design model and supports all the capabilities necessary for early verification, provides a rapid and easy way to perform early verification. But several questions still remain.
Should there be one early-verification step at the end of the design phase or a number throughout? If more than one, how many and when? Are the verification vehicles generated ad hoc as necessary or is the functionality for each verification vehicle planned?
The most direct way to utilize code generation for early verification in the design of embedded systems is to plan to generate verification code during the development process. An effective way to do this is to plan to generate code based on specific functionality and to verify it at predetermined development milestones.
An iterative, or spiral-based, development process not only enables such early verification but also provides a well-structured strategy for defining the design incrementally. One such iterative method is the Rapid Object-Oriented Process for Embedded Systems (ROPES), defined by Bruce Douglass of I-Logix. Iterative life cycles deal with the incompleteness problem by using a more representative model, one built on the premise that the waterfall life cycle is planned to execute more than once, with each pass refining and extending the system.
See related chart