In today's fast-changing market environment, embedded-systems developers must contend with last-minute design changes and a growing list of potential target architectures incorporating multiple specialized processors. They must also manage complicated interactions between software modules hardwired to low-level system resources; accommodating design changes in such code delays projects and lengthens already costly software development cycles.
Indeed, in the typical embedded development environment (a situation only worsened by the emergence of a variety of connected designs), software issues, rather than hardware issues, account for the greater share of complexity.
A new approach, the Hines-Ortega methodology, addresses this complexity by enabling designers to work at a higher level of abstraction. In essence, Hines-Ortega makes it possible for designers to develop, test and debug complex software more efficiently, independent of low-level implementation issues. This coordination-centric methodology lets designers separate functional behavior from the coordination of separate software components, thus simplifying design and debugging, speeding integration with hardware, enabling code reuse and allowing easy retargeting of embedded designs to different hardware architectures.
In embedded-systems design, software traditionally has lagged hardware development; it has also contributed heavily to extended schedules and delivery delays for embedded-systems products. Yet, as companies insist on even shorter time-to-market, embedded software designers find themselves trapped in design methodologies that force them to wait for hardware before they can begin development in earnest. When the designers are finally able to begin software development, they find that each project demands custom designs, because potentially useful software from previous projects is tied too tightly to specific system configurations to permit easy reuse.
Companies that rely on embedded systems for product differentiation risk even more extensive delays. Traditional development methods are collapsing under the stress of tighter deadlines. In fact, the trend toward architectures based on multiple heterogeneous processors compounds the burden on development.
Array of processors
At the architectural level, developers are finding an array of low-cost special-purpose processors for network, graphics and digital signal applications. These specialized processors promise to increase system capability while lowering overall costs compared with designs based on one expensive general-purpose processor. For developers of embedded systems, however, the growth of potential target architectures strains development resources and capabilities in a market where more than 50 percent of embedded product schedules are already several months behind.
Indeed, hardware design changes that occur late in development exacerbate problems present in heterogeneous, distributed and networked multiprocessor-based designs. In traditional development methods, any change in the underlying hardware architecture translates into a major change in associated software, because developers are unable to separate software functions from low-level interaction with hardware and with other software components. The details of hardware-dependent communications and control mechanisms are combined, literally, with software behavior. Consequently, any change in hardware configuration requires significant modifications in software.
Beyond dealing with the sheer complexity of multiprocessor architectures, product teams find traditional development strategies inadequate. Typically, companies looking to exploit these architectures use separate development teams, each focusing on a specific processor type. In these cases, the development teams lack a consistent system-level vision, so system-level considerations are inevitably submerged or compromised at best. Development then deteriorates into a series of suboptimal decisions, often responses to problems uncovered when a change in one subsystem ripples across other subsystems. The notion of software reuse and platform retargeting is lost amid the myriad details needed to get the design out the door.
The efficiency of embedded-systems software development depends on separating the behavior of software components from the implementation details needed to coordinate them with other application components, service routines, operating-system software or hardware resources. This focus on separating coordination from behavior is embodied in the Hines-Ortega methodology. To create the separation between behavior and interaction, the methodology introduces a higher level of abstraction for embedded design. It also provides a conceptual framework for tools for graphical design entry, simulation, system-level debugging, platform targeting and code synthesis.
In the Hines-Ortega methodology, developers approach embedded-systems design in two phases: target-independent and target-dependent. In the target-independent phase, developers design the system as an abstract model, simulate the system model and debug its operation to ensure correct operation at the component level. In the target-dependent phase, developers map the verified design to specific hardware resources, automatically generate platform-specific C or Java code from the mapped model, reuse legacy code and engage in normal unit and system test procedures.
The ability to complete these target-independent and -dependent phases stems from the designer's ability to create an abstraction of an embedded system. This abstraction isolates software behavior in components and software interactions in coordinators, which describe the details of interactions between components. Components interact through associated coordination interfaces, which expose only the parts of a component that interact with other components. In turn, coordination interfaces connect components to coordinators, which explicitly describe the connections, states and transactions that are possible between components.
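The component/coordinator/coordination-interface split can be sketched in Java (one of the two languages the methodology reportedly synthesizes). The type names here (CoordinationInterface, EchoComponent, Coordinator) are illustrative assumptions for this article, not an actual Hines-Ortega API:

```java
// Illustrative sketch; all names are hypothetical, not a published API.
// A coordination interface exposes only the interaction points of a component.
interface CoordinationInterface {
    void receive(Object message);   // the sole entry point other parties see
}

// A component implements behavior; it knows nothing about its peers.
class EchoComponent implements CoordinationInterface {
    final StringBuilder log = new StringBuilder();
    public void receive(Object message) { log.append(message); }
}

// A coordinator owns the connections between components and describes
// which transactions are possible between them.
class Coordinator {
    private final java.util.List<CoordinationInterface> parties =
            new java.util.ArrayList<>();
    void connect(CoordinationInterface ci) { parties.add(ci); }
    void broadcast(Object message) {
        for (CoordinationInterface ci : parties) ci.receive(message);
    }
}
```

Note that EchoComponent holds no reference to any peer; only the coordinator knows which components are connected and how messages flow between them.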
The coordination-centric approach emphasizes explicit coordination between loosely coupled software components, deferring implementation-specific issues until later in the process. Consequently, software developers can begin creating software early in the system development process, in parallel with hardware development but independent of specific implementation details.
In practice, the application of the coordination-centric methodology helps developers work more efficiently by focusing on application issues rather than on implementation details. For example, in building a simple round-robin scheduler, designers would create individual components that communicate through their coordination interfaces with a coordinator. The coordinator contains all the information about the round-robin scheduling protocol. With that approach, no component is required to keep references to other components. In fact, the components do not even need to know that they are coordinating in a round-robin protocol. Consequently, designers can focus on the application requirements instead of interaction protocols and implementation details. The coordinator itself maintains all required references, and if requirements dictate a change in the scheduling protocol, changes are confined to the coordinator or coordination interfaces.
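The round-robin example above can be sketched in Java; Schedulable and RoundRobinCoordinator are hypothetical names assumed for illustration, since the article describes the pattern rather than any concrete code:

```java
// Hypothetical sketch of a round-robin coordinator.
// Each component exposes a single coordination interface, runOnce(); the
// round-robin policy itself lives entirely in the coordinator.
interface Schedulable {
    void runOnce();
}

class RoundRobinCoordinator {
    private final java.util.List<Schedulable> components =
            new java.util.ArrayList<>();
    private int next = 0;

    void connect(Schedulable s) { components.add(s); }

    // Grant the next turn. Components hold no references to one another
    // and never learn that the policy is round-robin.
    void step() {
        components.get(next).runOnce();
        next = (next + 1) % components.size();
    }
}
```

Switching to, say, a priority-based policy would change only step(); the components themselves remain untouched, which is the confinement of protocol changes the methodology claims.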
After designing the abstract system model, designers can verify correct coordination of component behaviors by simulating and debugging the model. The coordination-centric design enables a powerful system-level debugging capability. It also provides a system-level view across multiple software modules and lets developers easily push down in the software hierarchy as needed to isolate and fix problems. Most important, this capability remains entirely independent of implementation.
Indeed, the same abstract design can be implemented on a heterogeneous multiprocessor platform or on a monolithic single-processor platform. In the final step of the target-dependent phase, designers map the components and coordinators to specific target platform resources such as communications channels, subsystems and processors. The code-synthesis phase generates software that is customized for the given target hardware and operating system without requiring an intermediate software abstraction layer. That is, the generated software makes calls directly into the native operating system to exploit its capabilities and unique personality.
The ability to decouple behavior from interactions sets the coordination-centric methodology apart from two common approaches used in system design: object-oriented programming (OOP) and Unified Modeling Language (UML) methods.
Designers using OOP define the set of interfaces that one object presents to another as a set of methods, or procedures.
In theory, OOP's support for dynamic type checking permits objects to interact without prior knowledge of the rules of interaction. In practice, however, dynamic type checking has given way to static type checking to ensure predictable interactions. Static type checking requires that objects embed specific knowledge of the methods available in other objects.
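A minimal Java sketch illustrates the coupling that static type checking introduces; Sensor and Logger are hypothetical classes invented for this example:

```java
// Hypothetical example: under static type checking, the caller is bound
// to the callee's declared interface at compile time.
class Sensor {
    int read() { return 42; }   // the method a collaborator must know by name
}

class Logger {
    // Logger compiles only because it names Sensor's exact type and method.
    // Renaming Sensor.read() would break Logger at build time; this is the
    // coupling the coordination-centric approach seeks to avoid by routing
    // interactions through coordinators instead.
    int sample(Sensor s) { return s.read(); }
}
```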