Fast networks and low-cost processors are making distributed systems for real-time applications attractive from a simple cost-of-goods perspective. However, the complexity of the communications code has made them impractical in terms of development costs.
Distributed real-time systems applications typically depend upon fast communications between nodes for the system to work. The communications required fall into several classes:
- Signaling. Repetitive data that is sent from one application to others;
- State synchronization. Distributing asynchronous updates to variables within hierarchical structures from one application to another or multiple others;
- Event management. Sending status from multiple nodes to a single event manager;
- Bulk distributions. Moving large amounts of data from one node to another or to many others.
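As a rough illustration, the four classes map onto different delivery characteristics. The sketch below pairs each class with one plausible reliability and fan-out choice; all names are invented for illustration and do not come from any standard.

```python
# Hypothetical sketch: the four traffic classes of a distributed real-time
# system, each paired with delivery characteristics a middleware might
# associate with it. Names and mappings are illustrative only.
from enum import Enum, auto

class TrafficClass(Enum):
    SIGNALING = auto()               # repetitive data, one app to others
    STATE_SYNCHRONIZATION = auto()   # async updates to shared variables
    EVENT_MANAGEMENT = auto()        # status from many nodes to one manager
    BULK_DISTRIBUTION = auto()       # large transfers to one or many nodes

# One plausible mapping of class -> (delivery mode, typical fan-out).
# Signaling is repetitive, so a lost sample is soon replaced; the others
# generally need reliable delivery.
DELIVERY = {
    TrafficClass.SIGNALING:             ("best-effort", "one-to-many"),
    TrafficClass.STATE_SYNCHRONIZATION: ("reliable",    "one-to-many"),
    TrafficClass.EVENT_MANAGEMENT:      ("reliable",    "many-to-one"),
    TrafficClass.BULK_DISTRIBUTION:     ("reliable",    "one-to-many"),
}
```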
Low-level protocols such as the User Datagram Protocol (UDP/IP) and the Transmission Control Protocol (TCP/IP) provide adequate transport services. However, it is just too hard to do the additional programming to support distributed application services such as discovery (finding out who else is out there), communication channel setup, quality-of-service levels, and hot-swap substitution.
Middleware such as the Common Object Request Broker Architecture (CORBA) and the Java Messaging Service (JMS) reduces the programming effort and provides useful services; however, the program size, complexity, processing overhead, and lack of determinism increase latency and inject too much uncertainty to be useful in real-time control logic.
Several popular technologies provide the foundation upon which communications standards for distributed real-time systems can be built. One requirement that may not be apparent is that distributed real-time systems need to be able to communicate with office automation technologies. In the new appliance-centric Web services environment, there are several reasons why this is so:
- Information exchange. The data used in or generated by the real-time components is important to the enterprise;
- User interfaces. Operator training is reduced when familiar PC browser and windowing technologies are used;
- Economics. With R&D dollars being spent on making end-user systems more powerful and less expensive, there just isn't enough return on investment (ROI) in smaller market niches.
There are three broad technologies around which the communications standards for distributed real-time systems are emerging: the Internet protocols, such as TCP, UDP, and HTTP; commercial middleware; and publish-subscribe.
TCP, UDP, and HTTP are ubiquitous and have active standards organizations behind them to ensure their ongoing vitality. Commercial middleware such as CORBA sits atop the operating system or, for communications, the network stack, reducing programming effort and time-to-market.
However, the one-to-many aspects of signaling, event management, state synchronization, and bulk transfers in a net-centric distributed environment are best served by a publish-subscribe programming model.
A typical publish-subscribe system is provided via middleware and organized around the concept of publisher and subscriber objects. These objects establish logical data streams, typically identified by a topic, a string such as "boiler_1_temp". Communication occurs between the publishers producing "issues" on a topic and the subscribers that have registered an interest in that topic.
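The topic-based model can be sketched in a few lines. The in-process dispatcher below uses invented names (Broker, subscribe, publish); a real middleware product distributes issues across the network rather than through a local table, but the decoupling is the same: publishers never name their subscribers.

```python
# Minimal in-process sketch of topic-based publish-subscribe.
# Illustrative only: commercial middleware does this across a network.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register an interest in a topic, identified by a string.
        self._subscribers[topic].append(callback)

    def publish(self, topic, issue):
        # Deliver the issue to every subscriber registered on the topic;
        # the publisher never needs to know who is listening.
        for callback in self._subscribers[topic]:
            callback(issue)

broker = Broker()
readings = []
broker.subscribe("boiler_1_temp", readings.append)
broker.publish("boiler_1_temp", 351.2)   # delivered to the subscriber
broker.publish("boiler_2_temp", 348.9)   # no subscribers, silently dropped
```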
The middleware distributes each issue so that the applications don't need to constantly manage who's supposed to talk to whom. In response to customer requests from a variety of markets, several companies are working through standards organizations to define publish-subscribe communications standards for real-time applications. The standards address two aspects of the communications:
- Application interface. End users want an industry-standard set of interfaces to free them from single-vendor tyranny. If their application uses a standards-based interface, the company can simply swap out one vendor's middleware for another's without changing their application.
- Wire protocol. End users want an industry-standard protocol so that they can mix and match products from multiple vendors. Most wire protocol standards also let them mix and match different versions of the protocol so that they can augment existing products over time.
The broad objectives are to define a data distribution service for real-time applications. The service should address the full range of data communication requirements for distributed real-time applications, and it should be implemented on standard protocols so that it can coexist with complementary protocols such as CORBA, DCOM, and HTTP on the same wire and communicate directly with workstations configured with the appropriate middleware.
The standards effort is being driven through the two key standards organizations for distributed computing over standard networks:
- The Object Management Group (OMG), the sponsors of CORBA, to define an application service;
- The Internet Engineering Task Force (IETF), the sponsors of the Internet Protocol, to define the low-level message formats and protocols.
The OMG has been managing the development of the CORBA standard and its accompanying services, for example the Name Server and Notification Server. The basic CORBA communications model is that of method invocation on remote objects. The method interfaces are defined in the OMG Interface Definition Language (IDL), and an IDL compiler generates the client stubs and server skeletons that the programmer uses to provide the supporting code. Data is communicated indirectly, through the input and output arguments of the method invocations or through their return values.
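The method-invocation model can be sketched in plain Python. ThermostatServant and ThermostatStub are invented names standing in for the compiler-generated skeleton and stub, and a direct function call stands in for the ORB transport; the point is that data moves only through arguments and return values, never as a first-class stream.

```python
# Sketch of the remote-method-invocation style: the client calls a stub,
# which forwards to the server-side servant. Illustrative only; there is
# no real ORB here, the "wire" is a direct function call.

class ThermostatServant:
    """Server-side implementation (the 'skeleton' side)."""
    def __init__(self):
        self._setpoint = 20.0

    def set_target(self, celsius):
        # Data flows in via an input argument.
        self._setpoint = celsius

    def get_target(self):
        # Data flows out via the return value.
        return self._setpoint

class ThermostatStub:
    """Client-side proxy, as an IDL compiler would generate."""
    def __init__(self, servant):
        self._servant = servant   # stands in for the ORB transport

    def set_target(self, celsius):
        return self._servant.set_target(celsius)

    def get_target(self):
        return self._servant.get_target()

stub = ThermostatStub(ThermostatServant())
stub.set_target(22.5)
```

Contrast this with the data-centric exchange described next, where the data itself, not a method call, is the unit of communication.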
In many real-time applications, the communication pattern is modeled as a pure data-centric exchange: applications publish (supply or stream) data, which is then available to the remote applications that are interested in it. Such applications are found today in C4I systems, industrial automation, distributed control and simulation, telecom equipment control, and network management.
Historically, distributed shared memory has been used to distribute data. However, that model evolved to share data among processors on a common bus and is difficult to implement efficiently over the Internet. The data-centric publish-subscribe model offers an alternative; unfortunately, many vendors provide solutions that, although similar in concept, are incompatible and proprietary. Examples of commercially available data-centric publish-subscribe systems are NDDS, from Real-Time Innovations Inc.; Splice, from Thales Naval Systems (Paris); JMS, a standard from the Java Community Process; SmartSockets, from Talarian Corp. (Los Altos, Calif.); and Rendezvous, from Tibco Software Inc. (Palo Alto, Calif.).
Last September the OMG published a request for proposal (orbos/01-10-01) for a data distribution service (DDS) for real-time systems. A DDS specification would establish a uniform, well-designed set of interfaces that application developers can adopt with confidence in its ongoing availability and in multivendor solutions.
A data-centric publish-subscribe (DCPS) system adds a data model to publish-subscribe communications. The exchange of data items, rather than the servicing of events, becomes the focal point of communications. The data model allows the application developer to express:
- Complex structures composed of heterogeneous data types and relationships among individual data items within the structure;
- Aggregation and consistency relationships among individual data items;
- Quality-of-service requirements in the data exchange and management.
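These three capabilities can be sketched as follows. Every name here (BoilerState, QosPolicy, and their fields) is invented for illustration and does not come from the eventual DDS specification.

```python
# Hedged sketch of what a DCPS data model lets the developer express:
# a structured topic type plus quality-of-service attached to the exchange.
# All names are illustrative, not from any standard.
from dataclasses import dataclass

@dataclass
class BoilerState:          # complex structure of heterogeneous types
    boiler_id: int          # identifies the instance being described
    temperature_c: float
    pressure_kpa: float     # consistency: both measurements belong to one
                            # sample and are always distributed together

@dataclass
class QosPolicy:            # QoS requirements on the data exchange
    reliable: bool          # reliable vs. best-effort delivery
    deadline_ms: int        # maximum interval between successive updates
    history_depth: int      # samples retained per instance

# A topic now carries a name, a data type, and a QoS contract.
boiler_topic = ("boiler_state", BoilerState,
                QosPolicy(reliable=True, deadline_ms=100, history_depth=1))
```

The design point is that the middleware, not the application, can now enforce the QoS contract, because the data's structure and requirements are declared rather than buried in application code.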
The distinguishing feature of DCPS systems is the inclusion of a data model that allows the refinement of the data streams. The OMG DDS for real-time systems is still a work in progress; the final specification is expected by September 2002. The real-time publish-subscribe protocol specification is in the final stages of preparation and is planned for delivery to the IETF this quarter.