Some of the most innovative and revolutionary results occur when methods from one field are combined with methods from another. Networked file systems, digital signal processing and graphical user interfaces combined with Internet-enabled hypertext are just three of many examples. A new juxtaposition is emerging that could be just as profound in its effects--combining the traditional embedded paradigm with the classical network-based client/server model.
Without a doubt, the Internet revolution is reshaping our world and our expectations of it. Extrapolating this revolution to its logical end, we can soon expect to live in a completely (and to some extent, constantly) connected world. We expect this connectivity to conform to naturally occurring domains, such as one's personal space, house or car, local business or global enterprise.
Each domain uniquely defines its own needs and restrictions, leading to a wide spectrum of computing requirements. For example, personal connectivity will be through devices such as cell phones, personal digital assistants (PDAs) or even the newly emerging biocomputers. This domain has severe size and power restrictions, which will naturally limit the local computational power.
Real-time for mobile apps
At the next sphere of interconnectivity, mobile information systems have less severe physical constraints but instead carry a significant real-time component, needed to handle navigation and control effectively. At the home, business and enterprise levels, the range of communication applications will make available bandwidth and latency the top concerns. The common thread tying all these domains together is that increased connectivity brings information quickly and transparently to where it is needed.
What will drive this connectivity revolution? Clearly, the foundation has already been laid by the Internet, which has given us the infrastructure to create a globally wired world where information can move about freely. This infrastructure will still need to evolve to cope with the scalability demands being placed on it. We see this evolution occurring along several dimensions: storage-area networks, global file systems, systems-on-chip, RISC and DSP, and new software protocols.
Even with those necessary elements in place, or nearly so, something is still missing. That missing link will be the formal combination of embedded systems with the client/server computing paradigm.
Many embedded systems we can point to today are obviously client/server architectures, such as cellular telephones and the base stations that drive them. But those hybrids of embedded and client/server computing have not unleashed the revolutionary power that we expect.
Cellular telephones have certainly had an impact on society, and they have revolutionized the way that many of us do business today. But a key benefit of the client/server model is that it gives developers the freedom to partition and combine applications between clients and servers to meet application-specific requirements. Cellular telephones have not given us the freedom to implement new applications or create new business platforms in the way that PCs and the Web have done.
Let's look at how the combination of embedded systems and true client/server computing can open up revolutionary new possibilities. From the perspective of the embedded-systems programmer, each application domain defines its own needs for compute and connectivity capacity. For example, personal systems like PDAs and cell phones are limited in their capacity by physical size and power constraints. Similarly, the bandwidth requirements of an integrated "smart" house are likely modest but constrained by having to modulate communications over existing ac power lines.
At the other extreme, the bandwidth demands of a networked game console or set-top box are almost insatiable. From those examples we see that the physical application domain is the primary design point for embedded systems. Paraphrasing Einstein, the hardware must be powerful enough to satisfy the computing and bandwidth requirements, but no more powerful than this (in order to reduce cost and power requirements).
Given a hardware specification, the embedded designer must figure out how to shoehorn the appropriate software onto the system to meet all the functional requirements of the final product specification. Implementing product services and functionality is difficult in its own right, often requiring innovative solutions to challenging user-interface and customization demands; the physical constraints of the embedded environment only make it harder.
This problem can be made easier if the designer can defer some functionality or services to remote servers. For example, one of the biggest challenges facing digital camera design is how to store the images until they can be downloaded to a host system. Solutions range from flash file systems to removable floppies, but any solution places severe demands on the hardware to provide the storage and on the software to manage it. Now consider the design space of a "connected" digital camera. In this architecture, the images are immediately uploaded to a server, so that most of the storage burden is removed. One might still want to keep the last picture or two around for previewing.
Higher-level functions such as browsing, deleting outtakes or renaming can also be deferred to the server, so that the camera only needs to retain the user interface and picture-processing software. Admittedly, some of the storage hardware is replaced by the new wireless interface, but this hardware benefits from commodity influences on price and density.
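To make the idea concrete, here is a minimal Java sketch of the upload path for such a connected camera. The server URL is hypothetical, and a real device would add retries and authentication; the point is simply that a network write replaces the on-board file system.

import java.io.*;
import java.net.*;

// Hypothetical upload path for a connected camera: each captured
// frame is POSTed to a storage server, so the device keeps only a
// preview buffer rather than a full flash file system.
public class CameraUploader {
    private final URL server;

    public CameraUploader(URL server) {
        this.server = server;
    }

    public void upload(byte[] jpegFrame) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) server.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "image/jpeg");
        OutputStream out = conn.getOutputStream();
        try {
            out.write(jpegFrame);   // the network replaces local storage
        } finally {
            out.close();
        }
        if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
            throw new IOException("upload failed: " + conn.getResponseCode());
        }
    }
}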
The key point is that by focusing on the camera's primary function--taking pictures--the hardware design is simplified, which lowers the power consumption and price of the device while improving its robustness and time-to-market. As well, the software engineer's job is dramatically simplified, since time- and space-consuming high-level functionality need not reside on board the embedded device.
In general, each level of connectivity resolves to the availability of a service, and the issues of connectivity, devices and functionality are replaced by the definition of servers, clients, protocols and services. This is not just a change in nomenclature, but rather a suggestion of how we can leverage a whole new set of solutions in our efforts to create a completely connected world.
For a given application, the developer must trade off the functionality resident on the client against space and communication-bandwidth constraints. In the worldview of the typical embedded-systems programmer, applications are partitioned as a function of device constraints. For example, space- and power-constrained applications must lean toward the thin-client side, where most of the functionality is accessed dynamically from the server. Consider a cell phone with a Global Positioning System receiver used to find routes and locations.
Because the phone has so little memory, most services should execute on regional servers, which cache local maps from a main database. Other applications, such as an in-flight navigation system, may choose a thick-client approach, partly because they have fewer physical restrictions, but also because, in this example, fault-tolerance considerations may make it undesirable to rely on a noisy communication channel or on the continuous availability of working land-based servers.
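A thin-client sketch of the cell-phone example might look like the following Java fragment. The host name, port and line-oriented protocol are invented for illustration; what matters is that the handset ships only coordinates and renders whatever the regional server returns.

import java.io.*;
import java.net.*;

// Thin-client sketch of the GPS phone: all route computation is
// deferred to a regional map server; the handset sends coordinates
// and displays the turn list it gets back. The protocol is invented.
public class RouteClient {
    public static void main(String[] args) throws IOException {
        Socket s = new Socket("maps.regional.example", 7010);
        try {
            PrintWriter out = new PrintWriter(
                new OutputStreamWriter(s.getOutputStream()), true);
            BufferedReader in = new BufferedReader(
                new InputStreamReader(s.getInputStream()));

            out.println("ROUTE 47.6062,-122.3321 TO 47.6205,-122.3493");
            String turn;
            while ((turn = in.readLine()) != null && !turn.equals("END")) {
                System.out.println(turn);   // stand-in for the phone's display
            }
        } finally {
            s.close();
        }
    }
}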
It is also possible for the client-resident functionality to vary dynamically. Consider a PDA that can display stock quotes, e-mail or theater listings. Since each person has different usage habits, not all those capabilities should be present on the PDA at boot time. Instead, the device begins as a thin client, dynamically downloads the needed functions on demand and caches them for future use if space is available. Thus the PDA becomes a thicker client, capable of doing more autonomously.
This approach has other advantages as well. For example, the downloadable software can be administered centrally, so that each client always gets the latest version of the software. As well, patches to the OS can be incorporated dynamically, without having to return the PDA to a physical service center.
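Java's standard class-loading machinery already supports this kind of on-demand thickening. The sketch below, with a hypothetical code server and class name, pulls an application class over the network the first time it is used; the loader keeps it resident afterward.

import java.net.*;

// On-demand functionality for the PDA example: the device boots thin
// and fetches application classes from a central server the first
// time they are used. The server URL and class name are hypothetical;
// we assume the downloaded class implements Runnable.
public class OnDemandLoader {
    public static void main(String[] args) throws Exception {
        URL codeServer = new URL("http://apps.example.net/pda/");
        ClassLoader loader = new URLClassLoader(new URL[] { codeServer });

        // First use pulls the bytecode over the network; the class
        // then stays resident, thickening the client.
        Class viewer = loader.loadClass("StockQuoteViewer");
        Runnable app = (Runnable) viewer.newInstance();
        app.run();
    }
}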
In those examples, and most examples of embedded systems today, we find that the client/server partitioning is purely device-driven, and represents neither a scalable nor an open architecture. Consequently, every change in the device means either a poorly partitioned system or a newly obsolete one. Thus, the revolution never reaches critical mass.
Services at center
We believe that a new opportunity exists to think first about the client/server partitioning--in particular, to regard every system service or resource as something that could be provided by a server somewhere. Client computers can then be built around those services, incorporating some on the device and accessing others remotely.
This has interesting implications for both client-device and server-device manufacturers. For client-device manufacturers, it means the ability to build multipurpose devices that can provide access to many different services. For server-device manufacturers it means that potentially every client device will drive demand for at least one server.
That scenario could result in server market volumes exceeding client market volumes. If this happens, expect to see more thin servers, which will in turn bring multi-tiered client/server computing into the embedded system world.
Consumers drive designs
Some interesting developments are already taking place in the consumer electronics marketplace.
People are building Web-enabled cameras, video recorders and other devices that can be programmed, controlled and extended using standard Web protocols. Why Web-enable a VCR? If you want to record a show for later viewing but cannot physically get home in time to program the VCR, you can use a Web browser to program it remotely.
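As a toy illustration, a Web-enabled VCR needs little more than a socket listener that understands one request. Everything here--the port, the URL scheme and the scheduleRecording stub--is invented; a real product would use a complete HTTP implementation.

import java.io.*;
import java.net.*;

// Toy Web-enabled VCR: a one-request HTTP listener that arms a
// recording timer when a browser asks it to.
public class WebVcr {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(8080);
        while (true) {
            Socket browser = server.accept();
            BufferedReader in = new BufferedReader(
                new InputStreamReader(browser.getInputStream()));
            String request = in.readLine();   // e.g. "GET /record?ch=5&min=90 HTTP/1.0"
            if (request != null && request.startsWith("GET /record")) {
                scheduleRecording(request);
            }
            PrintWriter out = new PrintWriter(browser.getOutputStream(), true);
            out.print("HTTP/1.0 200 OK\r\n\r\nrecording scheduled\r\n");
            out.flush();
            browser.close();
        }
    }

    static void scheduleRecording(String request) {
        // Hypothetical: parse channel and duration, then arm the tuner timer.
        System.out.println("armed: " + request);
    }
}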
But client/server architectures are not limited to what can be done via the Web. Indeed, some Web protocols are too heavyweight for embedded systems, and so people are creating kinder, gentler protocols. As long as the implementations of those new protocols are open-source, they stand a chance of joining other standard protocols.
The Java programming language is also helping to make platforms more programmable, but it remains to be seen whether Java will really become an industry standard without an open-source implementation. If such an implementation becomes available, it could certainly provide a strong platform for embedded client/server computing.
Finally, there is a technology that may prove to be the real catalyst for embedded client/server computing: Linux. Linux is already making its mark in traditional client/server computing. It's already passed Solaris to become the second most popular server operating system, and is on track to become the number-one server operating system by 2003.
More recently there has been considerable activity in adapting Linux to meet embedded-system requirements. A number of companies are working with Linus Torvalds, the creator of Linux, to provide state-of-the-art power management and other features required by thin client and thin server devices.
To achieve a continuum of responsibility between client and server, one must make the line between the two fluid. This can be accomplished only by defining APIs and building an infrastructure that supports the continuum. As more embedded systems take advantage of client/server architectures, we predict that clients and servers will specialize, and that servers supporting open-source protocols will become standard in much the same way that the World Wide Web supplanted any number of systems that came before it.
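In practice, making the line fluid means programming against a service API whose implementation may be local or remote. The following Java sketch (all names invented) shows how moving functionality between client and server reduces to swapping the implementation behind one interface.

// One service API, two partitionings. Callers never know whether the
// work is done on the device or deferred to a server.
interface AddressBook {
    String lookup(String name);
}

// Thick-client binding: the data lives in local memory or flash.
class LocalAddressBook implements AddressBook {
    public String lookup(String name) {
        return "local record for " + name;     // stand-in for a flash lookup
    }
}

// Thin-client binding: same API, but the work is deferred to a server.
class RemoteAddressBook implements AddressBook {
    public String lookup(String name) {
        // Hypothetical: a network query would go here.
        return "server record for " + name;
    }
}

public class Partitioning {
    public static void main(String[] args) {
        // Moving the client/server line is now a one-line change.
        AddressBook book = new RemoteAddressBook();
        System.out.println(book.lookup("alice"));
    }
}

Because callers bind only to the interface, the partition can track device constraints instead of being frozen into the code.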
What does this mean to the developers who are making the connected world happen? As always, time-to-market is everything, so the tools used are critically important. In particular, we believe that the key to productivity is a single set of tools that can span the continuum from deeply embedded to enterprise-level computers. The operating system is clearly a key tool, or building block, that must be deployed at every level. To span the range of applications, the operating system of the connected world must scale. It must scale down to minimal configurations on esoteric hardware, and scale up to handle high-demand transactional workloads on SMP servers.
This is possible only through a consistent API or a hierarchical set of APIs, and through aggressive source-level configuration. Highly configurable operating systems, scalable APIs and multiplatform development tools make it possible for the entire community to implement both the unique functionality and the common standards needed to bridge the disciplines of embedded-systems programming and client/server computing.
Just as open source was the key catalyst for making the Web truly a worldwide infrastructure, we believe that open source will be the catalyst for embedded client/server computing. We'll know in 10 years whether or not this prediction is correct.