In 1984 John Gage, an employee of Sun Microsystems, coined the phrase "The network is the computer." Scott McNealy, CEO of Sun Microsystems, borrowed the phrase and turned it into a marketing slogan. Today more than ever, this concept of the network being the computer has become a reality.
The network is used to distribute command, control, and status in addition to providing a transport layer for data distribution. The evolution of a standard network-based real-time processing architecture would enable the development of highly scalable, distributed, open-architecture, real-time processing systems.
Today, many real-time processing systems will no longer fit within the confines of a single chassis. In the past, bus bridges were used to interconnect several chassis. While this worked for some distributed real-time processing, today's high-rate data streams need to be processed by systems that can distribute the data among several processing elements without concern for chassis boundaries.
It is now possible to build an embedded processing system that consists of many processing elements that are interconnected by a network. If embedded engineers can do the trunk traffic engineering properly, networking technology today can provide an inexpensive way to free embedded solutions from the chassis backplane bus boundary.
There are many COTS processing elements currently available that can be used to assemble high-performance real-time processing systems. In most cases, it will require technical experts to combine these processing elements to form a functional system. These COTS processing elements can also be very expensive and may require a complete investment in all of the components necessary to build a given system prior to beginning any development.
In addition, it is also necessary to invest in development tools and embedded operating systems, such as VxWorks and µC/OS, that support loading code and reconfigurable hardware designs, communication between processing elements, and device drivers. A developer may spend $250,000, prior to expending any labor hours, just to assemble a target system and development environment.
I believe that if industry could agree on a few standards, much of today's development could be done using Linux and Eclipse on commodity computers, for far less money. We need to arrive at a point where a developer with no FPGA experience can develop applications that use FPGA fabric to accelerate processing.
The marriage of reconfigurable hardware and software is inevitable. It will not be long before commodity computers make FPGA fabric available to the application developer. Intel and AMD have opened up the front-side bus (FSB), and vendors such as Nallatech are building FPGA modules that plug into what were processor sockets.
Imagine a platform where a compiled program, when loaded, contains components that are text (code) segments for the given CPU to execute and components that are bitstream images to be loaded into an FPGA. The CPU and FPGA work together to execute the application.
The FPGA may be the coprocessor for the CPU, or the CPU may be the coprocessor for the FPGA. Tools will be required to enable this type of development. I believe MATLAB and LabVIEW are tools that could easily support this type of development, provided our industry can agree on the interfaces.
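One way to picture such a mixed CPU/FPGA executable is as a container of tagged sections, some holding CPU text and some holding FPGA bitstreams. The sketch below is entirely hypothetical — the magic value, section types, and field layout are illustrative inventions, since no such standard exists yet:

```python
import struct

# Hypothetical container format for a mixed CPU/FPGA executable: a 4-byte
# magic value followed by tagged sections. Section type 1 holds CPU text;
# type 2 holds an FPGA bitstream. All names and values here are illustrative.
MAGIC = b"MIXD"
SEC = struct.Struct("!BI")  # section type (1 byte), section length (4 bytes)

def build_image(sections):
    """Pack a list of (section_type, body_bytes) into one image blob."""
    out = bytearray(MAGIC)
    for sec_type, body in sections:
        out += SEC.pack(sec_type, len(body)) + body
    return bytes(out)

def load_image(blob):
    """Split an image back into CPU text and FPGA bitstream sections."""
    assert blob[:4] == MAGIC, "not a mixed executable"
    off = 4
    result = {"cpu_text": [], "fpga_bitstreams": []}
    while off < len(blob):
        sec_type, length = SEC.unpack_from(blob, off)
        off += SEC.size
        body = blob[off:off + length]
        off += length
        if sec_type == 1:
            result["cpu_text"].append(body)          # would go to the OS loader
        elif sec_type == 2:
            result["fpga_bitstreams"].append(body)   # would go to the FPGA config port
    return result

img = build_image([(1, b"\x90\x90"), (2, b"\xaa\x55" * 4)])
parts = load_image(img)
```

A real loader would hand the text sections to the operating system and stream the bitstream sections into the FPGA configuration port; the point is only that one file can carry both.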
It would not be difficult to build a complex real-time processing system if COTS processing elements supported a network interface and implemented a standard set of commands over TCP/IP. A standard network-based distributed real-time processing architecture would allow processing systems of the future to be put together in much the same way as modern home-entertainment systems.
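To make the idea concrete, here is a minimal sketch of what a standard command set over TCP/IP might look like for one processing element. The verbs (CONFIGURE, START, STATUS) and the newline-delimited JSON framing are assumptions for illustration only, not drawn from any existing standard:

```python
import json
import socket
import threading

def handle(conn):
    """Serve a hypothetical CONFIGURE/START/STATUS command set on one connection."""
    state = {"configured": False, "running": False}
    with conn:
        for line in conn.makefile("r"):
            cmd = json.loads(line)
            if cmd["op"] == "CONFIGURE":
                state["configured"] = True
                reply = {"ok": True}
            elif cmd["op"] == "START":
                state["running"] = state["configured"]  # refuse to start unconfigured
                reply = {"ok": state["running"]}
            elif cmd["op"] == "STATUS":
                reply = {"ok": True, "state": state}
            else:
                reply = {"ok": False, "error": "unknown op"}
            conn.sendall((json.dumps(reply) + "\n").encode())

# Stand up one "processing element" on localhost and drive it as a client.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=lambda: handle(srv.accept()[0]), daemon=True).start()

cli = socket.create_connection(("127.0.0.1", port))
replies = cli.makefile("r")

def send(op):
    cli.sendall((json.dumps({"op": op}) + "\n").encode())
    return json.loads(replies.readline())

r1 = send("CONFIGURE")
r2 = send("START")
r3 = send("STATUS")
```

Any vendor's element that spoke such a protocol could be configured, started, and monitored by the same control software, regardless of which chassis — or which building — it sat in.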
In the home-entertainment market, many companies can supply components that can be easily interfaced to create very complex systems. To accomplish this, manufacturers have agreed on standard connectors, electrical interfaces, and protocols. These standards allow a CD player to be connected to a receiver, graphic equalizer, TV, DVD player, and speakers. The end user is able to select the exact equipment necessary to satisfy their requirements.
Today, there is no standard for transporting real-time streaming data over a network between processing elements. We are, however, beginning to see real-time extensions to standard asynchronous networking emerge, such as the IEEE 1588 specification, which defines the Precision Time Protocol (PTP) for synchronizing clocks across a network.
This will not only transform the way Ethernet is used in industrial networking; it will also support instrumentation and measurement, as well as broader Internet and communications infrastructure such as wireless/mobile network applications, including CDMA, GSM, and femtocells.
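Once PTP gives every element a shared timebase, a streaming-data standard could be as simple as a packet header that stamps each payload with a nanosecond timestamp and sequence number. The header layout below is a hypothetical sketch — it is not taken from IEEE 1588 itself, which synchronizes the clocks rather than defining application data formats:

```python
import struct
import time

# Hypothetical stream-packet header: seconds, nanoseconds, sequence number,
# payload length. The field layout is illustrative; PTP only supplies the
# common timebase the timestamps would be drawn from.
HDR = struct.Struct("!QIII")  # 8B seconds, 4B nanoseconds, 4B sequence, 4B length

def pack_packet(seq, payload, now_ns=None):
    """Prefix a payload with a timestamped header for transmission."""
    if now_ns is None:
        now_ns = time.time_ns()  # on real hardware: the PTP-disciplined clock
    sec, nsec = divmod(now_ns, 1_000_000_000)
    return HDR.pack(sec, nsec, seq, len(payload)) + payload

def unpack_packet(data):
    """Recover (timestamp_ns, sequence, payload) from a received packet."""
    sec, nsec, seq, length = HDR.unpack_from(data)
    return sec * 1_000_000_000 + nsec, seq, data[HDR.size:HDR.size + length]

pkt = pack_packet(7, b"\x01\x02\x03", now_ns=1_234_567_890_123_456_789)
ts, seq, payload = unpack_packet(pkt)
```

With synchronized clocks, a downstream element can reorder, align, and correlate samples from many sources purely from these timestamps, which is exactly what multi-chassis real-time processing requires.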
It is imperative that the real-time processing community agree on standard interfaces so that we may develop modular, component-based solutions. Several factors allow movement in this direction.
First, the cost of commodity processing elements is coming down, allowing commodity servers to be purchased in quantity at a reasonable cost. Second, Ethernet networking technology is now very affordable, with 10-, 100-, and 1000-Mbit/s sustained line rates available and 10 Gigabit Ethernet just around the corner. Third, Linux distributions have become well supported and affordable, with no license fee.
Distributed multiprocessor, multibox, and multiprocess processing can now be implemented affordably in rack-mounted computers running Linux and connected via Gigabit Ethernet through nonblocking network switches. Interfaces between boxes and processes are implemented using TCP/IP for data distribution, command, and control.
Architectures such as AdvancedTCA and MicroTCA provide a set of standards for networked distributed applications within a chassis. We need to expand on ideas such as this and create the set of standards to support open architecture distributed real-time processing with no chassis boundaries.
Companies such as Xilinx are bringing network connectivity directly into FPGA logic, as in the Virtex-4 and Virtex-5 products, allowing custom electrical/optical interfaces and processing elements to be implemented in FPGA-based processing systems. A standard architecture would allow many different companies to supply the processing elements used to form complex systems.
The community should build on the gathering momentum behind distributed common open architectures by developing the architecture and associated technology that give developers the ability to build highly scalable, distributed, open-architecture, real-time processing systems.
A good start would be architecture and technology elements that take advantage of commodity servers with integrated FPGA fabric, network-attached storage (NAS), Linux, Gigabit Ethernet, TCP/IP, and network-attached FPGA fabric. I hope it will not be long before we hear, "The network is the real-time processing system."
Mark D. Wecht is president of Embedded Systems Design, Inc., in Elkridge, MD.