The rapid evolution of the Internet, including the introduction of intranets and extranets, has created an insatiable demand for higher-speed underlying networks. The convergence of voice and video traffic, the need for more secure communications and the exploding demand for telecom and datacom network access all dictate greater intelligence throughout the entire network fabric. As application complexity and user expectations rise, so must the availability of well-managed bandwidth.
Historically, network vendors have invested in product designs that use proprietary, ASIC-based system interconnect. But with increased bandwidth and service demands, this approach is running out of steam. New communication and network components are being introduced with complex I/O and bus interface standards.
Many advocate a new point-to-point approach to system interconnect that replaces the more conventional multidrop bus traditionally found in PCI, CompactPCI and VME designs. In those designs, multiple peripheral devices share a common bus: devices are placed at the appropriate level of the bus hierarchy according to the performance they require, legacy interfaces are connected through bridging and, in some systems, functions are partitioned into field-replaceable units.
The primary shortcoming of the multidrop architecture is that devices on a shared bus can communicate only one at a time, so every attached device contends for the same bandwidth. With legacy bus technologies such as PCI and VME, for example, equipment is limited to transfer rates on the order of hundreds of megabytes per second.
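To make the constraint concrete, the short C sketch below computes peak and per-device throughput for a shared bus. The 64-bit, 66-MHz PCI figures are standard published numbers; the device count and arbitration efficiency are illustrative assumptions, not measurements.

/* Illustrative sketch: peak vs. per-device bandwidth on a shared multidrop bus.
   The PCI figures (64-bit data path, 66-MHz clock) are standard published
   numbers; the device count and arbitration efficiency are assumptions. */
#include <stdio.h>

int main(void)
{
    const double bus_width_bytes = 8.0;    /* 64-bit PCI data path              */
    const double bus_clock_hz    = 66e6;   /* 66-MHz PCI clock                  */
    const int    devices_sharing = 4;      /* assumed number of bus masters     */
    const double arbitration_eff = 0.7;    /* assumed fraction of cycles that
                                              carry data after arbitration,
                                              turnaround and wait states        */

    double peak       = bus_width_bytes * bus_clock_hz;          /* bytes/s */
    double per_device = peak * arbitration_eff / devices_sharing;

    printf("Peak shared-bus bandwidth : %.0f MB/s\n", peak / 1e6);
    printf("Effective per device      : %.0f MB/s\n", per_device / 1e6);
    return 0;
}

Even before arbitration overhead, the theoretical peak is shared among every device on the bus; adding devices divides the same few hundred megabytes per second ever more thinly.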
As embedded systems and single-board computers (SBCs) take on multiple requirements, multidrop buses are being pushed to their limit. System designers struggle to increase performance at exponential rates while holding or even reducing operating costs. The challenge is even greater for the telecommunications infrastructure, where ever-increasing bandwidth demands make higher bus performance essential.
Many techniques have been applied to overcome the capacity challenge of multidrop bus architecture. Designers have increased frequencies, widened interfaces, pipelined transactions and permitted out-of-order completion. Eventually, however, these measures create their own problems.
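The following C sketch illustrates one of those techniques in the abstract: split, tagged transactions whose completions may return out of order. It models the general idea only and does not follow any particular bus specification; the tag width, addresses and data values are assumptions for illustration.

/* Conceptual sketch of tagged, split transactions: requests are issued in
   order, but completions may return out of order and are matched by tag.
   This models the general technique only, not any specific bus standard. */
#include <stdio.h>

#define MAX_OUTSTANDING 4

struct request {
    int tag;          /* identifies the transaction                   */
    int target_addr;  /* where the read was sent (illustrative value) */
    int pending;      /* 1 while the completion has not yet arrived   */
};

static struct request outstanding[MAX_OUTSTANDING];

static void issue(int tag, int addr)
{
    outstanding[tag] = (struct request){ tag, addr, 1 };
    printf("issued  read tag=%d addr=0x%x\n", tag, addr);
}

static void complete(int tag, int data)
{
    if (outstanding[tag].pending) {
        outstanding[tag].pending = 0;
        printf("retired read tag=%d data=0x%x\n", tag, data);
    }
}

int main(void)
{
    issue(0, 0x1000);
    issue(1, 0x2000);
    issue(2, 0x3000);

    /* Completions arrive out of order; the tags let the initiator keep the
       bus busy instead of stalling on each read in turn. */
    complete(2, 0xCAFE);
    complete(0, 0xBEEF);
    complete(1, 0xF00D);
    return 0;
}

Such measures buy headroom, but each adds controller complexity, pin count or validation burden, which is why they eventually create problems of their own.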
Higher-performance, point-to-point system interconnect is required if future SBCs and systems are to achieve greater processing power and deliver increased bandwidth without the constraints of multiple drops per bus. An advanced, packet-switched interconnect architecture is needed: a data communications standard that can connect chips on a circuit board and circuit boards across a backplane, one designed for networking and communications equipment, enterprise storage and other high-performance embedded markets.
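As a rough illustration of what "packet-switched" means at the device level, the C sketch below defines a hypothetical packet layout. The field names, widths and transaction-type code are assumptions made for illustration and are not drawn from any published specification.

/* Hypothetical packet layout for a chip-to-chip, packet-switched interconnect.
   Field names and widths are illustrative assumptions only. */
#include <stdint.h>
#include <stdio.h>

struct interconnect_packet {
    uint8_t  dest_id;      /* device the packet is routed to             */
    uint8_t  src_id;       /* originating device                         */
    uint8_t  ttype;        /* transaction type: read, write, message ... */
    uint8_t  payload_len;  /* payload size in bytes                      */
    uint8_t  payload[32];  /* data carried by the transaction            */
    uint16_t crc;          /* per-packet error check                     */
};

int main(void)
{
    struct interconnect_packet p = {
        .dest_id = 3, .src_id = 1,
        .ttype = 0x2,          /* assumed code for a memory read request */
        .payload_len = 8, .crc = 0
    };
    printf("header and CRC overhead: %zu bytes per packet\n",
           sizeof p - sizeof p.payload);
    return 0;
}

Because every transfer is a small, addressed packet, a switch can carry many such exchanges concurrently between different pairs of devices, which is exactly what a shared multidrop bus cannot do.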
To ensure compatibility with network, communications and digital signal processors, the standard must be open. It should offer software transparency, speeds that ultimately exceed 10 Gbits/s and low-voltage differential signaling (LVDS) technology. It must be a device-level interface for use in environments where multiple devices need to operate in a tightly coupled load/store environment.
This new packet-switched interconnect architecture should be conceptually similar to the Internet Protocol (IP), yet designed specifically for the interface between processor and peripherals where high bandwidth and low latency are crucial. It must permit scalability and future enhancements, and offer several key features including support for high-performance I/O devices, globally shared memory, message passing and software-managed programming tools.
Of critical importance, this ideal new standard must deliver high data bandwidth capability. In addition, it must ensure low latency: its high operating frequency and low packet overhead would let a much narrower interconnect deliver better latency than bus technologies such as PCI and PCI-X. The standard must also be transparent to the existing software base, supporting all needed microprocessor and I/O transactions and providing a flexible method for memory-mapping systems. Error-management support in hardware is equally critical.
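The back-of-the-envelope C sketch below shows how packet overhead and link width translate into protocol efficiency and serialization time. Every figure in it (header size, payload size, link width, clock rate) is an illustrative assumption rather than a value from any specification.

/* Back-of-the-envelope sketch of protocol efficiency and serialization time.
   All figures are illustrative assumptions used only to show the arithmetic. */
#include <stdio.h>

int main(void)
{
    const double header_bytes  = 12.0;       /* assumed per-packet overhead    */
    const double payload_bytes = 256.0;      /* assumed maximum payload        */
    const double link_bytes_per_clock = 2.0; /* 8-bit link, dual data rate     */
    const double link_clock_hz = 500e6;      /* assumed LVDS clock             */

    double efficiency = payload_bytes / (payload_bytes + header_bytes);
    double raw_bw     = link_bytes_per_clock * link_clock_hz;        /* B/s */
    double packet_us  = (payload_bytes + header_bytes) / raw_bw * 1e6;

    printf("protocol efficiency : %.1f %%\n", efficiency * 100.0);
    printf("raw link bandwidth  : %.0f MB/s\n", raw_bw / 1e6);
    printf("serialization time  : %.3f microseconds per packet\n", packet_us);
    return 0;
}

With these assumed numbers, an interconnect only a byte wide sustains roughly a gigabyte per second and moves a full packet in well under a microsecond, which is the sense in which a narrow, fast, low-overhead link can beat a wide, slow bus on latency.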
The standard should have a minimal silicon footprint for low-cost, full-custom ASIC- or FPGA-based designs. It must be designed for common 0.25- and 0.18-micron CMOS technology, and it must provide hardware-supported symmetric multiprocessing through an optional, distributed shared memory extension. Distributed shared memory is used pervasively in workstations and servers, and is becoming more popular in high-performance embedded applications.
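One simple way to picture a globally shared, distributed memory map is to carve the node identifier out of the upper address bits, as in the C sketch below. The 4-bit/28-bit split and the sample address are assumptions chosen purely for illustration.

/* Minimal sketch of a globally shared, distributed memory map: the upper
   address bits select the node that owns the memory, the lower bits are the
   offset within that node. The bit split is an illustrative assumption. */
#include <stdint.h>
#include <stdio.h>

#define NODE_SHIFT 28u                 /* assumed: top 4 bits = node ID */
#define NODE_MASK  0xFu

static unsigned owner_of(uint32_t global_addr)
{
    return (global_addr >> NODE_SHIFT) & NODE_MASK;
}

static uint32_t local_offset(uint32_t global_addr)
{
    return global_addr & ((1u << NODE_SHIFT) - 1u);
}

int main(void)
{
    uint32_t addr = 0x30040000u;   /* a load/store issued by the local CPU */
    printf("node %u, offset 0x%08x\n",
           owner_of(addr), (unsigned)local_offset(addr));
    return 0;
}

A load or store to a remote node's range would be turned by hardware into a read or write packet on the interconnect, which is what lets shared-memory software run unchanged across the fabric.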
The standard should support various topologies such as star, linked star and mesh. And finally, it must be open, providing a common architecture for general-purpose RISC processors, DSPs, communications processors, network processors, memory controllers, peripheral devices and bridges to legacy buses.
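For a sense of how those topologies differ in wiring cost, the short C sketch below counts the links needed to connect a given number of endpoints. The interpretation of "linked star" as star clusters whose hubs are chained together, and the endpoint counts, are assumptions.

/* Quick sketch comparing link counts for star, linked-star and mesh
   topologies. All counts and the linked-star interpretation are assumptions. */
#include <stdio.h>

int main(void)
{
    int n = 8;                 /* endpoints in the fabric (illustrative)  */
    int clusters = 2;          /* star clusters in the linked-star case   */

    int star_links   = n;                   /* each endpoint to one hub        */
    int linked_links = n + (clusters - 1);  /* cluster links plus hub-to-hub   */
    int mesh_links   = n * (n - 1) / 2;     /* every endpoint to every other   */

    printf("star        : %d links for %d endpoints\n", star_links, n);
    printf("linked star : %d links for %d endpoints\n", linked_links, n);
    printf("full mesh   : %d links for %d endpoints\n", mesh_links, n);
    return 0;
}

The point is that a switch-based standard leaves the topology choice to the system designer, trading link count against redundancy and aggregate bandwidth.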
Several industry efforts are moving away from shared buses and toward switched fabrics, among them CSIX, InfiniBand and RapidIO. CSIX is not a protocol, but rather a proposed hardware interface that targets data-path switching. InfiniBand is targeted as a system-area network interconnect. Built on LVDS technology, RapidIO can scale to multi-gigahertz speeds, and its port width can scale to 16 bits and possibly wider in the future.
Aside from these open standards, proprietary standards are also being developed. Lightning Data Transport from AMD and HotRail fabrics from HotRail Inc. are proprietary device interfaces optimized for PCs and multiprocessor enterprise servers.