The options in detail
Gigabit Ethernet is a widely adopted data-link-layer standard for wired data communications. To meet growing bandwidth requirements, the standard’s interface rate has been increased from 1 Gbit/s to 10, 40 and 100 Gbits/s. 10G Ethernet has become popular in recent years and can connect to various physical layers (PHYs) over optical fiber or copper media.
In 2010, the IEEE 802.3ba standard was established to support 40G and 100G Ethernet. A 40-Gbit/s data rate is achieved by aggregating four 10-Gbit/s lanes; a 100-Gbit/s rate uses either 10 lanes at 10 Gbits/s or four lanes at 25 Gbits/s.
Gigabit Ethernet can be used as a backup connection for either short- or long-reach data transport, as it delivers packet-based non-real-time data for applications that can tolerate the communication latencies. Latency can be reduced in certain cases through cut-through operations in Layer 2 switches, where data packets can be forwarded as soon as the destination MAC address is received.
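The latency advantage of cut-through forwarding can be sketched with a back-of-the-envelope calculation. The sketch below is illustrative, assuming a 1-Gbit/s line rate and a maximum-size untagged Ethernet frame; the figures are serialization delays only, ignoring switch processing time.

```python
# Illustrative sketch: serialization delay a Layer 2 switch incurs before it
# can begin forwarding a frame, comparing store-and-forward with cut-through.
# Assumptions: 1-Gbit/s line rate, 1518-byte maximum untagged frame.

LINE_RATE_BPS = 1e9        # Gigabit Ethernet line rate (assumed)
FRAME_BYTES = 1518         # maximum untagged Ethernet frame (assumed)
DEST_MAC_BYTES = 6         # destination MAC is the first field of the frame

def store_and_forward_delay_us(frame_bytes, rate_bps):
    """The switch must receive the entire frame before forwarding it."""
    return frame_bytes * 8 / rate_bps * 1e6

def cut_through_delay_us(rate_bps):
    """The switch can begin forwarding once the destination MAC has arrived."""
    return DEST_MAC_BYTES * 8 / rate_bps * 1e6

print(store_and_forward_delay_us(FRAME_BYTES, LINE_RATE_BPS))  # 12.144 (us)
print(cut_through_delay_us(LINE_RATE_BPS))                     # 0.048 (us)
```

For a full-size frame at 1 Gbit/s, cut-through trims the per-hop serialization wait from roughly 12 µs to well under 0.1 µs, which is why it matters to latency-sensitive traffic.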
Low-cost, low-pin-count PCI Express is a standard bus architecture widely used in consumer, server and industrial applications, primarily for computer expansion to peripherals such as graphics cards, server motherboard interconnects and computer-based control systems. Created in 2004 by Dell, Hewlett-Packard, IBM and Intel, PCIe can support up to 32 lanes. Each lane in PCIe version 2.x can support a 5-Gbit/s data rate; each lane in version 3.0 can support 8 Gbits/s. PCIe version 4.0, going through the specification process now, is expected to support 16 Gbits/s per lane.
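The per-lane rates above are raw signaling rates; usable bandwidth depends on the line coding each generation employs (8b/10b for PCIe 1.x/2.x, 128b/130b for 3.0). The sketch below estimates effective link throughput from those two factors alone, ignoring packet (TLP/DLLP) overhead.

```python
# Rough effective throughput for PCIe links after line-coding overhead:
# Gen 2.x uses 8b/10b coding (80% efficient), Gen 3.0 uses 128b/130b.
# Transaction- and data-link-layer packet overhead is ignored in this sketch.

GENERATIONS = {
    # generation: (raw signaling rate in GT/s, coding efficiency)
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

def link_gbps(gen, lanes):
    """Effective data bandwidth of a PCIe link in Gbits/s."""
    raw, efficiency = GENERATIONS[gen]
    return raw * efficiency * lanes

print(link_gbps("2.0", 1))   # 4.0 Gbits/s per Gen 2 lane
print(link_gbps("3.0", 16))  # ~126 Gbits/s for a Gen 3 x16 link
```

Note how the move from 8b/10b to 128b/130b coding means Gen 3.0 nearly doubles Gen 2.x throughput even though the raw rate rises only from 5 to 8 GT/s.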
PCIe can form a tree topology (Figure 3), with nodes connecting to one another via point-to-point links. Visualize the root node as the root complex, the leaf nodes as endpoints and the nodes that connect multiple devices to each other as switches.
Figure 3. Example of a PCIe standard bus architecture tree topology.
The Common Public Radio Interface and the Open Base Station Architecture Initiative both target wireless basestation applications and are used for baseband interconnects to RF radio heads. CPRI and OBSAI have similar radio interfaces but with different feature sets; OBSAI enables interoperability among different vendors’ radios, while CPRI is widely adopted by major basestation OEMs and is more focused on the PHY and link layers.
CPRI/OBSAI can support 6.144 Gbits/s per lane. The latest CPRI version, 4.2, supports 9.8304 Gbits/s per lane.
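CPRI line rates are all integer multiples of a 614.4-Mbit/s basic rate, which is how the 6.144- and 9.8304-Gbit/s figures arise. The enumeration below lists the option-1 through option-7 multipliers defined in CPRI v4.2.

```python
# CPRI line-rate options are multiples of the 614.4-Mbit/s basic rate.
# The multipliers below correspond to rate options 1 through 7 in CPRI v4.2;
# option 7 (x16) is the 9.8304-Gbit/s rate the article mentions.

BASIC_RATE_MBPS = 614.4
MULTIPLIERS = [1, 2, 4, 5, 8, 10, 16]  # CPRI rate options 1-7

line_rates_gbps = [BASIC_RATE_MBPS * m / 1000 for m in MULTIPLIERS]
print(line_rates_gbps)
# [0.6144, 1.2288, 2.4576, 3.072, 4.9152, 6.144, 9.8304]
```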
Traditionally, data converters use high-speed, low-voltage differential signaling or low-speed, JESD207 parallel interfaces; but as more bandwidth and antenna paths are required in a system, the parallel interface puts a heavy burden on SoC pin count, package size and cost.
The JESD204 serial standard provides gigabit serial links to support a high sampling rate, as well as more antennas, with greater area and cost efficiency.
JESD204B supports one link with multiple, aligned lanes, with each lane supporting up to a 12.5-Gbit/s data rate with deterministic latency.
An example application would be the use of JESD204B as a serial link between a small-cell basestation processor and the integrated DAC/ADC analog RF front end.
As a result, the basestation can be built on a much smaller footprint with much lower power requirements, offering a cost-effective small cell solution.
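The JESD204B lane rate follows directly from the converter configuration: lane rate = M × N′ × fs × (10/8) / L, where M is the number of converters on the link, N′ the bits per sample word, fs the sample rate, L the lane count, and 10/8 the 8b/10b coding overhead. The parameters in the sketch below are illustrative, not taken from the article.

```python
# JESD204B lane-rate sketch: lane rate = M * N' * fs * (10/8) / L, where
# M = converters per link, N' = bits per transmitted sample word,
# fs = sample rate, L = lanes, and 10/8 accounts for 8b/10b line coding.
# Example parameters below are hypothetical.

def jesd204b_lane_rate_gbps(M, N_prime, fs_msps, L):
    """Per-lane line rate in Gbits/s for a JESD204B link."""
    return M * N_prime * fs_msps * (10 / 8) / L / 1000

# Two 16-bit converters sampled at 500 Msps, carried over two lanes:
print(jesd204b_lane_rate_gbps(M=2, N_prime=16, fs_msps=500, L=2))  # 10.0
```

The 10-Gbit/s result sits comfortably under the 12.5-Gbit/s per-lane ceiling; adding lanes (raising L) is the standard's lever for keeping each lane within that limit as sample rates grow.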
Texas Instruments’ HyperLink multicore architecture uses a proprietary protocol on top of the serdes function, with four links each running at 12.5 Gbits/s, for a total of 50 Gbits/s. HyperLink not only supports high throughput between devices, but does so without requiring a complex software protocol. Each linked device can be simply viewed as a memory-mapped device separate from the others, and can access the memory and peripherals accordingly.
This greatly simplifies interchip communications and allows systems to scale easily by interconnecting multiple KeyStone-multicore-based devices for applications such as wireless basestations, media gateways and cloud computing servers, which all require multiple chips on a single board.
Another serial I/O architecture is RapidIO, a packet-based interconnect largely used in embedded systems such as DSP-based applications. It provides high-speed data transfer with low latency, as well as the ability to interconnect multiple endpoints.
Serial RapidIO is widely used in wireless infrastructure, video and image processing, military radar, server and industrial applications. The layered architecture includes logical, transport and physical layers to facilitate message passing, intercore communications through shared memory, data streaming and traffic flow control. Serial RapidIO supports up to 16 lanes, each running at up to 6.25 Gbits/s.
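As with PCIe Gen 2, the quoted per-lane rate is a raw baud rate; Serial RapidIO's 8b/10b line coding leaves 80 percent of it for coded data. A quick sketch of the peak aggregate figure, ignoring packet-header overhead:

```python
# Peak coded-data throughput of a Serial RapidIO link: up to 16 lanes at
# 6.25 Gbaud each, with 8b/10b coding leaving 80% of the raw rate for data.
# Packet-header overhead is ignored in this sketch.

LANE_RATE_GBAUD = 6.25
CODING_EFFICIENCY = 8 / 10  # 8b/10b line coding

def srio_data_gbps(lanes):
    """Aggregate post-coding bandwidth of a Serial RapidIO link in Gbits/s."""
    return lanes * LANE_RATE_GBAUD * CODING_EFFICIENCY

print(srio_data_gbps(16))  # 80.0 Gbits/s for a full 16-lane link
```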
Other serial links include Infiniband, popular in server and high-performance computing installations, and Serial Advanced Technology Attachment (SATA), often found in storage devices.
Whether connecting devices within equipment, devices to backplanes or one piece of equipment to another, gigabit serial links are the gateways to meeting next-generation data bandwidth requirements with lower cost, simpler design and ready scalability.
About the author
Zhihong Lin is strategic marketing manager for Texas Instruments’ wireless basestation infrastructure business, responsible for defining and planning key requirements for multicore SoCs for basestation applications. Lin holds an MS in electrical engineering from the University of Texas at Dallas.