The peripheral component interconnect (PCI) bus is used in many communication systems designs because it is a stable, popular, multi-vendor standard with a wide selection of competitively priced components to choose from. The standardized nature of the specification allows designers to preserve investments in software and hardware across multiple generations of designs.
In many next-generation applications, however, the PCI bus has insufficient bandwidth, particularly since it is a shared bus on which devices must contend for the available bandwidth. Additionally, increasing the speed of the PCI bus causes a corresponding decrease in the maximum bus length and the number of slots. At 66 MHz, the PCI bus supports only two slots or devices in addition to the master, and PCI-X drops this to just a single device at its highest frequency.
Even if one were to use devices with PCI-X interfaces, an efficient and flexible way to interconnect them is still required. The ideal is a switched fabric without the distance limitations of PCI-X and with enough bandwidth to support multiple PCI-X buses.
In addition to increased bandwidth, today's applications often require substantially more computing power to run the sophisticated processing that designers are looking for in their next-generation solutions. It's not uncommon for applications to require multiple gigahertz-class processors to provide the necessary computing power, particularly for layer 5 and above processing.
Switched fabric solutions using HyperTransport interconnect technology can be used to solve these problems with a range of
different design parameters.
The HyperTransport fabric uses a low-pin-count, high-speed, scalable interconnect that is PCI compatible, while offering much
higher bandwidth and a greater variety of supported topologies. The basic building block for all topologies is a point-to-point link
that uses low voltage differential unidirectional signaling.
Signaling rates from 400 mega-transfers per second (MT/s) to 1.6 giga-transfers per second (GT/s) are supported, with link widths of 2, 4, 8, 16, and 32 data bits in each direction. This provides a maximum bandwidth of over 50 Gbps in each direction per link, and the combination of different signaling speeds and link widths allows the designer to scale the implementation to match a wide range of application bandwidth requirements. Initial implementations are 8 bits wide in each direction and support signaling rates of up to 1 GT/s, providing a total bandwidth of 16 Gbps per link.
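To make the scaling concrete, the following C sketch computes per-direction and aggregate link bandwidth from the transfer rate and link width; the function and variable names are ours, chosen only for illustration.

#include <stdio.h>

/* One direction carries (transfers per second) x (bits per transfer);
   a link has two such unidirectional paths. */
static double one_way_gbps(double rate_mtps, int width_bits)
{
    return rate_mtps * 1e6 * width_bits / 1e9;
}

int main(void)
{
    /* Widest, fastest case: 32 bits at 1.6 GT/s, just over 50 Gbps each way. */
    printf("32-bit @ 1600 MT/s: %5.1f Gbps each way, %5.1f Gbps both ways\n",
           one_way_gbps(1600, 32), 2 * one_way_gbps(1600, 32));
    /* Initial implementations: 8 bits at 1 GT/s, 16 Gbps for both directions. */
    printf(" 8-bit @ 1000 MT/s: %5.1f Gbps each way, %5.1f Gbps both ways\n",
           one_way_gbps(1000, 8), 2 * one_way_gbps(1000, 8));
    return 0;
}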
There are three types of HyperTransport devices: host, tunnel, and single-link device. A host is the primary attachment to a CPU
or a switch. A tunnel is a slave device with two HyperTransport ports, allowing devices to be daisy chained. A single-link device is
a slave device that forms the endpoint of a HyperTransport chain. Slave devices can be either bridges to another bus such as PCI
or they can be native controllers such as Gigabit Ethernet or SCSI controllers.
Switches allow multiple chains to be connected, and each switch port can act as either a host or a slave. Not only can switches have different numbers of ports, but the ports themselves can be of different widths and speeds. This allows switches to be used for speed matching or aggregation as well as for topology flexibility.
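A rough way to picture these roles is the small C model below; the enum and struct definitions are our own illustrative shorthand, not types defined by the HyperTransport specification.

#include <stdio.h>

/* Illustrative model of the device roles described above. */
enum ht_role { HT_HOST, HT_TUNNEL, HT_SINGLE_LINK, HT_SWITCH };

struct ht_port {
    int width_bits;   /* 2, 4, 8, 16, or 32 */
    int rate_mtps;    /* 400 .. 1600 MT/s   */
};

struct ht_device {
    enum ht_role role;
    int num_ports;            /* host/single-link: 1, tunnel: 2, switch: N */
    struct ht_port ports[8];  /* each switch port may differ in width/speed */
};

int main(void)
{
    /* A tunnel bridge daisy-chained between two other devices. */
    struct ht_device pci_bridge = {
        .role = HT_TUNNEL,
        .num_ports = 2,
        .ports = { { 8, 800 }, { 8, 800 } },
    };
    printf("tunnel with %d ports, upstream %d bits @ %d MT/s\n",
           pci_bridge.num_ports, pci_bridge.ports[0].width_bits,
           pci_bridge.ports[0].rate_mtps);
    return 0;
}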
The basic point-to-point HyperTransport links can be combined in a variety of topologies as needed to support the requirements
of the application.
The simplest and lowest cost configuration is a single chain composed of a host and one or more tunneling bridges (see Figure 1).
This allows multiple PCI buses to be connected to provide greater bandwidth and slot counts than would otherwise be possible. For
example, a configuration supporting six slots at 66 MHz would simply require three HyperTransport PCI bridges daisy chained
together. Optionally, an industry standard southbridge part could be added at the end of the chain to provide miscellaneous I/O
capabilities such as USB.
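The slot arithmetic behind that example is simple enough to capture in a few lines of C; the helper name and the two-slots-per-bus assumption (from the 66 MHz limit noted earlier) are ours.

#include <stdio.h>

/* How many daisy-chained HyperTransport PCI bridges are needed to provide a
   given number of 66 MHz slots, assuming two slots per bridged PCI segment.
   Illustrative arithmetic only. */
static int bridges_needed(int slots, int slots_per_bus)
{
    return (slots + slots_per_bus - 1) / slots_per_bus;  /* round up */
}

int main(void)
{
    printf("6 slots at 66 MHz  -> %d bridges\n", bridges_needed(6, 2));
    printf("10 slots at 66 MHz -> %d bridges\n", bridges_needed(10, 2));
    return 0;
}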
Figure 2 shows an application scenario with two general-purpose processors, each dedicated to a specific task or sharing the load of a single task between them, and an associated acceleration engine implemented in an FPGA or ASIC, all connected to the I/O fabric through a switch.
Gigabit Ethernet and SCSI controllers with native HyperTransport interfaces are shown in Figure 3. This configuration cascades
two HyperTransport switches to increase the number of available ports.
For applications that require a higher level of redundancy and resiliency in the face of failures, a configuration such as that
shown in Figure 4 can be used. In this configuration each HyperTransport PCI bridge is connected to two links, with the second link
being used as a hot standby. The links are routed through two separate switches to independent processors. No single failure will
cause the entire complex to fail. If a processor or switch fails, the affected PCI bus can be handed over to the other processor.
If any component associated with a PCI bus fails, the remaining PCI buses will continue to have access to the remaining elements of the system.
Resiliency in the presence of various types of faults is very important in communication systems design. The HyperTransport
fabric offers a number of features that help system architects maximize fault resilience in their designs including point-to-point
links, dual host support, and hot plug.
The use of point-to-point links greatly increases the fault isolation capabilities of the system. Because PCI uses a shared bus, a fault in a single component can disrupt activities on an entire bus segment. By combining multiple HyperTransport PCI bridges with a HyperTransport switch, a system designer can put as few PCI slots on each PCI bus as needed to achieve the desired level of fault isolation.
In addition, configurations that include a HyperTransport switch can support hot-plug capabilities for additional fault resilience.
These switches are compatible with standard hot-plug controllers (SHPC) to maximize OS compatibility, and they can be used to
provide hot plug for board-to-board, board-to-backplane, and board-to-cable configurations.
This allows failing components to be swapped out and configuration changes to be made without bringing the system down.
A pin that is asserted during the hot-plug sequence is provided on the switch for each port. This places all the HyperTransport
I/Os for that port in tri-state mode and terminates and flushes all outstanding transactions on the port. Software then
reconfigures devices on the port when the hot-plug sequence is complete.
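The sequence can be sketched in C as below; the helper functions are hypothetical stand-ins for the hardware behavior and the SHPC register programming a real driver would perform.

#include <stdio.h>

static void assert_hotplug_pin(int port)   { printf("port %d: pin asserted, I/Os tri-stated\n", port); }
static void flush_outstanding(int port)    { printf("port %d: outstanding transactions terminated and flushed\n", port); }
static void swap_board(int port)           { printf("port %d: board replaced\n", port); }
static void deassert_hotplug_pin(int port) { printf("port %d: pin released\n", port); }
static void reenumerate_port(int port)     { printf("port %d: devices reconfigured by software\n", port); }

int main(void)
{
    int port = 3;                 /* example port on the switch */
    assert_hotplug_pin(port);     /* hardware tri-states the port's HT I/Os    */
    flush_outstanding(port);      /* switch terminates/flushes pending traffic  */
    swap_board(port);             /* failing component exchanged while live     */
    deassert_hotplug_pin(port);
    reenumerate_port(port);       /* software reconfigures the port afterward   */
    return 0;
}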
Each HyperTransport chain can be dual hosted, allowing a backup host to take over the chain in case the primary host or the link
to it fails. When this capability is used with multiple switches and multiple hosts a fully redundant fabric can be created with no
single points of failure as shown in Figure 4.
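A minimal sketch of that takeover decision, assuming a management layer that can detect host and link health, might look like the following C fragment; the structure and field names are illustrative only.

#include <stdio.h>
#include <stdbool.h>

struct ht_chain {
    const char *name;
    bool primary_ok;    /* primary host and its link are healthy */
    bool backup_ok;     /* standby host and its link are healthy */
};

static const char *owner(const struct ht_chain *c)
{
    if (c->primary_ok) return "primary host";
    if (c->backup_ok)  return "backup host (failover)";
    return "no host available";
}

int main(void)
{
    struct ht_chain chains[] = {
        { "PCI bus A", true,  true },
        { "PCI bus B", false, true },   /* primary host or switch has failed */
    };
    for (int i = 0; i < 2; i++)
        printf("%s is owned by %s\n", chains[i].name, owner(&chains[i]));
    return 0;
}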
The combination of point-to-point links, differential signaling, and unidirectional signaling simplifies board-level electrical design
and allows for greater distances between components. The HyperTransport interconnect can be used between chips on a board,
between boards in a chassis, and between adjacent chassis in a system.
As an inter-chip interconnect, maximum etch lengths of between 24 and 30 inches (0.6 to 0.75 meters) can be used, depending
on the PCB stack-up. Links can also be run through card connectors and cable connectors, as well as over short cables. Because daisy chain and star topologies are supported, the total end-to-end length of a HyperTransport chain can be several meters, providing great flexibility in system configuration.
The differential signaling uses a 600-mV swing derived from a 1.2-V supply. On-chip, double-ended termination is used, with both the receiver and the transmitter matching the interconnect impedance. The on-chip termination saves significant board area and eliminates difficult-to-place components and their PCB stubs, greatly simplifying the board designer's task. A quadrature clock is used in conjunction with double data rate signaling on the data pairs. There are six fixed signaling rates, with clocks ranging from 200 to 800 MHz, or 400 MT/s to 1.6 GT/s.
Because links are bidirectional with unidirectional signaling, there are two sets of wires, one for each direction. Each set of
wires has a pair of wires for each data bit, a clock pair for each set of 8 data bits, and a single control pair.
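That wiring rule makes the signal-pair count easy to tabulate; the C sketch below does so for each supported width, counting only signal pairs (not power or ground), with names of our own choosing.

#include <stdio.h>

/* Signal-pair count for one link, per the description above: in each
   direction, one differential pair per data bit, one clock pair per group of
   8 data bits, and a single control pair. */
static int pairs_per_direction(int width_bits)
{
    int clock_pairs = (width_bits + 7) / 8;   /* one clock pair per byte lane */
    return width_bits + clock_pairs + 1;      /* data + clock + control pairs */
}

int main(void)
{
    int widths[] = { 2, 4, 8, 16, 32 };
    for (int i = 0; i < 5; i++) {
        int pairs = 2 * pairs_per_direction(widths[i]);  /* both directions */
        printf("%2d-bit link: %3d signal pairs (%3d signal pins)\n",
               widths[i], pairs, 2 * pairs);
    }
    return 0;
}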
The link width is detected during link initialization, and wider devices are required to support narrower interfaces. All links come up at 400 MT/s, and then configuration software (typically the BIOS) programs control and status registers (CSRs) for the desired operational data rate and width, based on the capabilities of both ends of the link.
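In outline, the negotiation amounts to taking the lesser of the two ends' capabilities, as in the hedged C sketch below; the CSR programming itself is omitted and the structure names are ours.

#include <stdio.h>

struct link_caps {
    int max_width_bits;
    int max_rate_mtps;
};

static int min_int(int a, int b) { return a < b ? a : b; }

int main(void)
{
    struct link_caps host   = { 16, 1600 };   /* e.g. a processor host bridge */
    struct link_caps tunnel = {  8,  800 };   /* e.g. a PCI tunnel bridge     */

    /* Wider devices must support narrower interfaces, so the lesser of the
       two capabilities is always usable by both ends. */
    int width = min_int(host.max_width_bits, tunnel.max_width_bits);
    int rate  = min_int(host.max_rate_mtps,  tunnel.max_rate_mtps);

    printf("link trains at 8 bits / 400 MT/s, then is reprogrammed to "
           "%d bits / %d MT/s\n", width, rate);
    return 0;
}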
Compatibility with the PCI local bus standard was an important design goal of HyperTransport from the beginning. The goal was to
allow PCI device drivers to be used unchanged, to allow PCI and HyperTransport devices to fully interoperate, and to have a
common programming model for both PCI and HyperTransport.
Items that were specified to support this compatibility include support for PCI bus bridging semantics, PCI producer/consumer ordering for requests, support of all three PCI address spaces (configuration, I/O, and memory), and PCI-compatible device and bridge configuration headers.
Beyond the attributes specified for correct operation, additional features are specified to maximize the performance of the overall system, for example read prefetch support and bandwidth allocation.
Prefetching PCI delayed reads provides for more efficient burst transactions. Bandwidth allocation is done in a chain to make
sure that downstream devices aren't starved of bandwidth. This is done using a dynamic insertion rate control mechanism that
approximates the bandwidth that a device would see using round robin arbitration on a shared bus.
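The target of that allocation, roughly an equal share per active device, can be illustrated with a few lines of C; this computes only the intended share, not the insertion rate mechanism itself, and the link figure chosen is just an example.

#include <stdio.h>

/* Back-of-the-envelope target for bandwidth allocation on a chain: each of N
   active devices should see roughly the share it would get from round-robin
   arbitration on a shared bus. */
int main(void)
{
    double upstream_gbps = 8.0;   /* example: 8-bit link at 1 GT/s, one direction */
    for (int devices = 1; devices <= 4; devices++)
        printf("%d active devices -> about %.1f Gbps each\n",
               devices, upstream_gbps / devices);
    return 0;
}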
The principal differences in the operation of PCI and HyperTransport buses are related to link initialization and interrupt signaling.
Link initialization including negotiation of frequency and width of the link is performed before PCI enumeration occurs and can be
handled entirely by a small preamble to the standard BIOS. On HyperTransport links, interrupt information is transmitted in
packets rather than via wires as in PCI. Interrupt packets carry opaque information that is generated by devices and interpreted
by the host bridge.
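The idea can be pictured with the small C sketch below; the message layout and field names are our own illustration of an opaque, device-generated payload decoded by the host bridge, not the actual HyperTransport packet format.

#include <stdio.h>
#include <stdint.h>

struct ht_interrupt_msg {
    unsigned source_unit;   /* which device raised the interrupt          */
    uint32_t opaque_info;   /* device-defined; decoded by the host bridge */
};

static void host_bridge_deliver(const struct ht_interrupt_msg *m)
{
    printf("host bridge: interrupt from unit %u, payload 0x%08x\n",
           m->source_unit, (unsigned)m->opaque_info);
}

int main(void)
{
    struct ht_interrupt_msg msg = { .source_unit = 5, .opaque_info = 0xc0ffee };
    host_bridge_deliver(&msg);   /* carried as a packet on the link, not a wire */
    return 0;
}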
Designing with HyperTransport
We've examined the topologies and features available to systems designed with HyperTransport technology. Another important consideration for the system designer is the building blocks that are available. Three companies (Broadcom, PMC-Sierra, and Sandcraft) have announced MIPS instruction set processors with native HyperTransport host interfaces, and NVIDIA has announced a core logic chip set with a native HyperTransport host interface that can be used with x86 processors.
Both FPGA and ASIC vendors have licensed versions of a HyperTransport PHY. This, along with available HyperTransport
cores, allows designers to easily incorporate HyperTransport links into their own ASIC and FPGA designs. This would allow, for
example, a designer to create a custom acceleration engine that connects directly to a HyperTransport link without requiring a
PCI bridge, lowering both part counts and cost.
When does it make sense to look at HyperTransport technology for a design? Sample parts of various devices are available today
from multiple vendors and interoperability has been successfully demonstrated. Board design efforts have been active since last
year, and the number of available parts has been steadily increasing. While HyperTransport technology is relatively new, it's already demonstrating its worth in real-world applications, so it's not too early to evaluate it for next-generation designs, particularly for demanding applications.
A broad variety of additional components with native HyperTransport interfaces are under development. Example devices that are known to be under development, and which would be useful to the communication systems designer, include PCI-X bridges, Gigabit Ethernet controllers, InfiniBand communication adapters, encryption/decryption engines, and packet classification engines.
In addition to the obvious extensions to higher speeds and wider links, as well as bridges to additional interconnects such as InfiniBand, there is interest among HyperTransport Consortium members in defining both board-to-motherboard connectors and backplane connectors that support the HyperTransport fabric. Developments in this space will extend the standardization from the electrical and protocol level to the physical form factor. As system requirements become more and more challenging, designers need new tools in their bag of tricks to create successful designs. Low-pin-count, high-speed, scalable switched links such as those using HyperTransport technology offer designers a neat solution to a set of thorny issues in the design of next-generation communication platforms.
An important factor in selecting a technology base for new designs is that it be a stable, open, and widely supported standard.
The HyperTransport Consortium is an open industry organization that was created to support the development and adoption of
the HyperTransport I/O link specification. Promoters include Cisco, PMC-Sierra, API NetWorks, Sun Microsystems, Apple, and
AMD. More than 180 companies have already licensed the HyperTransport specification, and a number of them have announced products based on it. The consortium has an Executive Committee, which appoints technical and other working groups. Membership is open, with participating companies designated as either contributors or adopters based on how actively they are involved in technical and promotional activities. For more information, visit http://www.hypertransport.org.
Tom Morris is a technical evangelist and director of product strategy at API NetWorks. He has over 20 years of engineering experience in the communication and computer industries. Tom can be reached at firstname.lastname@example.org.