Interest in developing AdvancedTCA (ATCA) systems is growing in the communications sector, where the modularity and throughput delivered by these platforms are attracting attention. And, with the communications market regaining its footing, many see ATCA as the open architecture needed to move the sector forward, especially in the wireless and broadband arenas.
But, being a new technology, ATCA brings its own set of design choices. One of the biggest is deciding which architecture to use for the ATCA backplane interconnect.
Currently, the PICMG committee has established several backplane connectivity methods, including Ethernet, InfiniBand, and the Advanced Switching Interconnect (ASI). While each has its own pros and cons, this article examines how ASI can be used to connect ATCA backplanes, exploring interconnect issues, topology, and the impact on bandwidth. It also presents several guidelines designers should follow when developing ASI-enabled ATCA systems.
Before examining the impact of ASI on ATCA designs, let's first take a closer look at the ATCA architecture and the PICMG 3.0 specification that defines it.
Several years ago, the PICMG group began an effort to develop standardized board and chassis form-factor specifications for building ATCA systems. The ATCA specs are the product of the PICMG 3.0 specification effort.
PICMG 3.0 provides specifications for electromechanical issues, interconnect topology, and shelf management for modular shelves with high scalability and availability. Subsequent specs in the PICMG 3.x series define standards for different fabric protocols; PICMG 3.4, for example, defines how PCI Express and ASI boards are used with a PICMG 3.0 backplane.
As defined under the PICMG 3.0 spec, the ATCA backplane has a maximum of 16 slots. Each slot contains three connector zones to interface with ATCA cards. Zone 1 is for power and system management. Zone 3 is for rear I/O access. Zone 2 is the primary data-transport interface; it contains five ZD connectors carrying four sub-interfaces: the base interface, fabric interface, update channel interface, and synchronization clock interface.
The base interface consists of a 10/100/1000BASE-T Ethernet interface to accommodate legacy products. The base interface must use a dual-star topology, as defined in PICMG 3.0.
The fabric interface is the main channel through which the serial data stream passes. PICMG 3.4 defines the PCI Express signals to be used by an ASI fabric interface. Each connector in Zone 2 supports 40 signal pairs, and 120 pairs are available for the fabric interface. Each channel provides a maximum of four PCI Express lanes, i.e. 10 Gbit/s per channel. Unused receive pins must be properly terminated with 100-ohm +/-10 percent resistors.
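To put those numbers in perspective, the short Python sketch below works through the pair and bandwidth arithmetic. The 2.5 Gbit/s per-lane rate is an assumption based on the Gen1 PCI Express signaling ASI uses; the rest follows from the figures quoted above.

```python
# Fabric-interface pair and bandwidth arithmetic for an ASI-enabled ATCA slot.
# Assumes 2.5 Gbit/s per PCI Express lane (Gen1 signaling, as used by ASI).

LANE_RATE_GBPS = 2.5          # raw bit rate per PCI Express lane (assumed Gen1)
LANES_PER_CHANNEL = 4         # up to four lanes per fabric channel
PAIRS_PER_LANE = 2            # one transmit pair plus one receive pair
FABRIC_PAIRS_PER_SLOT = 120   # Zone 2 pairs available to the fabric interface

pairs_per_channel = LANES_PER_CHANNEL * PAIRS_PER_LANE          # 8 pairs
channels_per_slot = FABRIC_PAIRS_PER_SLOT // pairs_per_channel  # 15 channels
channel_bandwidth = LANES_PER_CHANNEL * LANE_RATE_GBPS          # 10 Gbit/s

print(f"{channels_per_slot} channels per slot, "
      f"{channel_bandwidth:.0f} Gbit/s per channel")
# -> 15 channels per slot, 10 Gbit/s per channel
# 15 channels is exactly what a slot needs in a fully meshed 16-slot shelf:
# one channel to each of the other 15 slots.
```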
For the star and mesh topologies, three corresponding board types are used: hub boards, node boards, and mesh-enabled boards. ATCA also supports an E-keying mechanism, which allows system management to confirm that boards sharing a backplane connection support compatible protocols before their ports are enabled.
A diagram illustrating the connection between the ATCA backplane and a PICMG 3.4 card is shown in Figure 1.
Figure 1: Diagram showing an ATCA backplane and board interface.
Going forward, this article mainly examines ASI application designs in the ATCA environment from a topological and interconnect point of view. ATCA has many rich features, such as shelf management, that will not be discussed in detail here; the focus is on architectural design and layout considerations.
Architectural Design Considerations
One major goal of the ASI specification is to enable a globally flat address space for star and mesh topologies. PICMG 3.4 sets the design rules and guidelines for implementing such topologies. Figure 2 shows both the dual-star and full-mesh topologies defined by the PICMG 3.4 spec.
Figure 2: Diagram showing the dual-star and full-mesh topologies defined by the PICMG 3.4 spec.
A dual-star topology is used for carrier-grade applications with high-availability requirements. It has the following features:
- A redundant switch increases system availability, eliminating a single point of failure.
- Link between switches facilitates coordination and failover.
- Each node supports redundant links, one to each switch.
- The number of routed traces remains relatively low, keeping backplane cost down.
As defined in the PICMG specs, the dual-star topology requires two dedicated hub slots, logical slot 1 and logical slot 2, for hub boards. All node cards go into the other slots. However, the mapping of logical slots to physical slots is not specified in the ATCA specifications, so the designer can choose the mapping that achieves the best result. Figure 3 shows an example dual-star channel connection with four blade cards in physical slots 01, 02, 15, and 16.
Figure 3: Example dual-star implementation.
In contrast to the dual-star approach, a full-mesh topology is usually used for smaller carrier-grade applications that have large data throughput requirements on each card. In this architecture, each intelligent node performs switching functions along with higher level services. The full mesh is highly redundant and its features include:
- Switching services and management services are distributed across system slots.
- Data rate for each node is not dependent on any other node. Data throughput capacity scales with each added node.
- All system slots are identical.
- Fabric links are inherently redundant.
In a full-mesh architecture, every node has a connection to every other node. This results in higher trace density and connector pin count, and the cost of the resulting high PCB layer count is a significant concern (the sketch below compares link counts for the two topologies). For these reasons, a full-mesh configuration is best suited to small systems.
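To make the trace-density argument concrete, the following sketch counts the fabric links each topology must route as the slot count grows. The formulas are ordinary graph arithmetic for a generic shelf, not figures taken from the PICMG specs.

```python
# Backplane fabric-link counts: dual-star vs. full-mesh topologies.
# Assumes a generic shelf with `slots` payload slots.

def dual_star_links(slots: int) -> int:
    """Two hub slots, each linked to every node slot, plus one hub-to-hub link."""
    nodes = slots - 2
    return 2 * nodes + 1

def full_mesh_links(slots: int) -> int:
    """Every slot linked to every other slot."""
    return slots * (slots - 1) // 2

for slots in (6, 14, 16):
    print(f"{slots:2d} slots: dual-star {dual_star_links(slots):3d} links, "
          f"full mesh {full_mesh_links(slots):3d} links")
# -> 16 slots: dual-star 29 links, full mesh 120 links
# Each link is a four-lane channel (8 differential pairs), so a 16-slot
# full mesh routes roughly four times as many backplane pairs as a dual star.
```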
Because of the multiprocessing nature of ASI, a fabric element can be freely added to or removed from an existing fabric group, including implementations that span multiple chassis. Distributed multi-chassis designs can use integrated, expandable switching fabrics to provide non-blocking interconnection between multiple expansion chassis. The specific architecture and implementation can significantly affect the overall scalability and flexibility of the system.
Current multi-chassis architectures include star and matrix; essentially, these are the star and mesh topologies discussed above, elevated to the chassis level.
A star chassis topology uses a central switch chassis to aggregate multiple smaller leaf chassis, while a matrix chassis topology is a scalable matrix of switching elements. An example of a multi-chassis system is a set of bladed service chassis attached through a separate interconnect chassis; insertion and removal of a service chassis must be handled by the fabric management software.
When designing an ASI backplane, system designers will face several implementation issues at the PCB level: trace routing, single-chassis layout, multi-chassis interconnection, cabling, and shelf management. Let's look at these five issues in more detail.
1. Trace Routing
The large number of traces running between the backplane and add-in boards, especially in a full-mesh system, poses a problem for the board designer, who has to route them with both signal integrity and simplicity in mind. Several PCI Express features help solve this problem. For example, PCI Express provides a polarity-inversion capability that helps ease the crisscrossing of the differential pairs in each lane. PCI Express also supports lane reordering, making it possible to match the lanes of two devices without crossing the wires, which further simplifies the backplane design.
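Conceptually, lane reordering and polarity inversion let the receiving device undo, in logic, what would otherwise require crossed traces on the backplane. The sketch below is an illustrative model only, not an excerpt from the PCI Express or ASI specs; it simply shows a receiver remapping reversed lanes and correcting an inverted pair.

```python
# Illustrative model of PCI Express lane reversal and polarity inversion.
# Each physical lane is modeled as (lane_number, polarity_inverted, bit);
# the receiver "untangles" the routing in logic instead of crossing traces.

def remap_lanes(physical_lanes, lane_reversed: bool):
    """Return lanes in logical order, correcting per-lane polarity."""
    lanes = list(reversed(physical_lanes)) if lane_reversed else list(physical_lanes)
    corrected = []
    for lane_id, inverted, bit in lanes:
        corrected.append((lane_id, bit ^ 1 if inverted else bit))
    return corrected

# A x4 link routed with the lane order reversed and lane 2's D+/D- swapped:
physical = [(3, False, 1), (2, True, 0), (1, False, 1), (0, False, 0)]
print(remap_lanes(physical, lane_reversed=True))
# -> [(0, 0), (1, 1), (2, 1), (3, 1)]  logical order restored, polarity fixed
```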
2. Single Chassis Layout
Since ASI and PCI Express have identical physical and data link layers, ASI shares the same electrical requirements as PCI Express. For links that traverse the backplane, board designers must take many factors into consideration to ensure signal quality.
A typical intra-chassis data path runs from the transmitter through board traces, a backplane connector, backplane traces, and a second connector and board traces to the receiver. Figure 4 shows such a path between two ATCA cards across the backplane.
Figure 4: Diagram showing a signal path within a chassis.
As Figure 4 indicates, in an ASI-enabled ATCA backplane the signal travels through a series of connected elements, each of which can degrade the signal if not properly managed.
First, let's look at some of the electrical requirements of ASI that bear on backplane design (a simple budget-check sketch follows the list):
- Loss: Many factors, such as trace length and width, vias, and connectors, attenuate the differential output voltage. The total interconnect loss allowed by ASI is 13.2 dB.
- Jitter: Includes data-dependent and random jitter. The total jitter allowed is 0.3 UI.
- Link characteristic impedance: Nominal 100 ohms with a tolerance of +/-15 percent for differential impedance; 50 ohms for single-ended DC common-mode impedance.
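As a sanity check, the sketch below adds up per-element losses along an intra-chassis path and compares the total against the 13.2 dB ASI allowance. The individual loss figures are illustrative assumptions chosen for the example, not values from the specification.

```python
# Rough intra-chassis link-budget check against the 13.2 dB ASI allowance.
# Per-element losses below are illustrative assumptions, not spec values.

ASI_LOSS_BUDGET_DB = 13.2

path_losses_db = {
    "add-in card trace (tx side)": 2.0,       # assumed FR-4 trace loss
    "backplane connector (tx)":    0.8,       # assumed connector loss
    "backplane trace":             4.5,       # assumed long backplane run
    "backplane connector (rx)":    0.8,
    "add-in card trace (rx side)": 2.0,
    "vias (4 total)":              4 * 0.75,  # 0.5-1 dB each per the guideline below
}

total = sum(path_losses_db.values())
margin = ASI_LOSS_BUDGET_DB - total
print(f"total loss {total:.1f} dB, budget {ASI_LOSS_BUDGET_DB} dB, "
      f"margin {margin:+.1f} dB")
# -> total loss 13.1 dB, budget 13.2 dB, margin +0.1 dB
# A near-zero margin like this is a warning sign: removing a via or
# shortening the backplane run buys back headroom.
```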
Several factors along the signal path can degrade signal quality. These include connectors, via holes, coupling between traces, coupling between via holes, internal reflections, and radiation.
In an ASI-enabled ATCA architecture, system and board designers can take several steps to minimize the effect of factors that contribute to signal degradation. These steps include:
- Board material: ASI is cost-optimized for four-layer FR-4 designs. At data rates such as 2.5 Gbit/s, FR-4 can deliver satisfactory signal quality and is therefore the preferred choice on cost grounds.
- Balancing backplane thickness and trace width: A thin backplane is generally desirable for cost and mechanical reasons. However, thin backplanes require narrow traces to maintain controlled impedance, and narrow traces have higher loss than wide ones. Designers need to weigh board thickness carefully against signal loss.
- Connectors: Use connectors designed and characterized at the highest data frequency in the design. Connectors intended for high-speed data transfer, such as controlled-impedance connectors, are a good choice.
- Differential trace layout: Use edge-coupled microstrip routing for differential signal pairs on a four-layer FR-4 stack.
- Crosstalk: Keep the spacing between signal pairs at least three times the intra-pair trace spacing to minimize crosstalk, and avoid running channels in parallel over long distances.
- Jitter: Carefully match the trace length on differential pairs to reduce jitter.
- Vias: Each via may contribute 0.5 to 1 dB to the link loss budget. Try to avoid vias and route traces on the same layer; when vias are necessary, keep their number to a minimum. On the add-in card, put one via near the breakout section and one at the edge finger. Use small vias, which present a lower series impedance (see the rule-check sketch after this list).
- Bends: Avoid signal discontinuities; use an arc or 45-degree bend instead of a sharp 90-degree turn.
- AC coupling: AC coupling capacitors ranging between 75 and 200 nF must be placed close to the transmitter on each lane's differential signal pair.
- Ground-plane referencing: Avoid signal pair reference discontinuities such as power splits and voids.
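The sketch below gathers a few of these guidelines (via count and loss, pair-to-pair spacing, AC-coupling capacitor value) into a simple rule check. The thresholds are the ones quoted above; everything else, including the example net data, is made up for illustration.

```python
# Simple design-rule check for a few of the layout guidelines above.
# Thresholds come from the guidelines in this article; the example net
# data is invented for illustration.

def check_net(name, via_count, pair_spacing_mil, intra_pair_spacing_mil, ac_cap_nF):
    issues = []
    if via_count > 2:                            # keep vias to a minimum
        issues.append(f"{via_count} vias (~{via_count * 0.75:.1f} dB of loss)")
    if pair_spacing_mil < 3 * intra_pair_spacing_mil:
        issues.append("pair-to-pair spacing under 3x intra-pair spacing")
    if not 75 <= ac_cap_nF <= 200:               # AC coupling capacitor range
        issues.append(f"AC cap {ac_cap_nF} nF outside 75-200 nF range")
    print(f"{name}: {'OK' if not issues else '; '.join(issues)}")

check_net("fabric ch1 lane0", via_count=2, pair_spacing_mil=21,
          intra_pair_spacing_mil=7, ac_cap_nF=100)
check_net("fabric ch2 lane3", via_count=4, pair_spacing_mil=15,
          intra_pair_spacing_mil=7, ac_cap_nF=220)
# -> fabric ch1 lane0: OK
# -> fabric ch2 lane3: 4 vias (~3.0 dB of loss); pair-to-pair spacing under
#    3x intra-pair spacing; AC cap 220 nF outside 75-200 nF range
```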
3. Multiple-Chassis Interconnection
For inter-chassis ASI connections, in addition to following the rules listed above, the designer needs to reserve more room in the link budget, since additional loss is introduced by the extra board traces, the cable, and the cable connectors, as shown in Figure 5.
Figure 5: Diagram showing an inter-chassis signal path.
Another issue that may arise in a multi-chassis environment, for non-AC-coupled systems, is that different chassis may have mismatched digital grounds. If not dealt with properly, this difference will adversely affect data-link performance.
The ASI spec requires all transmitters to be AC coupled so that DC common-mode voltage sharing between devices in different chassis is not a factor; each device can operate at its own DC common-mode voltage without being affected by the other.
4. Cabling

Fiber-optic cable remains the primary means of inter-chassis connection for high-speed data transfer. It can deliver a high-quality signal safely and reliably over very long distances.
However, the cost of fiber-optic cabling is high due to factors such as connector materials, electro-optic (EO) modules, and adapter cards. Recent improvements in production techniques and falling prices for connector materials and adapter cards are making it more attractive.
For distances under 20 m, copper cabling can be a good candidate for ASI inter-chassis connections. Because it needs no EO conversion modules, copper cable is a cost-effective solution, and with techniques such as passive equalization, pre-emphasis, and adaptive equalization, its reach can be extended even further.
5. Shelf Management
ATCA defines a distributed management model on three levels: board, shelf, and system. Zone 1 of the ATCA chassis is dedicated to shelf-management communication. The intelligent platform management interface (IPMI) provides the control interface between system management and the PCI Express implementation.
On an ASI-enabled ATCA backplane, an intelligent platform management bus (IPMB) is implemented on the backplane for communication between the shelf manager and the IPMI controllers. An IP-based network interface on the shelf manager provides communication with system-management applications. An ASI-enabled ATCA system can use this built-in management channel for system management.
A cost-saving alternative is to use ASI's integrated control virtual channels for this task, which reduces cost and power and increases reliability compared with a separate management fabric.
In this article, we focused mainly on the hardware implementation side of an ASI-enabled ATCA design. As the article showed, ASI is an attractive backplane option for an ATCA design, but it also brings implementation issues. By following the guidelines described above, designers can address those issues and develop an effective ASI-enabled ATCA architecture.
References

Advanced Switching Core Specification 1.0, ASI-SIG
PCI Express Base Specification 1.0, PCI-SIG
PICMG 3.0 Specification, PICMG
PICMG 3.4 Specification, PICMG
High-Speed Digital Design, Howard Johnson et al.
High-Speed Link Design and Simulation, DesignCon 2002
Board Design Issues for PCI Express Interconnect, Intel
About the Author
Zhijian Hua is an applications engineer in the Intelligent Switch Fabric Group at Vitesse Semiconductor Corp. He holds a master's degree in electrical engineering from Purdue University and can be reached at firstname.lastname@example.org.