Editor's Note: Like many folks, I use Ethernet in one form or another every day without actually having much of a clue as to how it works. Of course, the fact that FPGAs are increasingly being used in Ethernet-based systems means that it would be useful for me to discover more. Hence, I was delighted to be presented with the opportunity to (a) read this two-part article and (b) share it with you.
See also Part 1
In part one of this two-part mini-series, we reviewed the history of advances in Ethernet to support multimedia applications. Now, in part two, the focus is on the details of the three components of audio/video bridging (AVB) and how these components work in unison to enable high-bandwidth multimedia applications with guaranteed delay and jitter.
AVB specifications overview
The audio/video bridging (AVB) system relies on three specifications and also makes use of the existing IEEE 802.1 and 802.3 standards. The first step is establishing the AVB cloud, within which streaming services are provided under latency and jitter constraints. Cloud establishment uses the existing 802.1 link layer discovery protocol (LLDP) standard, with minor enhancements, together with 802.3 link capabilities.
The system then makes use of precise time synchronization to establish the sync cycle. This sync cycle forms the basis of data sourcing at stream-talkers; bridges use this sync to forward streaming traffic. This sourcing, however, is not enough, as the system could still be oversubscribed (that is, having more traffic sourced than the slowest link could carry), causing excessive delay and jitter. To overcome this potential problem, the stream reservation protocol (SRP) reserves resources required for the stream. This reservation isolates the streaming traffic from bursty TCP traffic and other events in the system.
Even with the SRP and synchronized traffic sources in place, bridges are prone to accumulate jitter over several hops. This jitter is caused by contention with asynchronous traffic at each hop and accumulates over each hop in the traffic path. To overcome the jitter accumulation problem, this article presents a pseudo-synchronous data forwarding model for bridges.
AVB cloud establishment
An Ethernet AVB cloud is established using a combination of 802.3 link capabilities, a small enhancement to 802.1AB LLDP, and link delay measurements performed using IEEE 802.1AS. For a device to be part of an AVB cloud, it must meet the following three requirements:
- The link between peers must be point-to-point full duplex. This is enforced through Ethernet auto-negotiation, as specified in IEEE 802.3. Auto-negotiation is performed on link initialization, either under control of the configuration manager or on request from the link partner.
Auto-negotiation provides a mechanism for a device to advertise all possible modes of operation that it supports to another device at the remote end of a link segment, and to receive information on modes that the other device supports. This enables the devices at both ends of a link segment to reject the use of operational modes not shared by both devices. Where more than one common mode exists between the two devices, they use a priority resolution table to decide on a single mode of operation.
Auto-negotiation is specified as an option for 10Base-T, 100Base-TX and 100Base-T4, but it is required for 100Base-T2 and 1000Base-T implementations.
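The priority-resolution step can be sketched in a few lines. The mode names and ordering below are an illustrative subset of the IEEE 802.3 priority table, and the function name is my own; treat this as a sketch of the idea, not the standard's state machine:

```python
# Illustrative priority order for auto-negotiation resolution: faster and
# more capable modes rank higher. This ordering approximates the IEEE 802.3
# priority table but is simplified for illustration.
PRIORITY = [
    "1000BASE-T full duplex",
    "1000BASE-T half duplex",
    "100BASE-T2 full duplex",
    "100BASE-TX full duplex",
    "100BASE-T2 half duplex",
    "100BASE-T4",
    "100BASE-TX half duplex",
    "10BASE-T full duplex",
    "10BASE-T half duplex",
]

def resolve_mode(local_modes, partner_modes):
    """Return the highest-priority mode common to both link partners,
    or None if the devices share no mode of operation."""
    common = set(local_modes) & set(partner_modes)
    for mode in PRIORITY:  # walk from best to worst
        if mode in common:
            return mode
    return None

local = {"1000BASE-T full duplex", "100BASE-TX full duplex", "10BASE-T half duplex"}
partner = {"100BASE-TX full duplex", "10BASE-T half duplex"}
print(resolve_mode(local, partner))  # -> 100BASE-TX full duplex
```

For AVB, the relevant outcome is simply whether the resolved mode is full duplex; half-duplex links fail the first requirement above.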
- The peer device must be AVB-capable. This is enforced using 802.1 LLDP with a new type-length-value (TLV) element. The 802.1 LLDP specification does not currently define such a protocol data unit (PDU), so a new TLV was defined to enforce this requirement; this will require an enhancement to 802.1 LLDP.
802.1 LLDP specifies a mechanism by which stations attached to an Ethernet link advertise the capabilities of the device. Information distributed through LLDP is presented to a management entity, such as a simple network management protocol (SNMP) agent or a network management system (NMS), as a standard management information base (MIB).
LLDP is a one-way protocol: the transmitter, the receiver, or both may be enabled independently. LLDP is also independent of the bridge port state established by the spanning-tree protocol (STP); that is, a port blocked by STP can still exchange LLDP frames if the protocol is enabled. The LLDP data unit (LLDPDU) is a sequence of short, variable-length information elements known as TLVs.
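The TLV framing LLDP uses is compact: a 7-bit type, a 9-bit length, and up to 511 octets of value. A minimal sketch of encoding and decoding one such element follows; the example payload bytes are made up purely for illustration, not the actual AVB-capability TLV contents:

```python
import struct

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Pack an LLDP TLV: 7-bit type and 9-bit length share a 16-bit
    header, followed by the value octets."""
    assert 0 <= tlv_type < 128 and len(value) < 512
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value

def decode_tlv(data: bytes):
    """Unpack one TLV from the front of a buffer.
    Returns (type, value, remaining bytes)."""
    (header,) = struct.unpack("!H", data[:2])
    tlv_type, length = header >> 9, header & 0x1FF
    return tlv_type, data[2:2 + length], data[2 + length:]

# Hypothetical capability TLV -- the type and payload here are
# placeholders for illustration only.
tlv = encode_tlv(127, b"\x00\x80\xc2\x01")
t, v, rest = decode_tlv(tlv)
```

An LLDPDU is then just a concatenation of such TLVs, terminated by an end-of-LLDPDU TLV of type 0 and length 0.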
- There must be no transparent device on the link that can introduce more traffic, nor one that adds extra delay. This is enforced through link delay measurement using 802.1AS, described further in the next section.
The AVB link establishment procedure follows this three-step sequence. After power-up or after the link auto-negotiation process is completed, the device must first ensure that the link is full duplex. After link negotiation, LLDPDUs are exchanged to ensure that both devices are AVB-capable. Then the 802.1AS precision time protocol (PTP) is used to ensure that no transparent devices exist and that link delay is less than 2 microseconds.
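Taken together, the three requirements amount to a simple gate on each link. The sketch below captures that logic; the function and parameter names are my own, and only the 2-microsecond delay bound comes from the text:

```python
MAX_LINK_DELAY_NS = 2_000  # 2 microseconds, the AVB link delay bound

def link_is_avb_capable(full_duplex: bool,
                        peer_avb_tlv_seen: bool,
                        measured_delay_ns: float) -> bool:
    """Gate a link into the AVB cloud. Inputs are assumed to come from
    802.3 auto-negotiation, the LLDP exchange, and the 802.1AS Pdelay
    measurement, respectively."""
    if not full_duplex:        # step 1: point-to-point full duplex
        return False
    if not peer_avb_tlv_seen:  # step 2: peer advertised AVB capability
        return False
    # step 3: delay bound rules out hidden transparent devices
    return measured_delay_ns < MAX_LINK_DELAY_NS
```

Only links for which all three checks pass become part of the AVB cloud; everything beyond them is treated as legacy Ethernet.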
Precise time synchronization – 802.1AS
In the following topics, we consider how a common synchronized time is maintained throughout the system, covering the steps involved: selecting the common clock master from which to synchronize time, calculating the link delays needed for transmit-time calculations, synchronizing time with the clock master, and performing clock-rate adjustments.
PTP is provided in the AVB domain by IEEE 802.1AS. The purpose of time sync is to provide the common 125 µs cycle throughout the AVB cloud and also to provide a common time base so that source and destinations have a sense of how sampling and receiving times are related.
In this protocol, each device maintains its own time using a locally sourced clock. However, to maintain synchronization between equipment in the network, the device will synchronize from a single reference clock device known as a grandmaster. To achieve this goal, each device calculates its clock offset from the grandmaster and uses this offset to get the global network time. This happens every 10 ms. To reduce the clock drift (or wander) between 10 ms synchronization periods, the slave device adjusts its local clock frequency to match that of the grandmaster. This process is called syntonization.
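The offset and syntonization arithmetic can be illustrated with a simplified, single-hop sketch. The function names and the example nanosecond values are assumptions for illustration; a real implementation works across multiple hops and filters these estimates:

```python
def clock_offset(master_timestamp_ns: int,
                 local_rx_timestamp_ns: int,
                 link_delay_ns: int) -> int:
    """Offset of the local clock from the grandmaster, given the master
    timestamp carried in a sync message, the local receive time, and the
    previously measured link delay (simplified single-hop view)."""
    return (master_timestamp_ns + link_delay_ns) - local_rx_timestamp_ns

def rate_ratio(master_t0: int, master_t1: int,
               local_t0: int, local_t1: int) -> float:
    """Syntonization: estimate the grandmaster-to-local clock-rate ratio
    from timestamps of two successive sync messages."""
    return (master_t1 - master_t0) / (local_t1 - local_t0)

# Example: the master advances 10_000_000 ns between syncs while the
# local clock counts 10_000_500 ns, so the local oscillator runs fast
# and its frequency must be scaled down by this ratio.
r = rate_ratio(0, 10_000_000, 0, 10_000_500)
```

Between synchronization points, the slave applies the rate ratio to its local tick so that its notion of network time drifts as little as possible.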
As mentioned in the preceding section, the synchronization process starts with establishing a LAN-wide clock master called the grandmaster. 802.1AS refers to a device capable of being a grandmaster or a slave as an ordinary clock, while bridges are called transparent clocks because of their pass-through nature.
To assist in grandmaster selection, each station is associated with a distinct preference value. The grandmaster is the station with the best preference value. The grandmaster selection process uses a protocol similar to the rapid spanning tree protocol (RSTP) algorithm specified in 802.1D. Each station sends out an announcement message that has the required preference information. The receiving stations compare this preference value and forward this to their neighbor if the observed preference value indicates a better grandmaster clock. Thus all the stations arrive at the same grandmaster.
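The preference comparison at the heart of this election can be sketched as a lexicographic tuple comparison, with lower values winning. The fields below are a simplified stand-in for the full set carried in real announce messages, and the class name is my own:

```python
from dataclasses import dataclass

@dataclass(order=True, frozen=True)
class Preference:
    """Simplified grandmaster preference vector. Real announce messages
    carry more fields (clock class, accuracy, variance, and so on);
    lower values are better, so plain field-by-field comparison in
    declaration order picks the winner."""
    priority: int
    clock_quality: int
    tie_break_priority: int
    clock_identity: int  # unique per station, guarantees a tie-breaker

def better_grandmaster(a: Preference, b: Preference) -> Preference:
    """Return the preference that indicates the better grandmaster."""
    return min(a, b)  # lexicographic comparison: lower wins
```

Each station compares a received preference with the best it has seen so far and forwards only the winner, so the whole LAN converges on a single grandmaster.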
Link delay measurement (Pdelay)
Fig 1 illustrates the mechanism to measure the propagation time on an 802.3 full-duplex wired link using Pdelay. This measurement is made by the port at each end of every 802.3 full-duplex wired link.
Figure 1. Pdelay process timing diagram.
The process is initiated when a requesting port sends a Pdelay_req message on the link whose delay it wants to measure. The transit time for this message is the difference between its transmit and receive time stamps. The receiving port then transmits a Pdelay_resp frame, whose transit time is measured in the same way. The actual link delay is the average of these two measured times.
The receive time stamp for the Pdelay_req frame and the transmit time stamp for the Pdelay_resp frame are communicated back to the requester for calculation by sending them in a Pdelay_Resp_Follow_Up frame.
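The exchange produces four timestamps, which make the averaging concrete. A minimal sketch follows; it ignores the neighbor clock-rate correction a full implementation would apply, and the variable names are my own:

```python
def pdelay(t1: int, t2: int, t3: int, t4: int) -> float:
    """Mean one-way propagation delay from the four Pdelay timestamps:
    t1 = Pdelay_req transmit time (requester's clock)
    t2 = Pdelay_req receive time (responder's clock)
    t3 = Pdelay_resp transmit time (responder's clock)
    t4 = Pdelay_resp receive time (requester's clock)
    The responder's turnaround time (t3 - t2) is subtracted from the
    round trip before averaging the two directions."""
    return ((t4 - t1) - (t3 - t2)) / 2

# Example nanosecond timestamps: 800 ns propagation each way with a
# 1000 ns turnaround at the responder.
d = pdelay(1_000, 1_800, 2_800, 3_600)  # -> 800.0
```

Note that t2 and t3 are measured on the responder's clock, which is exactly why they are returned in the Pdelay_Resp_Follow_Up frame: only their difference, the turnaround time, matters, so the two clocks need not be synchronized for this measurement.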