SONET/SDH transport networks have long struggled to transport data efficiently and to provision bandwidth and new services flexibly. Virtual concatenation, a recent advance in SONET/SDH framing, has now made it possible for SONET to carry data cost-effectively.
Virtual concatenation is a way of resizing fixed SONET containers into non-standard SONET pipes based on customers' dynamic bandwidth requirements, while sharing existing pipes and nodes. Take the example of a service provider aggregating Fast Ethernet customers over OC-48. In a traditional SONET network, an operator could at best aggregate 16 Fast Ethernet channels by provisioning an OC-3/STS-3 pipe for each one. This wastes roughly 50 Mb/s of bandwidth per link, even assuming the customer fully utilizes the Fast Ethernet link.
Virtual concatenation allows the creation of non-standard SONET containers of roughly 100 Mb/s from two STS-1 pipes. This approach allows the aggregation of 24 customers, making the same network 50 percent more efficient.
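To put numbers on the example, here is a quick back-of-the-envelope check (Python is used purely for illustration; the slot counts are standard SONET figures):

```python
# OC-48 carries 48 STS-1 tributaries. Compare customers served per frame:
# traditional OC-3/STS-3 provisioning vs. two-STS-1 virtual concatenation.

OC48_STS1_SLOTS = 48
STS1_PER_OC3 = 3       # a traditional OC-3 pipe consumes three STS-1 slots
STS1_PER_VCAT_FE = 2   # STS-1-2v: two STS-1s (~100 Mb/s) per Fast Ethernet

traditional = OC48_STS1_SLOTS // STS1_PER_OC3   # 16 customers
vcat = OC48_STS1_SLOTS // STS1_PER_VCAT_FE      # 24 customers

print(traditional, vcat)                                           # 16 24
print(f"{(vcat - traditional) / traditional:.0%} more customers")  # 50% more customers
```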
Virtual concatenation also allows the transparent transport of multiple types of traffic in the same payload. A service provider can sell fractional SONET bandwidth to a customer as and when the customer needs it, regardless of the traffic type. In an oversubscribed network, for example, a service provider can aggregate different types of data traffic, such as Fibre Channel and partial Gigabit Ethernet traffic from enterprises, and transport it all in the same SONET frame (Figure 1).
Figure 1: OC-48 frame with oversubscribed traffic of different types.
When building equipment to support virtual concatenation, differential delay is an important factor to consider. SONET/SDH networks employing virtual concatenation impose tough differential delay requirements that must be accounted for in switch and router designs. That means designers need to pay very close attention to the design and implementation of the memory subsystem housed in their equipment.
Different memory technologies can dramatically affect how well a SONET/SDH network supports virtual concatenation. One of the big choices designers must make is between SRAM and SDRAM in the system architecture. Both memory options have their benefits and drawbacks in SONET architectures supporting virtual concatenation. Here are some tips for easing the choice.
What is differential delay?
Before diving into the pros and cons of each memory architecture, it's important to first understand differential delay. The best way to explain this concept is to recap "inverse multiplexing," which is used to transport a higher-order signal over a number of lower-order signals.
If, for example, a customer needed bandwidth greater than a T1 but less than a T3, carriers would often bundle multiple T1 lines together and constrain the network so that all of the T1s followed exactly the same path and were treated in the same way. This allowed the carrier to create a virtual pipe at a non-standard data rate. Meeting these requirements, however, means opening several label-switched paths (LSPs) in parallel in a single operation, which may not be practical.
With virtual concatenation (VC-n/STS/VT), the tributaries need not follow the same path. To support Gigabit Ethernet over SONET in a virtually concatenated pipe, all a carrier needs to do is provision STS-3c-7v or STS-1-21v tributaries, which can sit anywhere in the SONET payload.
In point-to-point communication, these tributaries need not be transported together, and through a switch they can follow different routes. In fact, the recently defined link capacity adjustment scheme (LCAS) for VC-n-Xv suggests that the group of X VC-n signals be routed over at least two different routes.
Since the SONET data structure is byte-interleaved across all of its tributaries, and the tributaries need not arrive at the same time on the receive side, packets can be assembled for further parsing and forwarding only after all the tributaries that make up the virtual pipe have been received. The difference in arrival times at the receive end of a given virtual pipe/group is called differential delay.
For lower-order (LO) virtual concatenation, the K4 byte of the path overhead (POH) is used to transport a 32-bit control code, which is repeated every 128 frames (16 ms). The first five bits of this code indicate the frame count, and the next six bits form the sequence indicator. In higher-order (HO) virtual concatenation, the H4 multiframe transports the control codes for re-sequencing and realignment.
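As a rough sketch of how a receiver might pull these two fields out of the 32-bit control code: the bit numbering here is an assumption for illustration only, with bit 0 of the serial K4 string taken as the most significant bit of the assembled word.

```python
def parse_lo_vcat_control(word32: int):
    """Split a 32-bit LO-VCAT control word into its leading fields.

    Assumption (for this sketch only): the first five bits carry the
    frame count and the next six bits the sequence indicator, with bit 0
    treated as the most significant bit of the integer.
    """
    frame_count = (word32 >> 27) & 0x1F  # bits 0-4: frame count
    sequence = (word32 >> 21) & 0x3F     # bits 5-10: sequence indicator
    return frame_count, sequence

# Example: frame count 5, sequence indicator 10 packed into the top 11 bits
word = (5 << 27) | (10 << 21)
print(parse_lo_vcat_control(word))  # (5, 10)
```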
To support differential delay and its compensation, the SONET frame receiver must (1) store the SPEs/ VCs; (2) uniquely identify them by looking at their H4 byte or K4 string (depending on HO or LO concatenation); (3) calculate the differential delay; and (4) compensate for it by realigning the SPEs/VCs (Figure 2).
Figure 2: Diagram showing the data flow associated with the differential delay handler.
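The four steps can be sketched as follows. This is a minimal illustration, not a production framer: it assumes the caller has already extracted each tributary's multiframe number from the H4 byte or K4 string, and it models the storage as simple per-member FIFOs.

```python
from collections import deque

class DifferentialDelayHandler:
    """Illustrative sketch of differential delay compensation: store
    arriving SPEs/VCs per member, track them by multiframe number,
    measure the delay spread, and release an aligned set only when every
    member of the group has delivered the same multiframe."""

    def __init__(self, members):
        self.fifos = {m: deque() for m in members}  # step 1: per-member storage

    def receive(self, member, multiframe_number, payload):
        # step 2: multiframe_number comes from the H4/K4 control code
        self.fifos[member].append((multiframe_number, payload))

    def differential_delay(self):
        # step 3: spread between the newest multiframe seen on each member
        heads = [f[-1][0] for f in self.fifos.values() if f]
        return max(heads) - min(heads) if heads else 0

    def pop_aligned(self):
        # step 4: realign -- release one payload per member once all FIFOs
        # hold the same oldest multiframe number
        if all(self.fifos.values()):
            oldest = [f[0][0] for f in self.fifos.values()]
            if len(set(oldest)) == 1:
                return {m: f.popleft()[1] for m, f in self.fifos.items()}
        return None

h = DifferentialDelayHandler(members=[0, 1])
h.receive(0, 0, "payload-a")   # member 0 arrives early
h.receive(0, 1, "payload-c")
h.receive(1, 0, "payload-b")   # member 1 lags by one multiframe
print(h.differential_delay())  # 1
print(h.pop_aligned())         # {0: 'payload-a', 1: 'payload-b'}
```

In real hardware the FIFOs live in the external memory discussed next, and the release-when-all-members-agree check is what ties the memory size to the maximum supported differential delay.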
What memory to choose?
The amount of storage required to support virtual concatenation in a system architecture depends on the maximum differential delay to be supported. Although G.707 recommends a maximum of 256 ms, the typical requirement is a maximum differential delay of +/- 16 ms. This means a maximum of 256 frames (each frame is 125 μs, so a 32-ms window holds 256 frames) may have to be stored before realigning.
At OC-48 speeds, this translates to almost 75 Mb of storage, assuming the entire payload consists of virtually concatenated channels. At OC-192 speeds, this translates to almost 300 Mb of data storage.
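Those figures can be reproduced with simple arithmetic. The sketch below counts only SPE payload bytes (783 per STS-1 per frame), which is an assumption about what gets buffered; storing full frames including overhead would raise the numbers slightly.

```python
# Rough storage sizing for +/-16 ms of differential delay: a 32-ms window
# is 256 frames of 125 us, and each STS-1 contributes one SPE (87 columns
# x 9 rows = 783 bytes) per frame. Back-of-the-envelope only.

SPE_BYTES_PER_STS1 = 783
FRAMES = 256  # 32 ms / 125 us

def storage_mbits(sts1_count):
    return sts1_count * SPE_BYTES_PER_STS1 * 8 * FRAMES / 1e6

print(f"OC-48:  {storage_mbits(48):.0f} Mb")   # OC-48:  77 Mb
print(f"OC-192: {storage_mbits(192):.0f} Mb")  # OC-192: 308 Mb
```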
The storage requirement may be slightly higher, depending on how much control information an engineer wants to store externally along with the data. This could mean using multiple banks of high-density RAMs to realize the storage capacity. For higher speeds, higher-density storage is required, a need that SDRAMs can meet. Let's look at the advantages of both SRAMs and SDRAMs.
One of the best ways to compare the performance of SRAMs and SDRAMs is to look at bandwidth. Since delay compensation involves both writing and reading the data transferred from the SONET framer, a minimum of approximately 20 Gbps of bandwidth is required to support OC-192 speeds, assuming the entire payload consists of virtually concatenated channels. The minimum required bandwidth for OC-48 is 5 Gbps.
If an engineer uses SDRAM, additional bandwidth is needed to take care of refresh/pre-charge and other overhead cycles. A 20 to 25 percent bandwidth margin is typical.
SRAM, by contrast, makes a high bandwidth margin unnecessary. SRAMs with zero wait states for back-to-back read/write operations minimize the overhead cycles. Quad-data-rate (QDR) SRAMs can push the bandwidth even higher.
A common technique to increase the bandwidth is to increase the bus width of the storage interface. So if a single memory device cannot achieve the required bandwidth, multiple devices can be cascaded to achieve the desired bandwidth. However, when the real estate on the board and the overall power consumption of the box are decisive factors in component selection, which they often are, minimizing the number of devices and loading on the address/control signals will achieve the desired result.
These days double-data-rate (DDR) SDRAMs are available that can achieve bus speeds of 266 MHz -- nearly double the speed of standard SDRAMs. Each DDR SDRAM can contribute to around 8 Gbps of bandwidth with a x32 interface.
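Putting the bandwidth pieces together, a hypothetical sizing for OC-192 might look like this. The line rate is rounded to 10 Gbps and the 25 percent margin is the SDRAM overhead figure mentioned earlier; both are illustrative simplifications.

```python
import math

# How many x32 DDR SDRAM devices at 266 MT/s cover the read+write
# bandwidth for OC-192 delay compensation, with refresh/precharge margin?

LINE_RATE_GBPS = 10            # OC-192, approximate
required = LINE_RATE_GBPS * 2  # every byte is written once and read once
required *= 1.25               # 25% SDRAM overhead margin -> 25 Gbps

per_device = 266e6 * 32 / 1e9  # 266 MT/s on a 32-bit bus ~= 8.5 Gbps

print(math.ceil(required / per_device))  # 3
```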
On-chip storage is also a good way to evaluate SRAM vs. SDRAM performance. In a system providing virtual concatenation capabilities, the payload -- extracted from the SONET frame by the SONET receiver -- is byte-interleaved for various STS channels. Similarly, the data sent by the memory to the multichannel delineation engine (see Figure 2 above) has to be interleaved to some degree (not at the byte level). This allows delineation to happen for all the virtually concatenated channels in parallel; all channels are treated fairly based on their individual bandwidth.
Depending on the kind of memory used externally, the data for each such channel may have to be transferred in a burst. The data for each channel and for each frame, coming from the framer, gets accumulated and then is written into the memory. The burst size will be high for SDRAMs in order to keep the overhead cycles to a minimum. It is very low for SRAMs as the overhead cycles are minimal.
When a burst transfer is desired, the bytes need to be stored in an internal buffer until an appropriate burst size is reached. The burst size and the total number of supported channels determine the on-chip storage requirement.
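The on-chip requirement is essentially channel count times burst size. The burst sizes below are assumptions chosen only to show the SDRAM/SRAM contrast, not figures from any datasheet.

```python
# Illustrative on-chip buffer sizing: the buffer holds one burst's worth
# of bytes per channel before the external write. Channel count assumes
# OC-192 at STS-1 granularity (192 channels).

def onchip_bits(channels, burst_bytes):
    return channels * burst_bytes * 8

print(onchip_bits(192, 512))  # 786432 (long SDRAM-style bursts)
print(onchip_bits(192, 16))   # 24576  (short SRAM-style bursts)
```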
The internal buffer makes sure that there is no data starvation at the framer, the memory interface, or the multi-channel delineation engine. The buffer also helps adjust the bus widths between the internally used datapath width and the memory bus width.
A typical on-chip storage requirement for OC-192 when using SDRAM is 1 MB, assuming channels at STS-1 granularity. For OC-48, the requirement is about a quarter of that. When using SRAM, the storage requirement is on the order of a few tens of kilobits for OC-48 and a few hundred kilobits for OC-192.
When SDRAMs are used, separate buffers are also required to store pointer information for each frame in order to do the alignment after differential delay calculation. Designers may decide to have this on-chip for easier handling, or may opt to make it exist along with the data in external buffers as control information.
For OC-48 solutions, it is best to choose an 18-Mb (x36) non-blocking SRAM. Assuming an operating speed of 133 MHz, two such devices will be required in cascade to meet the 5-Gbps minimum bandwidth. Three such banks will be required to support +/- 16 ms of differential delay.
The choice of differential delay, however, is a subjective decision that depends on the equipment manufacturer and the network topology. For smaller differential delays, an engineer may choose a single bank, supporting roughly 8 to 14 ms; the exact figure depends on the utilized width of the memory bus. Using the full 72-bit width makes more of the storage accessible and hence supports a higher differential delay.
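A quick sanity check of the bank counts, assuming the roughly 75-Mb OC-48 storage figure from earlier (computed here from SPE payload bytes) and 36 Mb per bank of two cascaded 18-Mb devices:

```python
import math

# Storage needed for a 32-ms (+/-16 ms) window at OC-48, counting SPE
# payload only (783 bytes per STS-1 per 125-us frame), vs. bank capacity.

NEED_MBITS = 48 * 783 * 8 * 256 / 1e6  # ~77 Mb
BANK_MBITS = 2 * 18                    # two cascaded 18-Mb SRAMs per bank

print(math.ceil(NEED_MBITS / BANK_MBITS))        # 3 banks for +/-16 ms
print(f"{32 * BANK_MBITS / NEED_MBITS:.0f} ms")  # ~15-ms window with one bank
```

The single-bank figure lands near the upper end of the 8-to-14-ms range quoted above, which is consistent with using the full 72-bit bus width.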
As an alternative, the designer may select an 18-Mb QDR memory device. With QDR, half the number of memory devices provides the same memory bus width, but supports only half the differential delay.
As discussed above, differential delay support requires high-density memories with high bandwidth. This accounts for a high pin count and power consumption. Designers may also select on-chip memory with whatever density may be accommodated, but at the cost of reducing the amount of differential delay supported.
In differential delay scenarios, SDRAM requires both complex control logic and high-density on-chip buffering. And although an SDRAM provides a lot of memory, a single device lacks adequate bandwidth, so multiple SDRAM devices must be cascaded.
SRAMs, on the other hand, bring a more elegant solution with smaller logic overheads. Thus, they are a better choice for design engineers enabling virtual concatenation capabilities in system/equipment architectures.
About the Author
Gopal Garg is a director in the WAN business unit at Cypress Semiconductor. He holds EE and MBA degrees from BITS, India, and can be reached at email@example.com.
Somnath Paul is a staff design engineer in the WAN business unit of Cypress Semiconductor. He holds a B. Tech degree from the Indian Institute of Technology, Kharagpur, India, and can be reached at firstname.lastname@example.org.