Core Internet routers forward more than 270 million packets per second, with an average latency of less than 20 microseconds and an average jitter of less than 10 microseconds. But independent of core switching speeds, congestion remains an obstacle to delivering quality of service (QoS). So core routers rely on the access networks to determine each packet's QoS needs and make the appropriate bit settings in each packet, for example by setting the IP precedence bits. The core routers use that information to identify the service class and set up a dedicated queue for voice-over-Internet Protocol (VoIP) traffic, keeping jitter minimal and delay consistent in the backbone.
The focus of delivering QoS thus turns from the core to the access networks, in particular the last-mile systems. How do we deliver circuit-switched quality with packet efficiencies in shared broadband environments such as cable, passive optical networks (PONs) and wireless?
The industry's ability to solve this problem will determine whether broadband-access systems will be profitable. For most service providers, profit is determined not only by deployment of cost-effective systems, but also by the ability to deliver differentiated services (DiffServ), service-level agreements and ancillary services that provide additional revenue, while at the same time supporting extended levels of oversubscription.
In the last mile, the goal is to deliver a genuine broadband user experience, and that means more than just bandwidth. Networking vendors and service providers like to talk about physical-layer signaling rates, because they're easy to understand and verify, and because they tend to inflate system performance. But what really matters is the subscriber's perception of the system's performance, and what the subscriber actually experiences is delay: how quickly a Web page displays, how a voice call sounds and how the audio and video hold up during interactive gaming.
Point to multipoint
To meet this broadband goal, access systems such as cable, wireless broadband and PONs must address some difficult challenges:
It's broadband, not narrowband. Although the TCP/IP networking protocols have existed for more than 30 years, and high-capacity transmission has been in use for nearly as long, only recently have the two been combined in broadband shared-access systems. Previously, most high-capacity transmission networks were used to transport large amounts of narrowband voice and data, and most TCP/IP traffic over WANs was narrowband data with little or no real-time content.
In many cases (cable modems, sub-11-GHz wireless and PONs, for example), broadband must operate in a point-to-multipoint topology. The choice is largely economic. Where point-to-point physical facilities already exist, as with the voice pairs used for xDSL, it is practical to use that topology. But cable systems are inherently point-to-multipoint, and wireless and fiber systems are cost-effective only in point-to-multipoint topologies. Transporting broadband data across this configuration poses a substantial internetworking challenge: a point-to-multipoint topology makes it difficult to deliver even bandwidth efficiency, let alone deterministic access. How is contention eliminated, how are jitter and delay budgets met and how is control overhead reduced, all while maximizing link utilization and oversubscription?
In some cases, as with cable modems and sub-11-GHz wireless, the physical layer is noisy. The system must efficiently and effectively transport broadband TCP/IP traffic even when the transport medium (RF for wireless and cable, for example) is impaired. This impairment, commonly termed link-layer impairment, can be the result of the noisy and interference-prone nature of radio transmissions.
There's an unpredictable mix of real-time and nonreal-time traffic. This is primarily an economic constraint. If money didn't matter, we would handle voice on the circuit-switched public switched telephone network, broadcast video on cable or satellite, and nonreal-time data on broadband packet-based systems like today's Internet. But mixing information types on a single packet network reduces infrastructure costs enormously. This presents challenges, though. Packets vary in length, so latency cannot be easily guaranteed when the medium is being shared. Packets carry multiple protocols and payloads, with widely varying bandwidth and delay requirements. And packet traffic is not well behaved, but bursty; counterintuitively, the distribution of packet arrivals is not Poisson, as with voice traffic, but "self-similar," meaning that the bursty nature of the traffic remains no matter how many users are in the system. The access systems must be engineered to handle unpredictable fluctuations in the degree of burstiness.
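The self-similarity point can be illustrated with a small simulation. The sketch below, with illustrative parameters throughout, compares Poisson-like traffic against an on/off source whose burst lengths follow a heavy-tailed Pareto distribution; the classic variance-time signature of self-similar traffic is that the variance of the average rate decays much more slowly as the averaging window grows.

```python
import math
import random
import statistics

random.seed(7)

def poisson(lam):
    # Knuth's inverse-CDF method; adequate for small lambda
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def pareto(alpha, xm=1.0):
    # heavy-tailed sample: infinite variance when alpha < 2
    return xm / (random.random() ** (1.0 / alpha))

def onoff_trace(slots, alpha=1.2):
    # one source sending 1 packet/slot during Pareto-length ON bursts
    trace, on = [], True
    while len(trace) < slots:
        trace.extend([int(on)] * (int(pareto(alpha)) + 1))
        on = not on
    return trace[:slots]

def block_mean_var(trace, m):
    # variance of the mean rate over blocks of m slots
    means = [sum(trace[i:i + m]) / m for i in range(0, len(trace) - m + 1, m)]
    return statistics.pvariance(means)

N = 100_000
smooth = [poisson(1.0) for _ in range(N)]   # Poisson-like traffic
bursty = onoff_trace(N)                     # heavy-tailed on/off traffic

for m in (1, 10, 100):
    print(f"m={m:3d}  poisson var {block_mean_var(smooth, m):.4f}  "
          f"on/off var {block_mean_var(bursty, m):.4f}")
```

For the Poisson trace the variance falls roughly as 1/m, so averaging over 100 slots smooths it by about two orders of magnitude; the heavy-tailed trace barely smooths at all, which is exactly why adding more users doesn't tame bursty packet traffic.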
Before we consider the technology elements that must come together to meet these challenges, let's note two myths that surround the concept of QoS:
QoS can be added to any system. In fact, the challenges described above are very difficult to reconcile, and the fundamental architectural elements of the system must be designed with them clearly in mind. It is possible to add VoIP support to almost any broadband system, just as it's possible to put pontoons on almost any car, but it won't necessarily work well.
QoS can be provided by simply adding DiffServ or multiprotocol label switching. DiffServ and MPLS manage IP flows through congested links and devices in the core, but do not address many of the transport issues that disrupt broadband IP flows in the access portion of the network. Here, shared access to the underlying physical layer, mediated by the media-access control, or MAC, layer software, and problems that are intrinsic to the access physical media itself, all need to be addressed, and these QoS signaling protocols are of no help. This has clearly been recognized by the standards community in the development of CableLabs' Docsis 1.1, IEEE 802.11e and IEEE 802.16.
What are the essential elements required for delivering QoS in point-to-multipoint broadband systems? First, a reliable physical layer is crucial for TCP/IP performance, and a low-latency physical layer is required for real-time traffic. Radio transmissions, for example, tend to have intrinsically high bit-error rates even in the best environmental conditions; link utilization may be high, yet subscribers' service quality declines because of the packet loss. This was an important lesson from the early days of ATM, when signaling rates of 155 Mbits/second stood in sharp contrast to very poor user experiences, due to slow segmentation-and-reassembly (SAR) chips, nonoptimal buffering schemes and a poor understanding of the implications of running TCP/IP across ATM. TCP/IP and packet loss due to physical-layer limitations don't mix: TCP assumes packet loss signals congestion rather than link-layer impairment, and responds incorrectly by throttling its sending rate. Inherent in the physical-layer design must be the ability, under all conditions, to provide bounded delay and broadband bandwidth to each subscriber.
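How severely loss caps TCP throughput can be estimated with the well-known Mathis approximation, throughput ≈ (MSS/RTT) · C/√p with C ≈ 1.22. The segment size, round-trip time and loss rates below are illustrative, not measurements from any particular system.

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Mathis et al. approximation for steady-state TCP throughput."""
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

# 1460-byte segments over a 50-ms round trip
for p in (1e-5, 1e-3, 1e-2):
    print(f"loss {p:g}: {tcp_throughput_bps(1460, 0.050, p) / 1e6:.1f} Mbit/s")
```

With these parameters the ceiling drops from roughly 90 Mbits/s at a 0.001 percent loss rate to under 3 Mbits/s at 1 percent: no matter how fast the physical layer signals, an impaired link throttles every TCP flow crossing it.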
Another important element is a highly deterministic, efficient MAC-layer protocol. If hundreds of customer-premises units are allowed to access the upstream bandwidth at will, it becomes almost impossible to ensure that real-time flows receive bandwidth at the moments required to meet jitter and latency budgets. A solution is to make the system operate as a single virtual state machine, in which all transmissions from the various customer-premises equipment (CPE) sites are controlled centrally. The MAC protocol must be driven by a protocol-aware scheduler. This is a key difference between a real-time architecture and traditional LANs: LAN protocols are designed for simplicity and cannot support real-time flows without wasting bandwidth.
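Central control of upstream transmissions can be sketched as a grant scheduler that builds a per-frame transmission map, loosely in the spirit of a Docsis upstream MAP: real-time flows get their reserved slots unconditionally, and whatever remains is shared among best-effort bandwidth requests. The flow names and slot counts here are hypothetical.

```python
def build_map(frame_slots, rt_flows, be_requests):
    """Build one upstream frame map.

    rt_flows:    {flow_id: slots reserved per frame}  (granted first)
    be_requests: {flow_id: slots requested}           (granted from leftovers)
    Returns {flow_id: slots granted}.
    """
    grants, free = {}, frame_slots
    for fid, need in rt_flows.items():            # unsolicited periodic grants
        got = min(need, free)
        grants[fid] = got
        free -= got
    for fid, want in sorted(be_requests.items()): # leftovers, in a fixed order
        got = min(want, free)
        grants[fid] = grants.get(fid, 0) + got
        free -= got
    return grants

# hypothetical frame: a voice flow reserves 2 of 10 slots every frame
print(build_map(10, {"voice": 2}, {"web-a": 5, "web-b": 6}))
# → {'voice': 2, 'web-a': 5, 'web-b': 3}
```

Because the voice grant recurs every frame regardless of data backlog, its jitter is bounded by the frame period; the best-effort flows absorb whatever capacity is left.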
Also needed is an upper-layer protocol analysis function. Until the system knows that one flow is real-time voice, while another is a file transfer, it can't give each the characteristics it needs.
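At its simplest, such a protocol analysis function matches packet header fields against configured rules. This toy classifier assumes a dict-based packet representation; DSCP 46 is the standard Expedited Forwarding code point, while the RTP port range shown is a common convention rather than a universal rule.

```python
def classify(pkt):
    """Map a packet's headers to a service class (toy rule set)."""
    if pkt.get("dscp") == 46:                        # Expedited Forwarding
        return "voice"
    if pkt["proto"] == "udp" and 16384 <= pkt["dport"] <= 32767:
        return "voice"                               # common RTP port range
    if pkt["proto"] == "tcp" and pkt["dport"] in (20, 21):
        return "bulk"                                # FTP file transfer
    return "best-effort"

print(classify({"proto": "udp", "dport": 20000}))    # → voice
print(classify({"proto": "tcp", "dport": 21}))       # → bulk
```

Production classifiers are rule tables provisioned by the operator (as in Docsis classifiers) rather than hard-coded heuristics, but the principle is the same: only once a flow is identified can the scheduler give it the treatment it needs.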
For greater control in assigning QoS parameters to different types of traffic, a service-flow model is required. This model enables per-subscriber support of data, voice and video traffic. With per-flow queuing, each subscriber's flow is shielded from the bandwidth demands of others and is provided with intelligent allocation and control of the system resources.
The most difficult-and crucial-function to implement is a protocol-aware scheduler. Once the system learns the upper-layer protocol operating on each flow and matches those protocols with the service-agreement parameters, it must allocate and schedule system resources and physical transport media. That means balancing the needs of hundreds or thousands of flows, in real-time, many frames in advance.
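A minimal way to realize the flow-to-resource mapping is strict priority across per-flow queues, sketched below with illustrative class names. Real schedulers must also rate-limit the high-priority classes so they cannot starve best-effort traffic, a refinement omitted here.

```python
import heapq
from itertools import count

class FlowScheduler:
    """Toy strict-priority scheduler over per-flow queues.

    Lower class number is served first; FIFO within a class
    (the monotonically increasing sequence number breaks ties).
    """
    PRIO = {"voice": 0, "video": 1, "best-effort": 2}

    def __init__(self):
        self._heap, self._seq = [], count()

    def enqueue(self, service_class, packet):
        heapq.heappush(self._heap,
                       (self.PRIO[service_class], next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = FlowScheduler()
sched.enqueue("best-effort", "web-1")
sched.enqueue("voice", "rtp-1")
sched.enqueue("best-effort", "web-2")
print(sched.dequeue(), sched.dequeue(), sched.dequeue())  # → rtp-1 web-1 web-2
```

The voice packet jumps the queue even though it arrived second, which is the essence of meeting a jitter budget; the hard part in a real system is doing this across thousands of flows while also planning grants several frames ahead.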
File transfer packets can be very large, while real-time packets are usually quite small. If a small packet comes along a fraction of a microsecond after a big packet has started transmission, the small packet will have to wait, which may make it impossible for it to meet jitter and latency requirements. In fast packet multiplexing, large packets are segmented into small chunks before being sent and are then reassembled at the other end.
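The arithmetic behind segmentation is simple serialization delay: a frame of B bytes occupies the channel for B × 8 / rate seconds, and any packet behind it must wait at least that long. The upstream rates and sizes below are illustrative.

```python
def serialization_delay_ms(frame_bytes, link_bps):
    """Time to clock one frame onto the wire, in milliseconds."""
    return frame_bytes * 8 / link_bps * 1000

# worst-case wait behind one maximum-size frame, before and after segmentation
for rate in (1_000_000, 10_000_000):
    whole = serialization_delay_ms(1500, rate)
    chunk = serialization_delay_ms(64, rate)   # segmented into 64-byte chunks
    print(f"{rate / 1e6:g} Mbit/s: whole frame {whole:.1f} ms, "
          f"64-byte chunk {chunk:.3f} ms")
```

On a 1-Mbit/s upstream, a 1,500-byte frame blocks the channel for 12 ms, which alone can blow a VoIP jitter budget; segmenting it into 64-byte chunks bounds the wait at about half a millisecond, at the cost of per-chunk overhead and reassembly work.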
Multidimensional subscriber profiles are also important. It will be difficult for an operator to survive without differentiated pricing, since costs to support widely varying service demands will themselves differ.