Voice-over-packet technology converts narrowband voice, fax and data traffic from the circuit-switched format used in telephone and wireless cellular networks to packets that can travel over next-generation Internet Protocol (IP) or asynchronous transfer mode (ATM) optical networks.
Over the past 30 years, two disparate networks have developed independently to serve two distinct needs: the digital Public Switched Telephone Network (PSTN) for legacy voice traffic and the packet-based Internet for data traffic. Until recently, carriers and service providers focused on delivering either voice service over the PSTN or data service over a packet network. Since the deregulation of the U.S. telecommunications market in 1996, however, competition has forced them to offer subscribers multiple services over a single connection.
Using a single, converged network to move narrowband voice, fax and data traffic from circuit-switched networks to packet-based networks lets carriers and service providers take advantage of the efficiency, flexibility and ubiquity of packet networks. Service providers can significantly expand both the number of customers and the variety of differentiated voice, video and data services over a single broadband connection at a lower cost.
Lower overall cost is the main reason carriers and service providers deploy converged networks. However, as revenues decline from voice transport, they must derive new revenue from other services. Initially, carriers and service providers can offset declining revenue by transporting data, video and other traffic over a converged network. But the sustaining value proposition of convergence is the other services it can deliver. Today these services include caller ID, call waiting, three-way calling and other custom calling features that already produce more revenue than simply transporting bits of traffic. Enhanced features such as interactive voice response, unified messaging, automatic speech recognition, personal dial prompt and capabilities not yet envisioned are the services that will drive revenue for years to come.
See related chart
To meet the demands of carrier-grade applications, a multiservice switch solution for the converged network must deliver a mix of high density, programmability, low power and scalability. A solution must provide the highest density in the smallest space and form factor: it must fit in the existing rack space while providing even greater functionality than existing equipment. Carriers need flexible solutions so they can quickly add new features and support new standards without installing new equipment. A flexible, programmable solution that protects their investment is especially important as the trend moves toward less capital investment and longer depreciation periods. Many central offices (COs) already have extensive power-supply and cooling systems in place.
Performance improvement therefore cannot come at the expense of increased power consumption, more heat dissipation or rewiring COs. As COs upgrade to optical-transport systems to meet growing subscriber capacity and bandwidth requirements, switch performance must scale significantly to keep up with the performance curve of optical technologies. To mitigate technology obsolescence and extend the equipment's time in market, scalability is imperative.
Equipment suppliers must deliver complete yet flexible solutions so service providers can support multiple protocols and offer multiple services. With time-to-market windows constantly accelerating, equipment suppliers must continuously deliver more features, sooner.
To be successful, an equipment supplier must deliver a differentiated solution to carriers quickly, or lose the market to faster competitors. For equipment suppliers, the main obstacle is the need to use existing technologies that were not designed to solve the technical issues associated with network convergence. What equipment suppliers need is focused technology that helps them create the best solutions for the service-driven, next-generation network.
For network convergence, a next-generation multiservice switch must perform full interworking functions across a range of protocols that include time-division multiplexing (TDM), IP and ATM. Such switches must perform the following functions: ingress traffic termination and conversion, voice and data processing and packetization, egress conversion and transmission, signaling, and overall system control and management. To optimize performance and reliability in the high-density systems used in converged networks, those functions are partitioned and implemented independently.
A typical voice gateway system integrates multiple voice-processing cards along with ingress, egress and system-control cards. The voice-processing cards perform the TDM-to-packet interworking functions, which involve digital signal processing (DSP) of the payload data, followed by packetization, header processing and aggregation to create a high-speed packet stream.
The voice-processing card functionality can be split into control and data plane functions that have different requirements from a gateway designer's perspective.
Control plane functions include board and device management, command interpretation, call control and signaling conversion, and messaging to call-management servers. Control plane functions use complex, highly differentiated software that equipment suppliers typically implement on high-performance, general-purpose processors.
Data plane functions provided by the bearer channel (which carries all voice and data traffic) include all TDM-to-packet processing: DSP, packet processing and header processing. In contrast to control plane functions, data plane functions dominate a voice-processing card's cost, channel density, board real estate and power budget.
The bearer channel's DSP voice-processing functions include network echo cancellation (128-ms tail), voice compression/decompression (G.7xx) and silence suppression. The DSP telephony functions include the processing of in-band signaling tones such as DTMF, MFR1/R2 and call-progress tones. All those functions are compute-intensive, and because they must be performed for numerous simultaneous active calls or sessions, they lend themselves to vectorization and multiprocessing. Currently, most of this processing is performed by general-purpose DSP devices with low channel densities (for example, the 24 channels of a single T1). Because general-purpose DSP devices are not optimized for high-channel-count infrastructure applications, the result is high-cost, low-capacity voice-processing card designs.
The bearer channel's packet-processing functions include packetization for multiple protocols (such as RTP/UDP/IP and ATM AAL2/AAL5) and network-related quality-of-service (QoS) functions such as managing the jitter buffer, recovering lost packets, generating statistics and aggregating protocol data units (PDUs) from channelized DSP processing. Such packet-processing functions are memory- and control-intensive: they perform a range of bit-manipulation operations, in contrast to the mathematical calculations performed by the DSP functions. Today, most packet-processing functions are performed either by an array of high-end processors (resulting in high power consumption and high cost) or by a custom device such as a field-programmable gate array (resulting in hardware inflexibility). As voice-processing card densities increase, these packet-processing methods are difficult to scale.
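To illustrate the jitter-buffer function, here is a heavily simplified receive-side buffer sketch in C that reorders PDUs by RTP-style sequence number and reports gaps for loss concealment. The structure names and the eight-slot depth are assumptions made for illustration, not a description of any product.

```c
/* Toy jitter buffer: packets land in a slot chosen by sequence number,
 * and playout pulls them back in order, flagging gaps (lost packets). */
#include <string.h>

#define JB_SLOTS 8   /* illustrative playout depth of 8 packets */

typedef struct {
    unsigned short seq[JB_SLOTS];
    int            used[JB_SLOTS];
} jitter_buf;

void jb_init(jitter_buf *jb) { memset(jb, 0, sizeof *jb); }

/* Insert an arriving packet, however out of order it is. */
void jb_put(jitter_buf *jb, unsigned short seq)
{
    int slot = seq % JB_SLOTS;
    jb->seq[slot]  = seq;
    jb->used[slot] = 1;
}

/* Pop the packet expected next in playout order.
 * Returns 1 on a hit, 0 on a gap (lost or late packet),
 * in which case loss concealment would run upstream. */
int jb_get(jitter_buf *jb, unsigned short expect_seq)
{
    int slot = expect_seq % JB_SLOTS;
    if (jb->used[slot] && jb->seq[slot] == expect_seq) {
        jb->used[slot] = 0;
        return 1;
    }
    return 0;
}
```

Note that the work is all table lookups, compares and flag updates, the bit-manipulation profile described above, with essentially no arithmetic.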
An optimal voice-processing card architecture will account for the nature of data processing and memory requirements for different functions by partitioning processing, maximizing channel density and minimizing power consumption. As voice-processing card densities increase from 336 channels to 2,016 channels (OC-3) and 8,064 channels (OC-12), an optimal voice-processing card architecture will ensure scalability by partitioning both the control plane and management functions to meet the demands of the next-generation network.
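The channel counts cited above follow directly from DS0 arithmetic: a T1 carries 24 DS0 voice channels, an OC-3 carries 84 T1s (three DS-3s of 28 T1s each), and an OC-12 carries four times the payload of an OC-3. The fragment below simply encodes that arithmetic.

```c
/* DS0 channel arithmetic behind the density figures in the text. */
enum {
    DS0_PER_T1   = 24,                       /* voice channels per T1   */
    T1_PER_OC3   = 84,                       /* 3 DS-3 x 28 T1          */
    DS0_PER_OC3  = DS0_PER_T1 * T1_PER_OC3,  /* 2,016 channels          */
    DS0_PER_OC12 = 4 * DS0_PER_OC3           /* 8,064 channels          */
};
```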
The architecture of a high-density channel-processing card (OC-3 capacity and beyond) must consider the performance and scalability of DSP devices, general-purpose processors and network processors to achieve capacity, cost and power objectives.
One important aspect of DSP architecture is the design of data plane functions. Two design approaches are possible: integrate the DSP and packet processor on the same device, or optimize the DSP and packet-processing functions on separate devices. The first approach combines the math-intensive DSP functions and the memory-intensive packet-processor functions on the same die. Because those two functions require different processing structures, combining them leads either to including excessive memory on the device or to providing an external memory device with each DSP. Integrating the two functions therefore significantly increases both card area and power requirements. While the approach might work for lower-density equipment, it does not provide a high-density, scalable solution.
The second design approach, using separate devices, builds optimized devices for both the DSP and packet-processing functions, with processing engines and memory hierarchies that match each function's requirements. Such a voice-processing card design integrates a common aggregation engine connected to the DSP devices that generate the PDU traffic. A card with this design has large external memories for packet buffering and streaming over the packet interface to the switch fabric or packet backplanes. By aggregating memories into an efficient, common memory store, this approach reduces the memory required on the DSP by more than 50 percent and maximizes the channels per unit area for DSP functions (at least 2x over an integrated DSP). Maximizing channel density per square inch of board area and per watt of power yields an optimized card design.
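One way to picture the common aggregation engine is as a round-robin sweep over per-DSP PDU queues that feeds a single egress stream. The sketch below is a simplified model under that assumption; the queue depths, the integer stand-in for a PDU and the function names are invented for illustration.

```c
/* Toy model of an aggregation engine: PDUs queued by several DSP
 * devices are merged round-robin into one egress packet stream. */
#define NUM_DSPS    4
#define QUEUE_DEPTH 16

typedef struct {
    int pdus[QUEUE_DEPTH];   /* integer stands in for a PDU handle */
    int head, tail;
} dsp_queue;

void q_push(dsp_queue *q, int pdu)
{
    q->pdus[q->tail++ % QUEUE_DEPTH] = pdu;
}

int q_pop(dsp_queue *q, int *pdu)
{
    if (q->head == q->tail) return 0;      /* queue empty */
    *pdu = q->pdus[q->head++ % QUEUE_DEPTH];
    return 1;
}

/* One aggregation pass: visit each DSP queue in turn, copying any
 * ready PDU into the single egress stream. Returns PDUs emitted. */
int aggregate(dsp_queue dsps[NUM_DSPS], int out[], int out_cap)
{
    int n = 0, progress = 1;
    while (progress && n < out_cap) {
        progress = 0;
        for (int i = 0; i < NUM_DSPS && n < out_cap; i++)
            if (q_pop(&dsps[i], &out[n])) { n++; progress = 1; }
    }
    return n;
}
```

Because the merge needs no intervention from a board-control processor, adding DSP devices scales the fan-in without adding control traffic, which is the point of a dedicated aggregation engine.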
A second important aspect of DSP architecture is the control and management of DSP devices. Since older-generation voice-processing cards use very low-density, general-purpose DSPs (1 to 12 channels), a general-purpose processor could control and manage call setup, teardown, channel assignment and statistics gathering. However, as DSP performance improves by an order of magnitude (10 to 20x higher), control of DSPs will result in severe system bottlenecks unless control functions are distributed within each DSP device. Thus, an on-chip control processor for managing and allocating DSP control tasks can alleviate the bottleneck and produce scalability.
An optimized DSP architecture design must minimize the processing power (or MHz) per channel. Processing power can be minimized by deploying a programmable pool of processor engines for incoming call streams. Programmability allows algorithm upgrades in the field, helping prevent premature equipment obsolescence. Memory (or Mbits) per channel should also be minimized. Memory requirements can be minimized by providing a hierarchical memory on the device (instead of using a simple, flat memory that can result in a large portion of the die area holding high-speed DSP data).
An optimized packet and aggregation processor should interface seamlessly to an array of DSP devices that push and pull packets to and from DSP memories, producing a single packet stream without intervention from a board-control processor. The solution should support interfaces to transport IP packets and ATM cells to switch fabric or Ethernet backplane links, yielding a multiprotocol, unified design.
The processor should interface to a common external memory pool to store and forward packets or cells to the backplane or the DSP devices. It's also important for the system to manage jitter and the priority queuing of packets or PDUs to the DSP devices. The design should include a pool of optimized, programmable packet engines to perform packetization and header-processing functions. Moreover, the system should manage control plane interactions with the board-control processor, using an on-chip control processor for a scalable solution.
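The priority-queuing requirement can be sketched as a strict-priority scheduler in which bearer voice PDUs always dequeue ahead of lower-priority traffic. The two-level structure below is an illustrative model, not a description of any specific device; real designs would add more levels and weighted scheduling.

```c
/* Two-level strict-priority scheduler: level 0 (voice PDUs) always
 * drains before level 1 (best-effort data), FIFO within a level. */
#include <string.h>

#define PQ_LEVELS 2
#define PQ_DEPTH  32

typedef struct {
    int items[PQ_LEVELS][PQ_DEPTH];
    int count[PQ_LEVELS];
} prio_q;

void pq_push(prio_q *q, int level, int pdu)
{
    if (q->count[level] < PQ_DEPTH)
        q->items[level][q->count[level]++] = pdu;
}

/* Dequeue the highest-priority pending PDU; returns 0 if empty. */
int pq_pop(prio_q *q, int *pdu)
{
    for (int lvl = 0; lvl < PQ_LEVELS; lvl++) {
        if (q->count[lvl] > 0) {
            *pdu = q->items[lvl][0];
            memmove(&q->items[lvl][0], &q->items[lvl][1],
                    (size_t)(--q->count[lvl]) * sizeof(int));
            return 1;
        }
    }
    return 0;
}
```

Strict priority keeps voice latency bounded even under data load, which is why delay-sensitive bearer traffic sits at the top level.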
Finally, programmability of the DSP and packet-processor engines is best achieved through instruction-set optimizations for a given application. Pursuing programmability through other reconfigurable techniques results in higher power dissipation and a larger die area devoted to gate-level switching control. A partitioned, optimized and programmable DSP and packet-processor approach leads to a density improvement of 3 to 4x in channels per unit area and channels per unit power, and it allows rapid scaling.