# Improving performance, range, and reliability in 802.11n networks

The wireless IEEE 802.11n standard brings high-bandwidth capabilities to a wide variety of mainstream applications. With the ability to support live video streaming, 802.11n enables the home network without wires and all but eliminates the remaining barriers to ubiquitous wireless access in the workplace and public areas.

While the hooks to achieve this are in place within the standard, 802.11n also offers engineers a variety of options for sustaining signals to achieve higher effective throughput at greater distances. The key is to choose the correct options. Through more advanced algorithms and techniques, developers can design 802.11n equipment that delivers performance close to the theoretical maximum while still competitively balancing cost and complexity. This feature describes the various options and where they fit.

The high-performance foundation of 802.11n is multiple-input, multiple-output (MIMO) technology, which uses multiple transmit and receive antennas to create a virtual antenna array that enables the transmission of concurrent data streams. However, while MIMO increases overall throughput, MIMO signals can be perturbed, making it difficult to sustain high-performance wireless links. If signals cannot be sustained between the transmitter and receiver, noise and errors will degrade performance.

For example, decoding errors, lack of phase alignment and marginal demodulation between different receivers contribute to higher bit error rates (BER), resulting in more retransmissions, wasted signal energy, and increased interference with other networks or equipment using the same spectrum.

The 802.11n specification outlines a variety of techniques to improve MIMO performance and efficiency, including spatial multiplexing, beamforming, and advanced coding schemes. By dynamically and intelligently directing transmitted energy toward the receiver, energy can be concentrated so that the majority of it reaches the receiver rather than "polluting" the wireless environment and interfering with other wireless systems. Given the portability of many wireless devices, systems must also be able to continuously adapt to changing reflections.

With techniques such as low-density parity-check (LDPC) codes, maximum likelihood (ML) equalization, and transmit beamforming (TxBF), engineers can sustain signals to compensate for or avoid the signal losses that degrade performance. Applied to real-world networks, these techniques yield higher data rates over greater distances, with increased reliability, in a cost-effective fashion.

**LDPC: reducing decoding errors**

Reliability is crucial for performance. Noise and errors manifest as incorrect bits in the data stream; if the receiver cannot resolve these bits, the transmitter must resend data, resulting not only in a loss of effective bandwidth but also in increased latency that adversely affects the quality of real-time data such as streaming video or audio.

IEEE 802.11n has traditionally employed convolutional coding techniques to increase link reliability. Through its ability to detect and correct a limited number of bit errors in each packet, convolutional coding is a proven method for reducing the overall bit error rate (BER) of a channel. This improved reliability, however, comes at the cost of efficiency, measured as the ratio of signal gain achieved to the overhead imposed. A rate-1/2 code, for example, converts each information bit into two bits to be transmitted across the link, thus consuming twice the bandwidth in exchange for greater robustness.
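The rate-1/2 behavior can be sketched with the constraint-length-7 encoder (generator polynomials 133 and 171 octal) that 802.11 specifies; the shift-register model below is a simplified illustration, not production code:

```python
# Sketch of the rate-1/2 convolutional encoder defined by 802.11
# (constraint length K = 7, generator polynomials 133 and 171 octal).
# Each input bit yields two coded bits, doubling the bandwidth consumed.

G0, G1 = 0o133, 0o171   # generator polynomials
K = 7                   # constraint length

def parity(x):
    return bin(x).count("1") % 2

def conv_encode(bits):
    state = 0
    out = []
    for b in bits:
        # shift the new bit into a K-bit window of recent input bits
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(parity(state & G0))   # first coded bit
        out.append(parity(state & G1))   # second coded bit
    return out

encoded = conv_encode([1, 0, 1, 1])
print(len(encoded))   # 4 input bits -> 8 coded bits
```

A Viterbi decoder on the receive side exploits this redundancy to correct a limited number of bit errors per packet.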

LDPC has long been recognized as an efficient coding technique compared to traditional convolutional coding. Using what is referred to as a belief propagation algorithm, LDPC employs an error-correction mechanism that approaches the Shannon limit. Until recently, the primary drawbacks of LDPC have been its large implementation footprint and its iterative decoding process, which is extremely computationally intensive.

New advances in coding theory have made it practical for engineers to implement LDPC without the high cost and overhead associated with a full implementation. Using what is known as a layered belief propagation (LBP) algorithm, engineers can trade computational complexity for lower cost and faster convergence. Compared to a full belief propagation implementation, LBP offers a smaller footprint, superior efficiency, and higher performance at lower cost.

Specifically, the efficiency of the LBP algorithm comes from its iterative approach. Accuracy, and therefore performance, is a function of how many iterations of the algorithm are run. The number of feasible iterations, however, is constrained by channel throughput and latency. Performance is enhanced through the use of partial iterations, which make optimal use of the decoding time available for each code word. This allows the system to further minimize convergence time within a given system's constraints. In addition, the small footprint of the LBP algorithm allows for scalable pipelining to increase performance based on available resources. The use of a dynamic scaling factor based on channel estimates also contributes to faster convergence.
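The layer-by-layer update pattern can be sketched on a toy code. The parity-check matrix below is a small (7,4) Hamming code chosen for brevity, not an actual 802.11n LDPC matrix, and the decoder is a bare-bones min-sum variant of layered belief propagation:

```python
import numpy as np

# Toy layered min-sum belief-propagation decoder. H is a small (7,4)
# Hamming parity-check matrix standing in for a real 802.11n LDPC
# matrix; each parity check is treated as one "layer".
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def decode(llr, iterations=5):
    llr = llr.astype(float).copy()
    msgs = np.zeros(H.shape)                # stored check-to-bit messages
    for _ in range(iterations):
        for row in range(H.shape[0]):       # process one layer at a time
            cols = np.flatnonzero(H[row])
            ext = llr[cols] - msgs[row, cols]      # remove old messages
            for i, c in enumerate(cols):
                others = np.delete(ext, i)
                # min-sum check-node update: sign product, smallest magnitude
                new = np.prod(np.sign(others)) * np.min(np.abs(others))
                llr[c] = ext[i] + new       # updated belief is visible
                msgs[row, c] = new          # immediately to the next layer
        hard = (llr < 0).astype(int)
        if not np.any((H @ hard) % 2):      # all checks satisfied: stop early
            break
    return (llr < 0).astype(int)

# One weakly received, flipped bit is corrected back to the all-zero codeword.
print(decode(np.array([2., 2., 2., -1., 2., 2., 2.])))
```

Because each layer folds its updated beliefs straight back into the running log-likelihood ratios, later layers in the same iteration already see the improvement, which is where LBP's faster convergence comes from.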

Altogether, the efficiency of LBP-based LDPC decoding is substantial: measurements have shown roughly a 2 dB performance gain for the same overhead compared with conventional coding methods. The reduced footprint of an LBP implementation can also provide substantial cost savings.

**Maximum likelihood: more accurate demodulation**

Another area of performance improvement available to engineers is advanced demodulation of spatially multiplexed streams. Because MIMO requires multiple transmit antennas, multiple data streams arrive at the receiving antennas overlaid on one another. The MIMO equalizer must separate the data streams from each other and restore them to their original independent signals. Conventional 802.11n systems use zero-forcing (ZF) MIMO equalization for this task. Newer maximum likelihood (ML) methods, however, offer greater accuracy.
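A minimal sketch of zero-forcing equalization, using an illustrative (made-up) 2x2 channel matrix: the receiver applies the pseudo-inverse of its channel estimate to pull the overlapping streams apart.

```python
import numpy as np

# Zero-forcing MIMO equalization for a 2x2 system (illustrative values).
# Two streams overlap at the receive antennas; inverting the channel
# estimate separates them again.
H = np.array([[0.9, 0.3],
              [0.2, 1.1]])         # estimated 2x2 channel matrix
x = np.array([1.0, -1.0])         # two transmitted BPSK symbols
y = H @ x                         # streams arrive overlaid on each other

x_hat = np.linalg.pinv(H) @ y     # zero-forcing: apply the pseudo-inverse
print(np.round(x_hat, 6))         # recovers [1., -1.] in the noiseless case
```

The weakness of ZF is that inverting an ill-conditioned channel amplifies noise along with the signal, which is why ML detection can outperform it.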

Like the belief propagation algorithm used for LDPC, a full ML implementation is extremely computationally intensive and therefore impractical to employ; for a 3x3 system with 64-QAM it is cost prohibitive. However, as with LBP, engineers have the option of implementing a simplified maximum likelihood (SML) algorithm that provides much of the performance gain and accuracy of a full ML implementation at a fraction of the cost.

An SML-based implementation for a 3x3 system consists of three parallel search engines, one for each supported stream (see Fig. 1 below). Each engine uses QR decomposition to factor the matrix representing the communication channel into an orthogonal matrix and a right (upper) triangular matrix. An ML detector then uses an intelligent search mechanism to dynamically find candidates with minimum error distances.
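The idea can be sketched on a 2x2 QPSK system, where the candidate search stays tiny (the channel values are illustrative): QR decomposition rotates the received vector into an upper-triangular system, and candidates are scored by their error distance.

```python
import numpy as np
from itertools import product

# ML detection after QR decomposition, shrunk to a 2x2 QPSK system so an
# exhaustive candidate search stays small (channel values illustrative).
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def ml_detect(H, y):
    Q, R = np.linalg.qr(H)           # orthogonal Q, right-triangular R
    z = Q.conj().T @ y               # rotate: z = R @ x + noise
    best, best_err = None, np.inf
    for cand in product(qpsk, repeat=H.shape[1]):
        x = np.array(cand)
        err = np.linalg.norm(z - R @ x) ** 2   # candidate's error distance
        if err < best_err:
            best, best_err = x, err
    return best

H = np.array([[0.8 + 0.1j, 0.3 - 0.2j],
              [0.1 + 0.3j, 1.0 + 0.0j]])
x = qpsk[[0, 3]]                     # transmitted symbol pair
x_hat = ml_detect(H, H @ x)          # noiseless: the true pair wins
```

A real SML engine prunes this search rather than enumerating every candidate (64-QAM over three streams would mean 64^3 vectors), which is where its cost savings over full ML come from; the triangular structure of R is what makes stream-by-stream pruning possible.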
