It’s all about spectrum
Explosive growth in wireless data services is once again transforming an industry well accustomed to change. The rapid traffic growth of the 2000s, driven predominantly by voice subscriber additions, is now being followed by an even more dramatic increase in data traffic, as smartphone market penetration accelerates and a flourishing ecosystem of apps drives rapid growth in the amount of data consumed per subscriber.
As this growth continues, spectrum strategy is becoming an ever more crucial factor in determining a wireless operator's success. Spectrum strategy discussions usually center on opening new bands for terrestrial wireless services, but the core issue is providing sufficient and pervasive network throughput to keep pace with accelerating demand. This is a question not only of adequate spectrum, but also of maximizing spectral efficiency: to support continued growth in demand for data services, operators must both increase their spectrum holdings and maximize the throughput per unit of available bandwidth per unit of coverage area (the area spectral efficiency, typically measured in kbps/Hz/km²).
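The area spectral efficiency definition above can be made concrete with a short calculation. The numbers below (carrier bandwidth, aggregate throughput, cell area) are illustrative assumptions, not figures from this article:

```python
def area_spectral_efficiency(throughput_kbps: float,
                             bandwidth_hz: float,
                             area_km2: float) -> float:
    """Area spectral efficiency = throughput / (bandwidth * coverage area),
    returned in kbps/Hz/km^2."""
    return throughput_kbps / (bandwidth_hz * area_km2)

# Assumed example: a 10 MHz carrier delivering 15 Mbps aggregate throughput
# over a 0.5 km^2 cell.
ase = area_spectral_efficiency(15_000, 10e6, 0.5)
print(f"{ase:.2e} kbps/Hz/km^2")  # 3.00e-03 kbps/Hz/km^2
```

The same metric lets an operator compare, on one axis, gains from acquiring more bandwidth against gains from densifying the network or improving the air interface.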
3G and 4G wireless data air interfaces have dramatically improved the theoretical spectral efficiency of wireless networks, to the point that LTE can exceed 80% of the Shannon limit in highly idealized cases. However, even the most elaborate modulation schemes are susceptible to degradation due to interference. As any mobile service subscriber knows, the gap between real-world performance and the theoretical ideal is often large, and in many cases interference is the cause.
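The Shannon limit referenced here is the familiar capacity bound C/B = log₂(1 + SNR). A minimal sketch, using an assumed 20 dB operating SNR that is not taken from the article:

```python
import math

def shannon_capacity_bps_per_hz(snr_db: float) -> float:
    """Shannon spectral-efficiency bound C/B = log2(1 + SNR), in bps/Hz."""
    snr_linear = 10 ** (snr_db / 10)
    return math.log2(1 + snr_linear)

bound = shannon_capacity_bps_per_hz(20)  # ~6.66 bps/Hz at 20 dB SNR
achieved = 0.8 * bound                   # an idealized link at 80% of the limit
print(f"bound = {bound:.2f} bps/Hz, 80% of limit = {achieved:.2f} bps/Hz")
```

Because the bound grows only logarithmically with SNR, any interference that erodes the SINR translates directly into lost spectral efficiency, which is the crux of the argument that follows.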
Interference can originate from myriad sources (Table 1), but the impact on the end user is the same: erratic data throughput that comes nowhere close to the performance levels expected from a channel containing a thermal noise component alone. In essence, the net spectral efficiency of the channel is degraded by the presence of interference.
Protecting spectral efficiency by managing interference
Although radio access technologies have made remarkable advancements in the last decade, the predominant methods for dealing with interference remain mired in the 1950s: when an impairment is noted (often the first alert is a customer complaint), a crew is dispatched to identify the interference source by triangulation and, if possible, make arrangements for it to be switched off. While it is obviously desirable for interference sources to be eliminated when possible, this process is labor-intensive, slow, and leaves customers exposed to the effects of interference while the response team scrambles to identify the issue. For difficult-to-resolve interference problems, this process can last for months - or indefinitely. Worse yet, less severe interference problems, although common, often go undiagnosed, resulting in a degradation of the aggregate spectral efficiency of the network. Remarkably, interference causing an unloaded noise rise of as little as 3 dB can result in reductions of area spectral efficiency approaching 25%.
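The ~25% figure can be sanity-checked against the Shannon bound: a 3 dB noise rise doubles the noise-plus-interference power, halving the SINR. At a moderate operating SNR (10 dB is assumed below; the article does not specify one) the capacity loss comes out close to a quarter:

```python
import math

def capacity_bps_per_hz(snr_linear: float) -> float:
    """Shannon spectral-efficiency bound log2(1 + SNR), in bps/Hz."""
    return math.log2(1 + snr_linear)

snr = 10.0           # assumed 10 dB operating SNR (linear factor of 10)
degraded = snr / 2   # a 3 dB noise rise halves the SINR
loss = 1 - capacity_bps_per_hz(degraded) / capacity_bps_per_hz(snr)
print(f"capacity loss from a 3 dB noise rise: {loss:.1%}")  # 25.3%
```

The exact loss depends on the operating point (it shrinks at very high SNR and grows at low SNR), but the order of magnitude matches the figure quoted above.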
Fortunately, advances in adaptive RF digital signal processing (DSP) provide an avenue for implementing a more interference-tolerant radio. By monitoring the RF signal for spectral distortions inconsistent with the expected signal, and adapting the channel in real time to compensate for the presence of any detected interference, the effects of impairments can be minimized or eliminated. Figure 1 illustrates the typical integration point for an external RF DSP. As is apparent from the diagram, the RF DSP can be thought of as an augmentation to the radio Rx path itself, one that gives the radio the ability to adapt to the instantaneous RF environment.
RF DSPs: Under the hood
Interference can originate at arbitrary sources, so it can have arbitrary spectral signatures. However, since the spectral signature of the underlying air interface is known, departures from the expected power spectral density can be used to identify interference candidates1. Since RF power from multiple sources is present at the receiver, but only the characteristics of the desired signal are known, the interference components must be deduced by statistical methods. Hence, an adaptive system comprises two distinct components: a detection subsystem, capable of executing the required statistical analysis in real time, and a signal processing subsystem capable of changing the system's frequency response based on the decisions made in the detection process.
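The detection step can be sketched in miniature: compare a measured per-bin power spectrum against the expected power spectral density of the air interface and flag bins whose excess exceeds a threshold. The function name, bin values, and 6 dB threshold below are all illustrative assumptions, not details of any real RF DSP:

```python
def flag_interference(measured_db, expected_db, threshold_db=6.0):
    """Return indices of frequency bins where measured power exceeds the
    expected PSD by more than threshold_db -- interference candidates."""
    return [i for i, (m, e) in enumerate(zip(measured_db, expected_db))
            if m - e > threshold_db]

expected = [-100.0] * 8                    # flat expected noise floor, dBm/bin
measured = [-99.5, -100.2, -88.0, -99.9,   # bin 2: narrowband spike (+12 dB)
            -100.1, -99.8, -100.0, -70.0]  # bin 7: strong interferer (+30 dB)
print(flag_interference(measured, expected))  # [2, 7]
```

A production detector would replace the fixed threshold with the real-time statistical analysis described above, but the decision output - a set of suspect frequency bins - is the same.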
Figure 1: "Smart" radio RF path augmentation
A simplified block diagram of a dual-channel RF DSP is shown in Figure 2. An RF input is sampled by an ADC with dynamic range adequate to handle small signals (in the interference-free scenario) as well as large interference powers. Detection algorithms then continuously monitor the characteristics of the input RF signal for the presence of interference. If any is detected, the system self-adapts its frequency response to provide the optimal signal-to-noise ratio at the output port, given the instantaneous interference-plus-noise signal present at the input. This optimized digital version of the input signal is then converted back to analog RF and passed to the transceiver. This process, which may include filtering both within and outside the bandwidth allocated to the receiver's air interface, essentially conditions the spectrum prior to demodulation at the radio.
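The self-adaptation step can likewise be sketched as a per-bin frequency-response adjustment: bins identified by the detection subsystem are attenuated (here, fully notched) while the rest of the channel passes through untouched. The names and values are hypothetical, chosen only to illustrate the idea:

```python
def adapt_response(spectrum, flagged_bins, attenuation=0.0):
    """Scale the flagged frequency bins by `attenuation` (0.0 = full notch),
    leaving the rest of the channel's frequency response unchanged."""
    flagged = set(flagged_bins)
    return [x * attenuation if i in flagged else x
            for i, x in enumerate(spectrum)]

# Bin magnitudes of a sampled block; bins 2 and 7 carry interference.
spectrum = [0.1, 0.1, 5.0, 0.1, 0.1, 0.1, 0.1, 9.0]
cleaned = adapt_response(spectrum, [2, 7])
print(cleaned)  # [0.1, 0.1, 0.0, 0.1, 0.1, 0.1, 0.1, 0.0]
```

In the real system this adjustment runs continuously against the live RF environment, and the conditioned signal is what the DAC converts back to analog for the transceiver.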
Figure 2: Simplified RF DSP block diagram.