The mobile Internet era is spurring development of multiple 4G technologies; not just 3GPP-LTE, but other 4G standards, including Mobile WiMax 802.16e and Japanese XG-PHS, are gaining global footprints. The proliferation of multiple legacy 2G and 3G technologies magnifies the need for development of multimode devices, including chip sets that can process a variety of broadband technologies while handling high throughput at low power consumption.
The cellular market’s fragmentation raises the need for a programmable platform; indeed, development of a powerful yet power-efficient programmable platform is not a new idea. Not until now, however, have such platforms been both technologically feasible and cost-effective.
This article presents the fundamental concepts of 4G technologies; describes the challenges in developing an efficient 4G user equipment (UE) solution and explains why software-defined radio technology is an excellent means for implementing it efficiently; and describes a unique SDR architecture allowing a programmable, powerful yet power-efficient implementation.
Fundamental concepts of 4G
All 4G technologies use orthogonal frequency-division multiplexing (OFDM) for modulation and transmission. As a result, similar algorithmic and implementation challenges must be met across the different 4G technologies.
In OFDM, multiple information elements (for example, QPSK, 16-QAM and 64-QAM symbols) are transmitted in parallel within an OFDM symbol using orthogonal carriers (typically called subcarriers). The subcarrier frequencies are chosen so that the receiver sees minimal intercarrier interference (ICI) at each subcarrier frequency (Figure 1).
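The orthogonality of OFDM subcarriers can be sketched numerically: correlating two complex exponentials at different subcarrier indices over one symbol period yields zero, while a subcarrier correlated with itself yields full energy. The FFT size and indices below are illustrative, not taken from any specific 4G standard.

```python
import numpy as np

N = 64                                # samples per OFDM symbol (illustrative)
n = np.arange(N)

def subcarrier(k):
    """Complex exponential for subcarrier index k over one symbol period."""
    return np.exp(2j * np.pi * k * n / N)

# Different subcarriers correlate to ~0; a subcarrier with itself yields 1
# (after normalization) -- this is what lets the receiver's FFT separate
# the parallel data streams without ICI.
cross = np.vdot(subcarrier(3), subcarrier(7)) / N
auto = np.vdot(subcarrier(3), subcarrier(3)) / N
print(abs(cross), abs(auto))
```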
Several consecutive OFDM symbols form an OFDM frame. An OFDM frame is typically partitioned between downlink (DL) and uplink (UL) subframes, which are used for information exchange from basestation to mobile station and vice versa.
Two common methods used to achieve this partitioning are time-division duplexing (TDD), which allocates different time slots to the UL and DL, and frequency-division duplexing (FDD), which assigns different frequency bands to the different subframes. In TDD, a gap is typically inserted between DL and UL to give the mobile station sufficient time to switch from reception to transmission and vice versa. In FDD, a gap separates the DL frequency band from the UL frequency band, allowing each side of the communication link to filter its received band properly. Figure 2 illustrates TDD and FDD partitioning. The result is a two-dimensional (frequency/time) array used to deliver data between the enhanced Node B and multiple UEs.
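The TDD case can be pictured as a simple schedule of contiguous segments. The sketch below models a hypothetical TDD frame as DL, guard and UL intervals; the durations are purely illustrative and not taken from any standard.

```python
# Hypothetical TDD frame: (role, start_ms, duration_ms). The guard gap
# gives the mobile station time to switch from reception to transmission.
frame = [
    ("DL subframe", 0.0, 2.0),
    ("guard gap",   2.0, 0.1),
    ("UL subframe", 2.1, 1.9),
]

# Sanity check: the segments are contiguous, so the whole frame is used
for (_, s0, d0), (_, s1, _) in zip(frame, frame[1:]):
    assert abs(s0 + d0 - s1) < 1e-12

frame_len = frame[-1][1] + frame[-1][2]
print(frame_len)   # total frame length in ms
```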
Multiple-input/multiple-output (MIMO) schemes are widely used, either to enhance the robustness of the received signal or to achieve higher throughput (using spatial multiplexing techniques).
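As a sketch of spatial multiplexing, the snippet below sends two symbols simultaneously over a hypothetical 2x2 flat-fading channel and separates them at the receiver with zero-forcing detection (noise omitted for clarity); the channel matrix and symbols are illustrative, not from any standard.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 flat-fading channel matrix (one subcarrier)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# Two QPSK symbols transmitted at once from two antennas
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)
r = H @ s                          # received vector (noise omitted)

# Zero-forcing detection: invert the channel to separate the two streams
s_hat = np.linalg.solve(H, r)
print(np.round(s_hat, 6))
```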
In some technologies, such as LTE, a single-carrier scheme (SC-FDMA) is used in the UL to reduce the transmitted signal's peak-to-average power ratio and thereby improve transmitter power efficiency.
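The benefit of DFT spreading can be illustrated numerically: the sketch below compares the average peak-to-average power ratio (PAPR) of plain OFDM subcarrier mapping against SC-FDMA (DFT-spread) mapping over random QPSK blocks. The subcarrier count, IFFT size and trial count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N_sc, N_fft, trials = 12, 64, 500    # illustrative sizes

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    return 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))

ofdm_papr, scfdma_papr = [], []
for _ in range(trials):
    qpsk = (rng.choice([-1, 1], N_sc)
            + 1j * rng.choice([-1, 1], N_sc)) / np.sqrt(2)

    grid = np.zeros(N_fft, complex)
    grid[:N_sc] = qpsk               # plain OFDM: symbols straight onto subcarriers
    ofdm_papr.append(papr_db(np.fft.ifft(grid)))

    grid[:N_sc] = np.fft.fft(qpsk)   # SC-FDMA: DFT-spread before mapping
    scfdma_papr.append(papr_db(np.fft.ifft(grid)))

# DFT spreading restores a single-carrier-like envelope, so the
# average PAPR of the SC-FDMA signal comes out lower
print(np.mean(ofdm_papr), np.mean(scfdma_papr))
```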
In practical wireless channels, multiple copies of the transmitted signal reach the receiver, a phenomenon called multipath (Figure 3).
Two significant effects result from multipath. The first is intersymbol interference (ISI), wherein one OFDM symbol disperses in time into its neighboring OFDM symbols. Adding a cyclic extension, called cyclic prefix (CP), to the basic OFDM symbol can mitigate ISI. The second effect, selective channel fading, results in a different channel response (both amplitude and phase) for each subcarrier. The receiver compensates for that channel response in order to reconstruct the transmitted information properly.
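Both effects, and their remedies, can be sketched end to end: the snippet below adds a cyclic prefix, passes the OFDM symbol through a hypothetical three-tap multipath channel (delay spread shorter than the CP), then discards the ISI-contaminated prefix and compensates each subcarrier's channel response with a single complex division. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, cp = 64, 8                        # FFT size and CP length (illustrative)

# Random QPSK symbols on every subcarrier
sym = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

# OFDM modulation: IFFT, then prepend the last `cp` samples as cyclic prefix
tx = np.fft.ifft(sym)
tx_cp = np.concatenate([tx[-cp:], tx])

# Hypothetical 3-tap multipath channel, shorter than the CP
h = np.array([1.0, 0.5, 0.25j])
rx = np.convolve(tx_cp, h)[:len(tx_cp)]

# Receiver: discard the CP (absorbing the ISI), FFT back to subcarriers
rx_freq = np.fft.fft(rx[cp:])

# Selective fading: each subcarrier sees a single complex gain H[k];
# dividing by the channel frequency response reconstructs the symbols
H = np.fft.fft(h, N)
sym_hat = rx_freq / H
print(np.max(np.abs(sym_hat - sym)))
```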
Additional algorithmic challenges the receiver must address include timing, gain and frequency lock; channel estimation; link adaptation; and measurement of channel conditions and neighboring cells.