If you ever stop to think about it, the precise synchronization of a worldwide cellular network is pretty incredible. However, as complex as the entire system seems, you can boil down the design of a receiver to a few basic parts, from the analog front end to the baseband processor on the back end. This article examines the modulation measurement process on over-the-air cellular traffic as an academic exercise. Note that all packets of cellular data are encrypted, so we won't be able to reconstruct the human voice from the data, but we can still measure the transmission's modulation quality. Using an NI PXI-5661 vector signal analyzer and an NI PXI-5690 preamplifier (low-noise amplifier) along with a few simple programming blocks, we can demodulate Global System for Mobile Communications (GSM) packets using the NI Modulation Toolkit for LabVIEW. During each stage of the demodulation process, you can use several signal processing tools as a sanity check to ensure that each stage of your software-defined radio is working correctly.
Editor's Note: You can view David's dynamic step-by-step video demonstration of the method described below by clicking here.
Considerations for Demodulating Off the Air
Before demodulating the over-the-air transmission, review some of the basic characteristics of the GSM physical layer. According to the standard's specifications, GSM uses the Gaussian minimum shift keying (GMSK) modulation scheme. With a Gaussian pulse-shaping filter (BT = 0.3) and a symbol rate of 270.833 kS/s, GSM has a total RF bandwidth of slightly less than 200 kHz. One interesting difference between the 2G and 3G cellular standards is the channel bandwidth. In contrast with GSM, wideband code division multiple access (WCDMA) has a chip rate of 3.84 MS/s. Using a root-raised cosine filter with a roll-off (alpha) of 0.22, we can calculate the RF bandwidth to be (1 + 0.22) x 3.84 MHz = 4.68 MHz. It is an interesting exercise to view this in light of Shannon's theorem, which gives the maximum achievable data throughput of a channel as a function of bandwidth and SNR (signal-to-noise ratio). The equation is shown below:
Capacity = Bandwidth x log2(1 + SNR)
Equation 1: Shannon's Theorem
As Shannon's theorem suggests, wider bandwidth channels used in 3G networks should provide significantly higher data throughput than the 200 kHz channels in GSM.
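To make the arithmetic above concrete, here is a small Python sketch (not part of the original LabVIEW workflow; the function names are ours) that reproduces the WCDMA bandwidth calculation and applies Shannon's theorem to compare the two channel widths at an assumed SNR:

```python
import math

def rrc_bandwidth_hz(symbol_rate_sps, alpha):
    """Occupied RF bandwidth of a root-raised-cosine-shaped signal."""
    return (1 + alpha) * symbol_rate_sps

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon's theorem: C = B * log2(1 + SNR), SNR as a linear ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# WCDMA: 3.84 MS/s chip rate, alpha = 0.22 -> about 4.68 MHz, as in the text
wcdma_bw = rrc_bandwidth_hz(3.84e6, 0.22)

# At the same assumed SNR (20 dB -> 100 linear), compare channel capacities
snr = 100
gsm_capacity = shannon_capacity_bps(200e3, snr)    # 200 kHz GSM channel
wcdma_capacity = shannon_capacity_bps(wcdma_bw, snr)
```

At equal SNR the capacity scales linearly with bandwidth, so the roughly 23x wider WCDMA channel buys a corresponding increase in theoretical throughput.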
Now that we are familiar with some of the basic characteristics of GSM, we can begin the process of demodulating data packets. The first step is identifying the appropriate carrier. We know that each channel in both the uplink and downlink bands of a cellular network is spaced 200 kHz apart. In the United States, which uses GSM850, we can find the GSM downlink band between 869 and 894 MHz. Using a vector signal analyzer in spectrum analysis mode, we can find a modulated GMSK carrier with the guidance provided above. Using this tool, we are able to visualize modulated carriers in the frequency domain, as illustrated in Figure 1.
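Because the carriers sit on a fixed 200 kHz raster, enumerating the candidate center frequencies to sweep is easy to script. The sketch below is ours, not the article's, and assumes the conventional GSM850 downlink raster of 124 carriers from 869.2 to 893.8 MHz:

```python
def gsm850_downlink_carriers():
    """Candidate carrier center frequencies (Hz) in the GSM850 downlink band.

    Assumes the standard 200 kHz channel raster: 124 carriers starting at
    869.2 MHz (200 kHz inside the 869 MHz band edge) up to 893.8 MHz.
    """
    return [869.2e6 + 0.2e6 * n for n in range(124)]

# A spectrum sweep would tune the analyzer to each of these centers in turn;
# the 877.4 MHz carrier observed later in this article lies on this raster.
carriers = gsm850_downlink_carriers()
```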
Figure 1. Visualization of GMSK in the Frequency Domain
Figure 2. Visualization of WCDMA in the Frequency Domain
Figures 1 and 2 show the difference between GSM and WCDMA in the frequency domain. As the images illustrate, GSM channels occupy 200 kHz of bandwidth, and we can easily spot a carrier at 877.4 MHz. In the United States, we can spot a WCDMA carrier at around 1.882 GHz. Upon quick inspection, we can visually estimate that it occupies slightly less than 5 MHz of bandwidth, fitting with our earlier calculation of 4.68 MHz. Through basic spectrum analysis, we have performed the first of several sanity checks by verifying that the carrier we want to demodulate is indeed GSM. This exercise also uncovers potential challenges we might face in demodulating the GSM transmission. As we observed in the frequency domain, interference from adjacent channels can distort our results if it is not adequately filtered. In the next section, we examine how effectively filtering removes adjacent-channel interference.
The second tool we'll use to analyze the airwaves is the Gabor spectrogram. Because GSM uses a time-division multiple-access channel timing model, this tool will help us visualize both the timing of GSM bursts and the trigger level that we will need to set later. Note that a Gabor spectrogram is a signal processing tool used for joint time-frequency analysis. It operates by dividing a time-domain IQ signal into "slices" and performing a fast Fourier transform (FFT) on each subset of data. The resulting FFT data is then pieced together in a 3D view, which plots the time, frequency, and power of an RF signal. This is often visualized on a color-coded waterfall graph, as illustrated in Figure 3.
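The slice-and-FFT procedure just described is essentially a short-time Fourier transform. The following NumPy sketch is a simplified stand-in for the toolkit's Gabor spectrogram (it uses a Hann rather than a Gaussian analysis window, and all names are ours) to show the idea:

```python
import numpy as np

def spectrogram(iq, fft_size=256, hop=128):
    """Slice a time-domain IQ record into overlapping windows, FFT each
    slice, and stack the results into a (time x frequency) power matrix."""
    window = np.hanning(fft_size)
    n_slices = (len(iq) - fft_size) // hop + 1
    rows = []
    for i in range(n_slices):
        s = iq[i * hop : i * hop + fft_size] * window
        spectrum = np.fft.fftshift(np.fft.fft(s))          # center DC
        rows.append(20 * np.log10(np.abs(spectrum) + 1e-12))  # power in dB
    return np.array(rows)

# Example: a complex tone appears as a single bright frequency bin over time.
fs = 270_833                              # sample rate (Hz), for illustration
t = np.arange(8192) / fs
iq = np.exp(2j * np.pi * 60_000 * t)      # tone at +60 kHz
S = spectrogram(iq)                       # rows = time slices, cols = freq bins
```

Plotting `S` as a color-coded image gives exactly the waterfall view described above, with burst energy standing out against the noise floor in both time and frequency.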
Figure 3. Visualizing the GSM Spectrum in the Joint Time-Frequency Domain
In Figure 3, we can observe the timing information in the downlink band. As a TDMA (time-division multiple access) protocol, each GSM frame lasts 4.615 ms and is divided into eight timeslots, so each timeslot has a duration of 576.875 µs. Also note that the Gabor spectrogram provides insight into the peak power of a GSM burst. According to the graph, setting the trigger level above -25 dBm is essential to ensure that we trigger on a burst with sufficient SNR.
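The timeslot arithmetic is simple enough to check in a couple of lines (a sketch of ours, using the rounded 4.615 ms frame period quoted above):

```python
# Sanity check on the TDMA timing: one 4.615 ms GSM frame
# divided into eight timeslots.
frame_ms = 4.615
slots_per_frame = 8
slot_us = frame_ms * 1000 / slots_per_frame  # duration of one timeslot (us)

print(f"{slot_us:.3f} us per timeslot")  # 576.875 us
```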