Advances in analog-to-digital converter technology have allowed designers of basestations to more seriously consider the implementation of a software-definable radio for cellular-transmission specifications.
The idea of such a radio, based on a programmable and reconfigurable architecture that meets protocol-specific needs, is gaining momentum because of the advantages that digital processing provides to the overall system. The improvements in selectivity and dynamic range offered by digital processing, combined with its inherent temperature stability, predictability and repeatability of operation, offer an appealing alternative to analog.
Moreover, the argument to go digital with receiver design becomes even more compelling when the programmability that allows for adherence to multiple air-interface standards and the front-end reuse of analog circuitry come into the equation. The purpose of a programmable downconverter (PDC) for this application is to receive a wideband digitized signal from the analog-to-digital converter and perform the appropriate decimation, filtering and post-processing functions to allow for proper handling of the link established between the handset and the basestation. It also must meet protocol-specific bandwidth and dynamic-range requirements.
But what makes a good downconverter? The answer can be confined to two important characteristics: The PDC does not corrupt the signal of interest and it offloads as many functions as possible from the baseband processor to maximize the use of this precious resource.
While the ability to maintain signal integrity in the presence of numerous sources of interference is an obvious figure of merit for a PDC, it is by no means a trivial task. Keep in mind that the limiting factor for dynamic range in the processing of the incoming signal for wideband applications lies in the A/D converter. Use that as a starting point to determine what kind of dynamic range needs to be carried through the PDC.
New high-speed A/D converters offer 14-bit resolution at speeds of 65 Msamples/second. Under those conditions, the stated signal-to-noise ratio (SNR) performance at an input frequency of 20 MHz for these circuits is approximately 75 dB, which can be normalized to a noise density (n0) of approximately -150 dBFS/Hz by spreading the noise power across the Nyquist bandwidth. Since the power has been normalized to a per-hertz basis, power comparisons can now be made over a larger bandwidth by assuming that the noise power is uniform across the bandwidth of interest.
For example, in an IS136 (North American TDMA) application, the required minimum noise floor level for the PDC can be calculated by the following equation:
n0 + 10 log(BW) = -150 dB + 10 log(30,000), or approximately -105 dB
In reality, however, more must be added to this figure to assure that the PDC does not add any additional distortion to the signal being received. As a general rule, keeping the SNR level of the PDC 10 dB lower than that of the A/D converter assures minimum degradation-on the order of 0.5 dB-of SNR. Taking this into account, a -115-dB target can be set as a minimum acceptable PDC signal-to-noise performance throughout the tuning and filtering process, within the 30-kHz bandwidth of interest for this protocol. A similar analysis can be performed to determine the needs for the GSM protocol, which results in an SNR level of about -107 dB over a 200-kHz bandwidth.
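The noise-floor arithmetic above can be sketched in a few lines of Python (the sample rate, SNR, bandwidth and margin figures are the ones quoted in the text; the function names are illustrative):

```python
import math

def noise_density_dbfs_hz(snr_db, sample_rate_hz):
    """Normalize a converter's SNR to a per-hertz noise density,
    assuming the noise is spread uniformly across the Nyquist
    bandwidth (sample_rate / 2)."""
    return -(snr_db + 10 * math.log10(sample_rate_hz / 2))

def channel_noise_floor_db(n0_dbfs_hz, channel_bw_hz):
    """Integrate the noise density over the channel bandwidth."""
    return n0_dbfs_hz + 10 * math.log10(channel_bw_hz)

n0 = noise_density_dbfs_hz(75, 65e6)            # ~ -150 dBFS/Hz
is136_floor = channel_noise_floor_db(n0, 30e3)  # ~ -105 dB
gsm_floor = channel_noise_floor_db(n0, 200e3)   # ~ -97 dB

# Keep the PDC 10 dB below the converter's floor to limit the SNR
# degradation to roughly 0.5 dB.
is136_target = is136_floor - 10                 # ~ -115 dB
gsm_target = gsm_floor - 10                     # ~ -107 dB
```

Running the same arithmetic for other protocols only requires substituting the channel bandwidth.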
To assure that this level of performance is achieved within the PDC, the oscillator/mixer (NCO) portion of the circuit, which sits at the front end of the PDC and performs the tuning and quadrature split (separation into I and Q components) prior to filtering, must have an adequate noise floor, good overall spurious performance and sufficient tuning resolution to maintain signal integrity. Basically, this is because spurs generated by the local oscillator are convolved with the input signals through the mixing process.
For example, if there were two input signals at 90-dB separation, spurs at -100 dB generated by the NCO would convolve with the larger signal to produce images only 10 dB below the smaller signal. Whether or not these images could be attenuated by the filtering process depends in large part on the location of the image itself, which is largely dependent on the overall frequency plan of the system.
The overall tuning resolution of the NCO, defined by the number of bits in its phase accumulator, is also important because it determines the center point of the bandwidth being processed. Errors in tuning accuracy can result in truncation of the signal being processed, which is obviously undesirable. Once the tuning and mixing functions have been completed, the quadrature data has been properly conditioned for the decimation and filtering process.
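For a conventional phase-accumulator NCO, the tuning step is simply the clock rate divided by two raised to the accumulator width. A short sketch (the 32-bit width here is an assumed, typical figure, not one from the text):

```python
def nco_tuning_resolution_hz(clock_hz, accumulator_bits):
    """Frequency step of a phase-accumulator NCO: f_clk / 2**N."""
    return clock_hz / (1 << accumulator_bits)

# A 32-bit accumulator clocked at 65 MHz tunes in steps of about
# 0.015 Hz, far finer than any cellular channel raster requires.
step = nco_tuning_resolution_hz(65e6, 32)
```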
During channelization, the process of decimating the band of interest and filtering out all unwanted signals, the job of filtering is usually divided into multiple stages, at least two, to allow for a more efficient filter implementation. While many filter types exist, three general classifications are most commonly used to do the job: the cascaded integrator-comb (CIC), the halfband filter and the finite impulse response (FIR) filter. Each has its own strengths and weaknesses that can be exploited or compensated for when they are used in tandem.
Basically, the CIC represents an efficient filter implementation characterized by a gradual transition-band rolloff (defined by the number of stages used), excellent attenuation of aliases into the band of interest and a relatively flat passband over a limited range. The downside is that its output must be oversampled, typically by at least eight times, to keep the bandwidth of interest in the flattest portion of the passband and to maximize the alias-attenuation characteristics of the filter. It is usually implemented as a fixed-coefficient filter, with programmability provided only in the number of stages and the amount of decimation. For these reasons, the CIC is normally the first filter in the chain: it provides an excellent method not only of decimating the bandwidth of interest, but also of attenuating out-of-band signals that would otherwise fold back in because of Nyquist sampling.
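As a sketch of how such a filter behaves, here is a minimal bit-exact software model of an N-stage CIC decimator (the stage count and decimation ratio are illustrative defaults, not figures from the text):

```python
def cic_decimate(samples, stages=3, decimation=8, diff_delay=1):
    """Minimal model of an N-stage CIC decimating filter.

    Integrator stages run at the input rate, the stream is decimated,
    and comb (differencing) stages run at the reduced rate.  Integer
    arithmetic only, as in hardware; the DC gain is
    (decimation * diff_delay) ** stages.
    """
    integrators = [0] * stages
    decimated = []
    for n, x in enumerate(samples):
        v = x
        for i in range(stages):
            integrators[i] += v
            v = integrators[i]
        if n % decimation == decimation - 1:
            decimated.append(v)

    # Comb section at the decimated rate.
    delays = [[0] * diff_delay for _ in range(stages)]
    out = []
    for v in decimated:
        for d in delays:
            oldest = d.pop(0)
            d.append(v)
            v -= oldest
        out.append(v)
    return out
```

A constant input settles to the filter's DC gain; with the defaults above, `cic_decimate([1] * 64)` settles to 8**3 = 512, which is why hardware implementations must also scale the growing internal bit width back down.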
The halfband filter is also typically implemented with fixed coefficients; however, it represents a less efficient implementation than the CIC. Its frequency response is characterized by a very flat passband. Its output must also be oversampled, typically by a minimum factor of four, and the normal implementation calls for a decimation ratio of two.
The FIR is the most flexible filter implementation. It's fully programmable, has a low oversample ratio and typically supports integer decimation factors. Although less efficient than a CIC because it needs more taps, it can support gain needs and, given enough available taps, can provide the user with steep transition bands and good out-of-band attenuation. An important specification for the FIR filter is the coefficient width (or the number of bits in the coefficient) available as it will define its out-of-band attenuation performance.
A good rule of thumb to use when estimating the out-of-band attenuation for the FIR is approximately 5 dB of attenuation per bit width in the coefficients. The deviation from the classical 6.02 dB per bit is due to quantization effects and zero placement in the filter implementation.
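That rule of thumb reduces to a one-line estimate (the helper name is ours):

```python
def fir_stopband_estimate_db(coefficient_bits):
    """Rule of thumb: roughly 5 dB of stopband attenuation per
    coefficient bit, versus the ideal 6.02 dB/bit, degraded by
    quantization effects and zero placement in the implementation."""
    return 5 * coefficient_bits

# 16-bit coefficients give roughly 80 dB of out-of-band rejection;
# 20-bit coefficients stretch that to roughly 100 dB.
```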
By selecting one or more of these filter types, the user can exploit each individual filter's characteristics to create the desired composite frequency response necessary to meet the needs for the application.
The filtering just described provides an efficient method of performing the decimation and overall channelization process. Through tuning, mixing and filtering, what was originally a large signal bandwidth consisting of multiple channels of information has been reduced to one specific channel, decimated to baseband. However, before passing the data on to the baseband processor, it is often necessary to implement certain post-processing functions that add value to the overall system by offloading the processor itself. The first of these functions is automatic gain control (AGC).
Up to this point in the processing chain, a large amount of dynamic range, typically 20-plus bits, has been carried through the downconverter. The addition of gain-control circuitry in the PDC allows the user to make full use of the dynamic range carried through the filter sections. In cases where 16-bit baseband processors are used, the presence of AGC in the downconverter circuit assures that the received signal always remains within the available dynamic range.
For optimal performance in burst systems, gain control should be performed at the beginning of the burst to optimize tracking control and assure data integrity. The amount of AGC needed is typically small, on the order of 10 dB to 30 dB, and in cases where the output data rate is a multiple of the bit rate, the amount of slew necessary per bit is also small.
With the AGC range on the downconverter programmed to 30 dB and a slew rate of 0.75 dB/sample, the effect of the additional gain control can clearly be seen both in the time domain, at the 16-bit level on the output of the device, and in the frequency domain.
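With those figures, the settling behavior is easy to bound (a sketch; the helper is illustrative):

```python
import math

def agc_settling_samples(range_db, slew_db_per_sample):
    """Worst-case number of output samples for the AGC loop to
    traverse its full range at a fixed per-sample slew rate."""
    return math.ceil(range_db / slew_db_per_sample)

# A 30-dB range at 0.75 dB/sample settles within 40 output samples,
# a small fraction of a typical burst preamble.
settle = agc_settling_samples(30, 0.75)
```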
It is also important to note that in wideband systems this type of gain control cannot be done at RF, because the RF circuitry is processing multiple channels of data. Since the filtering in the PDC has eliminated all out-of-band signals, the gain control performed by the circuit acts on the channel of interest only, assuring that the appropriate amount of gain is applied to maximize efficiency.
Another important function provided by a PDC is resampling. For basestation applications in which multiple transmission protocols are processed by one system, a resampler circuit gives the basestation designer an invaluable tool for simplifying overall system design. The resampler allows the output sample rate to have a non-integer relationship to the input sample clock and, in the process, allows a common sample clock to be used when receiving multiple signal types.
For example, the IS95 CDMA system requires precise timing related to the 1.2288-MHz chip rate (slaved to the GPS system). To receive an IS136 TDMA signal with the same A/D converter but without a resampler, a clock at a common multiple of both system data rates would be needed (1.2288 MHz for CDMA and 24.3 kHz for TDMA), which one quickly finds to be in the neighborhood of 400 MHz, assuming the output data rate in CDMA is four times the chip rate to assist in the correlation process. The situation only gets worse if one wants to process GSM signals in the same system: the common clock frequency climbs well into the hundreds of gigahertz.
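The roughly 400-MHz figure can be checked directly (a sketch using integer rates in hertz; the helper name is ours):

```python
from math import gcd

def common_clock_hz(*rates_hz):
    """Lowest clock frequency that is an integer multiple of every
    given output rate (least common multiple)."""
    lcm = 1
    for r in rates_hz:
        lcm = lcm * r // gcd(lcm, r)
    return lcm

# Four times the 1.2288-MHz IS-95 chip rate versus the 24.3-kHz
# IS-136 symbol rate:
clock = common_clock_hz(4 * 1228800, 24300)   # 398,131,200 Hz
```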
Conceptually, a resampler circuit can be thought of as a fixed interpolation filter followed by an NCO-controlled decimator. The interpolation process creates images of the original signal at fixed integer multiples of the original data rate, and the NCO-controlled decimation process allows for non-integer decimation to adjust the data rate back down.
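A toy model of that interpolate-then-decimate idea, with linear interpolation standing in for the fixed interpolation filter and a phase accumulator playing the role of the NCO (illustrative only, not a production design):

```python
def fractional_resample(samples, ratio):
    """Step a phase accumulator through the input at a non-integer
    ratio of input samples per output sample; a ratio greater than 1
    lowers the output rate, less than 1 raises it.  Linear
    interpolation reconstructs values between input samples."""
    out = []
    phase = 0.0
    while phase < len(samples) - 1:
        i = int(phase)
        frac = phase - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        phase += ratio
    return out
```

A real resampler replaces the linear interpolator with a proper polyphase filter to keep interpolation images suppressed, but the rate arithmetic is the same.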
With emphasis being placed on multistandard basestations because of legacy concerns, the presence of a resampler circuit enables basestation designers to run their systems at the highest data rate required by the protocols being considered and use the PDC to adjust data rates down as needed to meet protocol-specific needs.
As momentum builds for software radio technology, semiconductor makers will continue to offer downconverter technology enhancements in the form of increased functionality and density. The adoption of new global standards will also drive manufacturers to provide solutions that can process wider bandwidths while offering similar overall functionality and improved performance.