Wireless 3G is finally seeing success. Now that the teething troubles have been resolved, attractive handsets with acceptable battery life are available and customers appreciate the new services, volumes are starting to rise.
The economics of 3G are compelling. It delivers more capacity and more spectrum (or, at least, it does in most parts of the world); its cost is much lower than 2G's (a WCDMA voice channel costs roughly half the price of a GSM one); and revenue per user is much higher (at least 20 percent higher on a like-for-like basis, according to DoCoMo). Given that, and the fact that mobile operators (particularly those in Europe who paid millions for 3G licences) are under massive pressure to start services as soon as possible, the obvious question is: why are things still moving slowly?
One reason, which shouldn't be underestimated, is the complexity of the technology, and the huge resources and commitment required for a launch. But another reason is the pace of change. 3G standards are continuing to evolve, and with each upgrade, operators will look for the opportunity to differentiate their service offering through incremental performance improvements. Worryingly for operators, the rate of change and the need for upgrading will be much higher than in the GSM era. Of these upgrades, probably the highest-profile is HSDPA in Release 5 of WCDMA, which enables 14Mbit/s downloads: commercially significant, but technically difficult. Indeed, it is so difficult that it appears few basestations available today will be able to support it without a "fork-lift" upgrade. The dilemma for a carrier is clear: deploy now, and face the inevitable need to replace systems in a year to get the new feature; or wait for the capability, and risk a competitor stealing the market with a Release 4 product and "selling the roadmap".
Another, related issue is the plethora of standards and different options: all delivering wireless information to customers, but all in slightly different ways. Fixed or mobile, capacity versus coverage, voice or data: all have their attractions. While historically most places have been "mono-lingual" (GSM, more GSM, and then GSM), options today might include GSM, WCDMA FDD, TDD (for data), WiMax/802.16 and the like. This proliferation reflects changes in both technology and business model, with operators looking to implement fixed wireless because it leverages their asset of cell sites, which become more valuable as they get harder to get approval for. And there are even more options when you consider indoor applications such as wireless picocells (which are starting to be deployed: DoCoMo plans 4,000 3G hotspots by summer 2004) and WiFi.
However, a newer trend is emerging: a multi-modal structure, providing appropriate network access using multiple different network technologies. This has big implications for network builders and their equipment suppliers.
This is why software-defined basestations are becoming an attractive option: merely by loading different code, you turn the same hardware into a "different" basestation, or one platform can support several standards at the same time. This addresses a number of worries: it removes the fear of obsolescence, it enables carriers to upgrade seamlessly to newer versions of a standard (e.g. HSDPA), and it offers an attractive way of supporting multiple air-interfaces ("the multi-lingual basestation").
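To make the idea concrete, here is a minimal sketch in C of what "loading different code" might look like: one main loop runs whichever air-interface personality is selected, with the hardware unchanged. Every name here (waveform_ops, wcdma_init and so on) is invented for illustration; it is not any vendor's actual API.

```c
/* Minimal sketch of the "multi-lingual basestation" idea: the same
 * hardware runs whichever air-interface personality is loaded.
 * All names are hypothetical and illustrative only. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;             /* air-interface identifier             */
    int  (*init)(void);           /* bring up baseband for this standard  */
    void (*process_frame)(void);  /* per-frame baseband processing        */
} waveform_ops;

static int  gsm_init(void)    { puts("GSM baseband up");   return 0; }
static void gsm_frame(void)   { puts("GSM: process burst");          }
static int  wcdma_init(void)  { puts("WCDMA baseband up"); return 0; }
static void wcdma_frame(void) { puts("WCDMA: despread, decode");     }

/* One platform, several "personalities": select by loading code,
 * not by swapping hardware. */
static const waveform_ops waveforms[] = {
    { "gsm",   gsm_init,   gsm_frame   },
    { "wcdma", wcdma_init, wcdma_frame },
};

int main(int argc, char **argv)
{
    const char *wanted = (argc > 1) ? argv[1] : "wcdma";
    for (size_t i = 0; i < sizeof waveforms / sizeof *waveforms; i++) {
        if (strcmp(waveforms[i].name, wanted) == 0) {
            waveforms[i].init();
            waveforms[i].process_frame();
            return 0;
        }
    }
    fprintf(stderr, "unknown waveform: %s\n", wanted);
    return 1;
}
```

In a real product the table lookup would be a code download into the baseband processors, but the principle is the same: the standard becomes a run-time choice rather than a manufacturing one.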
China, which is likely to have all three 3G standards (WCDMA, cdma2000 and the locally developed TD-SCDMA, not to mention 2G and "Little Smart" PHS), offers a complex market in which to operate, and one where the multi-lingual model would be attractive. A single basestation could then support integrated, seamless roaming between these standards: perhaps TD-SCDMA for data-rich applications in towns, WCDMA for voice and multimedia, GSM for legacy voice customers, WiFi for hotspot coverage and even WiMax for backhaul transport.
This would obviously be attractive for the carrier, enabling more services and more revenue from one system while avoiding headaches over upgrade paths. However, the same logic also makes it attractive to OEMs: by developing a common hardware platform and loading it with software for each protocol, the addressable market is increased while return on investment (RoI) is dramatically improved. This is analogous to some car companies' philosophy: once you have designed a "platform", you can easily sell flavoured versions (Skoda, VW and Audi share a common basis, but appear as very distinct products for different markets). Spreading costs across different products improves leverage, and doing it with software is an even more attractive option.
More philosophically, this might just put an end to standards feuding: with a "multi-lingual" basestation, it matters much less which standards are selected in a particular country, as the basestation can support all of them transparently.
Maximizing the number of software-defined components and modules is the key to ensuring maximum platform re-use and cost-effectiveness. In practice, this means the baseband portion is programmable, driving dedicated RF blocks. With digital tuners and wideband converters, the technology exists for versatile IF and mixed-signal stages, but whether this is optimal depends on the system architecture (for example, it may be most efficient to route baseband digital IQ to dedicated radio cards, where the mixed-signal, IF and RF stages are integrated).
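A sketch of that partitioning, again with invented names and figures: the programmable baseband is the only part that changes with the standard, while each radio card is described by the fixed band and sector it serves and the digital IQ link feeding it.

```c
/* Sketch of the partitioning described above (all names invented):
 * one programmable baseband drives several fixed-function radio
 * cards; digital IQ is routed to the card for each sector. */
#include <stdio.h>

typedef struct {
    const char *band;      /* RF stage is "frozen": one band, one application */
    int sector;            /* which antenna sector this card serves           */
    int iq_link;           /* digital IQ link carrying baseband samples       */
} rf_card;

typedef struct {
    const char *waveform;  /* loaded in software: GSM, WCDMA, ...             */
    rf_card radios[3];     /* dedicated mixed-signal / IF / RF cards          */
} basestation;

int main(void)
{
    /* Reprogramming the baseband changes the standard; the RF cards
     * stay put because they are band- and application-specific. */
    basestation bs = {
        .waveform = "WCDMA",
        .radios = { { "2100MHz", 0, 0 },
                    { "2100MHz", 1, 1 },
                    { "2100MHz", 2, 2 } },
    };
    for (int i = 0; i < 3; i++)
        printf("%s IQ -> sector %d (%s card, link %d)\n",
               bs.waveform, bs.radios[i].sector,
               bs.radios[i].band, bs.radios[i].iq_link);
    return 0;
}
```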
But for technical, regulatory and deployment reasons (cell planning, and the number of sectors, antennas and carriers), the RF portions and power amplifiers differ between standards, and for the foreseeable future they will remain frequency-band-specific and application-specific; that is, one RF stage is "frozen" for one application. Even multi-mode radios for the military Joint Tactical Radio System (JTRS) project, where cost is not a significant constraint, have used multiple RF subsystems driven by a software-defined baseband.
Because different standards have different implications, a multi-mode system is likely to have a somewhat different system design from a traditional approach. Architectures like ATCA, designed for a variety of carrier-class, high-throughput applications, might be well suited, especially as the ability to use standard mechanicals should further reduce cost. While OBSAI (the Open Base Station Architecture Initiative) has tried to address the re-usable platform problem, it is aligned to one particular approach, and in other cases may even add cost rather than reduce it.
Similarly, traditional DSP implementations are clearly "software-defined", but do not deliver sufficient processing power to support this approach. Some architectures use a hybrid approach, perhaps with a mix of DSP and FPGA. However, the mix of task types differs from protocol to protocol (for example, the need for beam-forming and joint detection in TD-SCDMA requires additional chip-rate resources, while the addition of HSDPA to WCDMA makes the scheduler and MAC more complex). Heterogeneous technologies will be inefficient in this case, because you would need to provision for the worst-case loading in each area independently, all of the time, even though no single application requires this worst/worst provision. Furthermore, the hybrid nature compounds the functional-partitioning headache, co-simulation and so on. What is needed is a seamlessly scalable, more granular, all-software approach than FPGA-plus-DSP, for cost-efficient designs and a short time-to-market.
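The worst/worst argument is simple arithmetic, illustrated below with invented load figures (in arbitrary "processing units"): a fixed DSP-plus-FPGA split must be sized for the worst chip-rate load and the worst symbol-rate load independently, while a homogeneous pool need only be sized for the worst single protocol, because resources can move between tasks when the standard changes.

```c
/* Illustrative arithmetic (invented numbers): provisioning a fixed
 * DSP + FPGA split versus a homogeneous software-defined pool. */
#include <stdio.h>

typedef struct { const char *protocol; int chip_rate; int symbol_rate; } load;

int main(void)
{
    /* Hypothetical per-protocol loads: TD-SCDMA is chip-rate heavy
     * (joint detection, beam-forming); HSDPA is symbol-rate/MAC heavy. */
    load loads[] = { { "TD-SCDMA",    80, 30 },
                     { "WCDMA+HSDPA", 50, 70 } };

    int worst_chip = 0, worst_symbol = 0, worst_total = 0;
    for (int i = 0; i < 2; i++) {
        if (loads[i].chip_rate   > worst_chip)   worst_chip   = loads[i].chip_rate;
        if (loads[i].symbol_rate > worst_symbol) worst_symbol = loads[i].symbol_rate;
        int total = loads[i].chip_rate + loads[i].symbol_rate;
        if (total > worst_total) worst_total = total;
    }

    /* Fixed partition: each block sized for its own worst case. */
    printf("heterogeneous (FPGA chip-rate + DSP symbol-rate): %d units\n",
           worst_chip + worst_symbol);                   /* 80 + 70 = 150 */

    /* Homogeneous pool: sized for the worst single protocol only. */
    printf("homogeneous software pool: %d units\n", worst_total);  /* 120 */
    return 0;
}
```

With these (made-up) figures the fixed split needs 150 units against 120 for the pool, and the gap widens as more protocols with different task mixes are added.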
Instead, processors with an order of magnitude more performance are necessary, so that a single environment can address the variety of tasks, and move resources between them as and when required. If the cost is to remain acceptable, this actually requires an order of magnitude improvement in price-performance. Fortunately, such devices do exist, capable of delivering 40GMACs for the price of a legacy DSP.
Of course, while the transition to software may have significant advantages, it is no panacea; far from it. One still has to implement the time-consuming aspects of baseband processing and control for each standard, and these require complex test and verification. If development time and statistical test are not to "balloon" unacceptably, the baseband processor architecture must be designed for ease of development, verification and test. The multi-lingual baseband processor must address these crucial factors, and they must be considered during the conception of the development environment. An efficient solution is born of the right compromise between the granularity of the processing elements, the ease of programming those elements, and the ease of partitioning the desired signal-processing architectures across them.
Software-defined implementations are now available for a number of commercial air-interfaces (in addition to the military waveforms), with WCDMA, GSM, WiMax, WiFi and the like all supported.
The China situation has already been mentioned, but one should also consider domestic opportunities. Given the success of wireless data, and the way that regulation has permitted a variety of protocols, multi-lingual basestations are likely to be most valuable in the land of the melting pot: AMPS (analog), GSM, CDMA, TDMA (legacy), iDEN, WCDMA and cdma2000 today, and then into the future with TDD, WiMax, WiFi and so on.
Rupert Baines is vice president of marketing at picoChip (Bath, UK)