With the introduction of LTE Advanced (Release 10), smartphones have become far more functional, but also far more complex. Consumers demand ever more sophisticated products, putting pressure on design teams. “Trying to summarize LTE in one single challenge belies its complexity,” says Moray Rumney, lead technologist, Technical Leadership Organization at Agilent Technologies, and editor of the book. He sits on the 3GPP radio access network (RAN) working group 4 (WG4), which is responsible for developing the air interface standard for HSPA+ and LTE-Advanced. “In that sense the single biggest challenge is deciding from the nearly infinite number of combinations of features what to actually design.” To streamline product development cycles and help design engineers deliver the most effective product to market, Agilent Technologies Inc. has updated its 2009 text LTE and the Evolution to 4G Wireless: Design and Measurement Challenges, adding over 180 pages of material on carrier aggregation, over-the-air (OTA) testing, and non-signaling test methods aimed at manufacturing. Read our second excerpt from the text below, courtesy of Agilent Technologies and John Wiley & Sons. You can view the table of contents and part one of the excerpt here, and access the book page here; for a 20% discount off cover price, enter code VBD11 at checkout.
There was a time when testing via a direct connection could provide an effective assessment of device performance. No longer. Multi-band devices require over-the-air (OTA) testing in order to ensure they can operate effectively. “The demand for more frequency bands and higher orders of multi-input multi-output (MIMO) devices will significantly complicate antenna design and until we get MIMO antenna tests defined and performance requirements defined, designers will have nothing to aim at and we will not know how bad the problem is,” Rumney says. The excerpt below focuses on over-the-air testing techniques, reviewing the history, the motivation, and the development of the standard.
6.10 SISO and MIMO Over-the-Air Testing

6.10.1 Introduction

Traditionally, most testing of mobile devices has been done through directly cabled (galvanic) connections to the device’s temporary antenna connectors, which are the ports used for conformance testing. This method is appropriate for the vast majority of performance tests since it is convenient and is not susceptible to radiated noise or interference in the test environment. The downside of this type of conducted testing, however, is that it ignores the performance of the device’s antennas, and any fault or performance problem related to the antenna design or manufacture goes unnoticed.
Earlier generation mobile phones, operating perhaps in only one band with a traditional pull-out “whip” quarter-wavelength antenna, had intrinsically good antenna performance, such that conducted measurements of parameters such as maximum output power and reference sensitivity were good indicators of how the device performed “over the air” (OTA). But with the introduction of multiband devices, the continued desire to reduce device size, and more recently the introduction of MIMO technology, the pressure on antenna design has increased to the point where it is no longer safe to assume that conducted measurements, which bypass the antennas, are a good indicator of radiated OTA performance.
The first OTA tests were standardized for single input single output (SISO) devices by CTIA in October 2001. Work to define OTA tests for multiple input multiple output (MIMO) devices started around 2007. At the time of this writing, MIMO OTA standardization activities have not yet completed, so this section provides an interim summary of the status of this important work.

6.10.2 SISO OTA Overview

The first SISO OTA test specifications were published by CTIA in the “CTIA ERP Test Plan for Mobile Station Over the Air Performance” in October 2001. These tests defined two metrics for the device: total radiated power (TRP) and total isotropic sensitivity (TIS). TRP is defined as the integral of the power transmitted in different directions over the entire radiation sphere. TIS is a similar measure, but it represents the reference sensitivity of the DUT receiver averaged over the same sphere. The first CTIA specification defined a test procedure for measuring TRP and TIS inside an anechoic chamber. Most of the work in defining the test procedure related to the calibration of the test system, whose uncertainty was calculated, by means of a very detailed error model of some 20 terms, to be around ±2 dB. This uncertainty figure was subsequently confirmed by a substantial measurement campaign during which reference devices were circulated among many laboratories. It is important to note that the measurement uncertainty obtained by the OTA test procedure is not as good as can be obtained by conducted test methods, but the advantage of including the antenna in the overall DUT performance far outweighs this slight loss in accuracy.
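The definition of TRP as an integral over the radiation sphere can be sketched numerically: EIRP samples taken on a grid of polar (θ) and azimuth (φ) angles are weighted by sin θ, the spherical surface-area element, and summed. This is a minimal illustration of the underlying mathematics, not the CTIA test procedure itself; the function name and grid layout are assumptions for the example.

```python
import numpy as np

def total_radiated_power(eirp_w, theta):
    """Approximate TRP by integrating EIRP samples over the full sphere.

    eirp_w : (n_theta, n_phi) array of EIRP samples in watts
    theta  : polar angles in radians, one per row of the grid
    """
    n_theta, n_phi = eirp_w.shape
    d_theta = np.pi / n_theta        # grid spacing in theta
    d_phi = 2.0 * np.pi / n_phi      # grid spacing in phi
    # sin(theta) weights each sample by its share of the sphere's surface
    weighted = eirp_w * np.sin(theta)[:, None]
    return weighted.sum() * d_theta * d_phi / (4.0 * np.pi)

# Sanity check: an ideal isotropic radiator with 1 W EIRP in every
# direction should integrate to a TRP of 1 W.
n_t, n_p = 36, 72
theta = (np.arange(n_t) + 0.5) * np.pi / n_t   # midpoint sampling in theta
trp = total_radiated_power(np.ones((n_t, n_p)), theta)
```

A real measurement campaign replaces the synthetic grid with EIRP values captured at each positioner angle, typically for both θ and φ polarizations of the probe antenna.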
It is worth noting that the measurement uncertainty is defined only for a small test volume within the chamber known as the “quiet zone.” The size of the quiet zone scales with the size of the anechoic chamber and inversely with the test frequency, with most chambers aiming to keep a distance of three meters between the DUT and the probe antenna.
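The roughly three-meter range length is consistent with the classical far-field (Fraunhofer) criterion d ≥ 2D²/λ, a standard antenna-measurement rule of thumb that the excerpt does not spell out; the aperture and frequency values below are illustrative assumptions, not figures from the text.

```python
def far_field_distance(aperture_m, freq_hz):
    """Classical far-field criterion: d >= 2 * D^2 / wavelength."""
    c = 3.0e8                        # speed of light, m/s
    wavelength = c / freq_hz
    return 2.0 * aperture_m ** 2 / wavelength

# Example: a ~0.3 m effective aperture (handset plus head phantom)
# at 2 GHz gives a minimum far-field distance of 1.2 m, comfortably
# inside a 3 m DUT-to-probe separation.
d_min = far_field_distance(0.3, 2.0e9)
```

Because the required distance grows with the square of the radiating aperture, larger test objects or higher frequencies quickly push the needed separation up, which is why chamber size bounds the usable quiet zone.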
Once standard metrics and a test procedure with bounded uncertainty had been established, it was possible for device vendors and network operators to independently measure legacy and new devices in order to compare radiated performance. In the early days of testing, significant differences in device performance were uncovered that were attributed to the free-space antenna performance. In addition, tests were defined that included the loading effects of a head “phantom,” intended to emulate the electrical properties of a human head. These tests uncovered further performance differences between devices that were attributable to antenna design. CTIA did not set specific performance requirements, but it did enable the industry to measure SISO OTA performance on a comparable basis. The results were subsequently used by operators, who could then set their own requirements as part of device acceptance testing (see Section 7.5).