System-on-chip (SoC) testing presents unprecedented challenges that demand a fundamental change in thinking from both IC manufacturers and tester makers. Gigahertz operating speeds of digital logic, along with the associated physics of transmission lines and signal reflections, are by themselves enough to require new test methods and equipment. The use of new materials (copper, low-k intermetal dielectrics and high-k gate dielectrics) compounds the problem by introducing new defect mechanisms.
In SoCs, these difficulties are multiplied by the integration of many types of blocks: microcontrollers; SRAMs, DRAMs, content-addressable memories and flash memories; PLLs and DLLs; A/D, D/A and sigma-delta converters; voltage regulators and power amplifiers; bus functional controllers such as PCI and USB controllers; high-speed interfaces such as serial links and serdes; RF generators and receivers; and high-speed and differential I/Os. Each of these blocks requires a unique test method and specialized test equipment, as well as a test engineer familiar with the intricacies of the test method that applies to that specific block.
In the past, there was a clear-cut distinction in IC type and, consequently, in test equipment. From test engineers to Wall Street analysts, everyone categorized test equipment the same way: (1) logic testers; (2) memory testers; (3) mixed-signal testers. But SoC testing requires the functionality of all these testers, plus more. At a minimum, all this functionality raises the price of the tester until it becomes the dominant factor in SoC test cost. Traditional logic or mixed-signal testers simply fall short for SoC testing, and the extensive time overhead, material-handling logistics and direct hit to factory throughput rule out multiple insertions on multiple testers for a single SoC.
See related chart
IC makers purchase a tester with a 10- to 15-year lifetime in mind and expect to test a wide variety of ICs with it, so both the hardware and software components of the tester must work over a wide range of operations. SoC testing also puts extensive functionality demands on the tester. Thus, SoC testers contain myriad resources that go unused most of the time. These idle resources are pure overhead from an IC manufacturer's point of view. At the same time, a tester still may not have the resources best suited to a given SoC. For example, a tester equipped to test a complex SoC containing an embedded microcontroller, large embedded DRAM and various other cores (such as D/A, PCI and USB) may be inadequate for another SoC that has embedded flash rather than embedded DRAM, and sigma-delta converters and an RF block rather than a D/A converter.
The primary reason for these difficulties in SoC testing is the specialized and fixed tester architecture. Today, each tester manufacturer offers a number of platforms in which all hardware and software remain in a fixed configuration. This leads to cross-platform incompatibility that restricts the development of third-party solutions. The fixed configuration also requires dedicated test programs that use specific tester capabilities to define test data, signals, waveforms, current and voltage levels and so on. Because a test program depends on specific tester resources, it too is fixed and cannot be reused or ported to a different tester without reengineering.
To a very large extent, these difficulties can be addressed if the tester architecture is modularized and made reconfigurable as needed. The basic idea behind the Open Architecture test system is to provide such modularization, with a specific focus on the use of third-party modules and test instruments. The Semiconductor Test Consortium (STC) has specified the Openstar hardware and software tester framework and standardized its interfaces so that modules from different vendors can be used in plug-and-play fashion to achieve the desired functionality for a given SoC. The general structure of a hardware module is shown in the figure; it can be any functional unit, such as a digital pin card, an analog card, a device power supply, or an instrument such as a waveform generator. The physical connection to a module is made through a system bus that is either a dedicated Openstar bus or an optional PXI bus.
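The plug-and-play idea can be illustrated with a minimal sketch. This is not the actual Openstar programming interface; the class and method names below are assumptions chosen for illustration only.

```python
# Hypothetical sketch of vendor-neutral module registration.
# Names (Module, TestSite, plug) are illustrative, not the Openstar API.

class Module:
    """Any functional unit: digital pin card, analog card, DPS, waveform generator."""
    def __init__(self, kind, vendor):
        self.kind = kind
        self.vendor = vendor

class TestSite:
    """A test site: a collection of modules attached via the system bus."""
    def __init__(self):
        self.modules = []

    def plug(self, module):
        # A standardized interface means any conforming vendor module attaches here.
        self.modules.append(module)

site = TestSite()
site.plug(Module("digital_pin", "vendorA"))  # one vendor's pin card
site.plug(Module("awg", "vendorB"))          # another vendor's instrument
print([m.kind for m in site.modules])        # → ['digital_pin', 'awg']
```

The point of the sketch is that the site never depends on which vendor supplied a module, only on the standardized interface it conforms to.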
Based on the modules making up either a single test site or multiple test sites (to support parallel device-under-test, or DUT, testing), various system configurations can be developed, such as:
- Homogeneous system. Each site is identical and is composed of identical modules.
- System with compatible sites. Each site may contain different modules; however, if the module compositions/partitions within a site are the same as at other sites, the sites are compatible.
- Heterogeneous system. When at least two sites differ in their internal module compositions/partitions, the system is heterogeneous.
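The three categories above amount to a simple classification rule, which the following sketch makes concrete. The data representation and function name are assumptions for illustration; they are not part of the Openstar specification.

```python
from collections import Counter

# Hypothetical representation: a test site is a list of (module_type, vendor)
# tuples. The classification logic mirrors the three categories in the text.

def classify_system(sites):
    """Classify a multi-site tester configuration."""
    if all(site == sites[0] for site in sites):
        return "homogeneous"    # every site is built from identical modules
    # Compare only module compositions (counts by type), ignoring vendor.
    compositions = [Counter(mtype for mtype, _ in site) for site in sites]
    if all(comp == compositions[0] for comp in compositions):
        return "compatible"     # same composition, but different modules
    return "heterogeneous"      # at least two sites differ in composition

# Example: two sites with the same composition but different pin-card vendors.
site_a = [("digital_pin", "vendorX"), ("dps", "vendorY")]
site_b = [("digital_pin", "vendorZ"), ("dps", "vendorY")]
print(classify_system([site_a, site_b]))  # → compatible
```

Two copies of `site_a` would classify as homogeneous, while dropping the pin card from one site would make the system heterogeneous.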
At the International Test Conference earlier this month, Advantest announced a new tester, the T2000, based on the Openstar platform. Up to 64 optional modules can be placed in any location within its testhead, and the tester can be configured into one to eight test sites. Because this structure allows reconfiguration and plug-and-play module exchange, it minimizes idle tester resources while still achieving the optimal test configuration for a given SoC.
Rochit Rajsuman is chief scientist at the Advantest America R&D Center (Santa Clara, Calif.).