As system-on-chip (SoC) devices are called upon to perform more computation-intensive functions, the amount of data, and the speed with which that data must be moved to and from peripheral devices, are increasing rapidly and pushing the uppermost bounds of performance. To mitigate these performance limitations, buses have traditionally been made wider and faster. Most PC buses, for example, are 64 bits wide, transfer multiple data bits per clock cycle and operate with clocks up to 200 MHz to support data rates up to 6.4 Gbytes/second. But these bus architectures are rapidly approaching their limits, as minimizing the skew between bus lines and the clock becomes more difficult in mass-produced printed-circuit boards and cables.
One solution lies in source-synchronous operation, where the device produces its own clock that travels in parallel with the data. In this way, the clock suffers the same delay and drift as the data, enabling the data to be reliably clocked into the receiving device. HyperTransport and DDR memory buses use this approach. Above about 1.5 Gbits/s per channel, however, even the small path-length differences of going to a different device pin can cause problems. At these speeds, the datacom technique of embedding the clock directly into the data protocol becomes the preferred solution. The new PCI Express and Serial ATA standards employ such embedded-clock protocols.
A wide range of devices including graphics chips, chip sets, hard-disk-drive controllers, and DSPs are now starting to use data ports that involve either source-synchronous or embedded clocks. The widespread use of these techniques has profound test implications for SoC devices using these new standards.
In a test environment the tester is usually the master, providing the clock that determines the timing of all other signals to and from the device under test (DUT). For source-synchronous devices, this leads to longer test times, since the tester first has to search for the device's output clock edges in order to align its own clock. The best solution is to use the device clock to latch the data into the tester, so that the tester is no longer the master of the test environment. This, however, requires a completely new design for the tester pin electronics and a tester architecture that can support multiple independent time domains.
The problem becomes even more difficult for embedded clocks, where a conventional tester pin electronics design is completely unsuitable. For these protocols the clock must be extracted from the incoming data, using clock data recovery (CDR) techniques, and then used to synchronize the data stream. This requires new instrumentation that can straddle the conventional divide between digital and analog and provide the means to perform datacom-style tests.
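To illustrate the principle behind clock recovery, consider the following minimal sketch (in Python, purely for illustration; the names and parameters are our own, and real CDR circuits use analog or digital phase-locked loops rather than software). It recovers the sampling phase of an idealized oversampled NRZ bit stream from its data transitions alone, then uses that phase to resample the data:

```python
import random

random.seed(0)
OSR = 8  # samples per bit (oversampling ratio)

def oversample(bits, phase):
    """Oversample an NRZ bit sequence, delayed by `phase` samples."""
    expanded = [b for bit in bits for b in [bit] * OSR]
    return [bits[0]] * phase + expanded

def recover(samples):
    """Recover the clock phase from data edges, then resample the data."""
    # Data transitions cluster at one phase modulo the oversampling ratio
    edges = [i for i in range(1, len(samples)) if samples[i] != samples[i - 1]]
    phase = max(range(OSR), key=lambda p: sum(e % OSR == p for e in edges))
    # Sample half a bit period after the recovered edge position
    return samples[phase + OSR // 2::OSR]

bits = [random.randint(0, 1) for _ in range(200)]
# The receiver does not know the delay; it must infer it from the edges
recovered = recover(oversample(bits, phase=3))
```

The point of the sketch is the one the article makes: with enough transitions in the data stream, the receiver needs no separate clock line at all.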
With the data rates for these new device ports ranging from 1 Gbit/s to over 10 Gbits/s, new parameters become critical to assuring reliable data delivery. The traditional digital test for adequate device timing margin is the setup-and-hold-time measurement, but since data timing is no longer relative to an external clock, this test is no longer relevant. The internal phase-locked-loop operation of clock-embedded devices is sensitive to jitter, however, so jitter tolerance is critical. Accordingly, test instruments must be able both to superimpose jitter on signals sent to the DUT and to measure the amount of jitter the device produces. Measured jitter has two components: deterministic jitter, which depends on the data sequence being transmitted, and random jitter, which has a Gaussian probability distribution and is essentially noise in the time domain. With the right measurement technique and mathematical capability, analysis performed after the jitter measurement can isolate the two components.
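As a rough illustration of how post-measurement analysis can separate the two components, the sketch below (Python; a deliberately simplified dual-Dirac model, with all names and numerical values our own assumptions, not a production algorithm) generates edge-timing samples with known deterministic and random jitter and recovers both from the combined distribution:

```python
import random
import statistics

random.seed(1)
DJ = 0.30  # peak-to-peak deterministic jitter, in unit intervals (UI)
RJ = 0.05  # rms random jitter, UI

# Dual-Dirac model: each edge lands at one of two deterministic
# positions, plus Gaussian random jitter
jitter = [random.choice((-DJ / 2, +DJ / 2)) + random.gauss(0, RJ)
          for _ in range(100_000)]

# Split the bimodal distribution at its mean; the distance between the
# sub-means estimates DJ, and their rms width estimates RJ
mid = statistics.mean(jitter)
low = [j for j in jitter if j < mid]
high = [j for j in jitter if j >= mid]
dj_estimate = statistics.mean(high) - statistics.mean(low)
rj_estimate = (statistics.stdev(low) + statistics.stdev(high)) / 2
```

Real separation techniques are considerably more sophisticated (tail fitting, spectral methods), but the principle is the same: the Gaussian and data-dependent contributions leave distinct signatures in the measured distribution.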
Sampling technology can be used to perform the jitter measurement, and suitable 10-GHz bandwidth sampling front-ends are available for that purpose. These samplers can be employed per channel within the embedded clock instruments at rates up to about 3.5 Gbits/s. At higher rates, samplers need to be located physically on the DUT board so measurements are not adversely affected by the interconnects to the tester.
Connections between the instruments and the DUT are critical at these high data rates. In most test environments these connections include one between the tester and the DUT board and another coupling the DUT board to the device under test. For the new bus standards, high-quality zero-insertion-force connections are suitable, but at 10 Gbits/s, maintaining good signal integrity between the test instrumentation and the DUT in blind-mate applications demands careful DUT board design and the use of exotic board materials.
While production test lists are still being defined, bathtub curves, which show bit error rate as a function of the sampling position within the bit interval, are being borrowed from the datacom test suite. The key datacom figure of merit is bit error rate, since in a communications system the only thing that matters is the transmission of error-free data. A direct bit-error-rate test, however, is impractical in a production environment because of the long test times involved. The bathtub curve, developed to predict bit error rate in a production-efficient manner, will be used to test these new interfaces along with jitter measurements and eye diagrams.
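As a sketch of why the bathtub curve is production-efficient, the fragment below (Python; the dual-Dirac jitter model and all parameter values are our own illustrative assumptions, not part of any standard's test list) computes the curve analytically and reads off the eye opening at a 10^-12 bit error rate, rather than waiting to count 10^12 bits:

```python
import math

RJ = 0.02  # rms random jitter, in unit intervals (UI)
DJ = 0.10  # peak-to-peak deterministic jitter, UI

def gauss_tail(x):
    """Probability that a standard Gaussian variate exceeds x."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber(t):
    """Bit error rate when sampling at position t (in UI) between the
    nominal edges at 0 and 1 UI, with dual-Dirac jitter on each edge."""
    left = (0.5 * gauss_tail((t - DJ / 2) / RJ)
            + 0.5 * gauss_tail((t + DJ / 2) / RJ))
    right = (0.5 * gauss_tail((1 - t - DJ / 2) / RJ)
             + 0.5 * gauss_tail((1 - t + DJ / 2) / RJ))
    return left + right

# Scan sampling positions; the eye opening is the span where BER < 1e-12
open_points = [t / 1000 for t in range(1001) if ber(t / 1000) < 1e-12]
eye_opening = open_points[-1] - open_points[0]
```

The walls of the bathtub rise steeply near the edges, which is what lets a short measurement extrapolate to very low error rates.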
Digital testing of SoC devices is being radically revised as a result of the incorporation of clock-embedded data ports. Today that is leading to the development of specific instruments that can extract the clock and data and perform datacom-style tests. In the future, as these ports become ubiquitous, this will likely lead to a change in the definition of what is considered a tester digital pin, as we have already seen with the incorporation of source-synchronous test capability.
Neil Kelly is chief technology officer at LTX Corp. (Westwood, Mass.).
See related chart