Recent surveys of working engineers in the electronics industry by the International Electronics Manufacturing Initiative (iNEMI) found that testing memory and memory buses on circuit boards is one of the most pressing problems for designers and manufacturers (www.inemi.org). Many factors have contributed to this problem.
First, test pads have disappeared from circuit boards. These pads enabled design debug with oscilloscopes and logic analyzers and, in manufacturing, supported bed-of-nails fixtures for in-circuit test (ICT), manufacturing defect analyzers and flying-probe systems. Second, placing any sort of test probe on a high-speed memory bus is restricted because of the capacitive signal distortion the probe creates. Third, memory bus protocols are becoming increasingly complex. And, finally, there are a host of other factors, including manufacturing process variations.
Fortunately, a number of non-intrusive board test (NBT), or probe-less, methods for testing memory can be deployed without relying on probes or test pads on circuit boards: boundary-scan test (BST), functional test, processor-based methodologies such as processor-controlled test (PCT), FPGA-based mechanisms such as FPGA-controlled test (FCT), and embedded memory built-in self-test (memory BIST). Each method has its advantages and disadvantages, and implementing any one of them involves tradeoffs. Deploying several together can deliver the debug and test coverage no longer possible with legacy methods. This three-part series of articles will describe these non-intrusive debug and test methods and explain some of the most salient tradeoffs. In particular, it will examine some of the complexities involved in testing the high-speed DDR (double data rate) memory bus.
What’s the problem?

One of those iNEMI surveys asked engineers what their biggest problems were with testing circuit boards. Of the 11 possible problems listed, characterizing and testing memory soldered to circuit boards was among the top three. Also at the top of the list were ‘loss of access to test points’ and ‘the need to perform debug/diagnostics on board failures.’ When asked which type of built-in self-test (BIST) instruments would solve the engineer’s problems best, memory BIST was rated the second most needed, virtually tied with BIST instruments for validating high-speed I/O buses. Clearly, the ability to thoroughly test, characterize and diagnose problems with soldered-down memory is one of the most pressing problems in the industry.
In most cases, board designers doing board debug, as well as manufacturing test engineers, assume that the memory devices themselves are not causing a failure, since the chips are tested and qualified before they are assembled on a board. As a result, a memory test failure should indicate a failure in the connectivity channel to the memory. Previously, when memory speeds were lower and the communication protocols over memory buses less complex, performing static shorts-and-opens testing on memory interconnects might have sufficed. Now, though, signal propagation through passive devices such as capacitors and the signal integrity of high-speed traces to memory must be validated and characterized for an open data window. Often, this data window will show sensitivity to clock jitter, temperature and electrical noise, as well as to the level and stability of the supply voltage.
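To make the connectivity-failure idea concrete, the following is a minimal sketch of the classic walking-ones data-bus test, the kind of routine a processor-controlled test might execute against soldered-down memory. A stuck-at data line or a bridge between adjacent lines shows up as a read-back mismatch on the corresponding bit. The function name and the idea of passing in a test address are illustrative assumptions, not part of any specific test tool; note also that this checks only static shorts and opens, not the high-speed data window discussed above.

```c
#include <stdint.h>

/* Hypothetical walking-ones data-bus test (illustrative sketch only).
 * Walks a single 1 across all 32 data lines at one memory address.
 * A stuck-at or bridged data line causes the read-back to differ
 * from the pattern written; the failing pattern identifies the bit. */
uint32_t test_data_bus(volatile uint32_t *addr)
{
    for (uint32_t pattern = 1; pattern != 0; pattern <<= 1) {
        *addr = pattern;          /* drive exactly one data line high */
        if (*addr != pattern)     /* read back and compare            */
            return pattern;       /* return the faulty bit position   */
    }
    return 0;                     /* 0 means all data lines passed    */
}
```

A companion walking-ones test over the address lines, plus an increment-pattern fill of the whole array, would extend this to address-bus shorts and device-level faults, which is essentially what static ICT-era interconnect tests did in software form.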