MOUNTAIN VIEW, Calif.--As the level of chip and component integration increases, so too does the problem of testing the ever more intricate products, with engineers struggling to keep up.
“The industry needs high-volume test at low cost,” said Joe Macri, corporate vice president and CTO of AMD's client division, at UBM’s DesignCon in Santa Clara recently, noting that the high levels of interconnect were beginning to cause significant test problems.
“The way we’re integrating today is different,” he said, adding, “It’s not just about putting everything on a single die; at times it’s how you slice different parts of the system on multiple die.”
Brad Davis, a Broadcom hardware design engineer and winner of UBM’s prestigious Test Engineer of the Year award, admitted that the testing dilemma was “a challenge.”
“Test companies are getting prototype test hardware out as firms are getting test chips out,” he said, noting that chip makers were having to work ever more closely with test houses in the early stages of product development.
When problems are discovered, both the chip company and the test house work together to see whether the issue is related to the testing equipment or the product itself, said Davis.
“It really requires a tight relationship with the test company,” he said.
Macri told EE Times that design for testability used to be something design engineers laughed at, but said it was probably becoming one of the more important skills in the design community today.
“If you don’t design for test, it might work, but you won’t be able to go into high volume with it, and if you can’t go into high volume, it basically didn’t work,” he explained.
The conundrum makes it all the more critical for design engineers to provide visibility hooks into what they’re doing, said Macri.
“In many cases we’re pulling pieces of the test programming into our silicon or into our own test software,” he added.
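Scan chains and logic built-in self-test (BIST) are classic examples of the kind of visibility hooks Macri describes: on-chip structures that generate stimuli and compact responses so high-volume testers need only check a signature. As a rough illustration only (not AMD's or any vendor's actual DFT flow, and with an arbitrary feedback polynomial chosen for the sketch), here is a minimal Python model of an LFSR pattern generator driving a stand-in circuit, with a simple compactor producing a signature to compare against a known-good value:

```python
# Minimal logic-BIST sketch: an LFSR produces pseudo-random test patterns,
# the circuit's responses are compacted into one signature word.
# Seed, tap positions, and the circuit function are illustrative
# assumptions, not a real device's configuration.

def lfsr(seed, taps, width, n):
    """Yield n pseudo-random patterns from a Fibonacci-style LFSR."""
    state = seed
    mask = (1 << width) - 1
    for _ in range(n):
        yield state
        feedback = 0
        for t in taps:            # XOR the tapped bits to form the feedback bit
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask

def compact(responses, width):
    """Shift-and-XOR compactor: fold a response stream into one signature."""
    sig = 0
    mask = (1 << width) - 1
    for r in responses:
        sig = ((sig << 1) ^ r) & mask
    return sig

def circuit_under_test(x):
    # Stand-in for the combinational logic block being exercised.
    return (x ^ (x >> 2)) & 0xFF

patterns = list(lfsr(seed=0xACE1, taps=[0, 2, 3, 5], width=16, n=100))
signature = compact((circuit_under_test(p) for p in patterns), width=16)
print(hex(signature))  # in silicon this would be compared to a golden signature
```

Because both the generator and the compactor are deterministic, a fault anywhere in the exercised logic perturbs the signature, which is why this style of hook lets a cheap tester verify a complex block by checking a single word.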
Check out the video interview below, and if you have any thoughts on how to tackle the test problem, let us know in the comments.
I have been hearing about designers ignoring DFT since the '90s, and yet on every design team I have been a member of, DFT was always a major task and was addressed up front -- as much a part of the chip requirements as any functional requirement.
You're right: analog testing still lacks automation, and this is a big problem in reducing test time for AMS SoCs. Hopefully the EDA community will put some effort into that.
For too long, designers have focused on getting every bit of functionality and relegated test to the back end. However, DFT is gaining a foothold in almost all design groups and helping ease the manufacturing problem. Most DFT technology has been developed to tackle digital testing; analog and mixed-signal testing is still a problem, not well defined and left to smart engineers to come up with solutions. More work is needed in these areas to automate testing of SoCs populated by a mix of digital, analog, and mixed-signal cores and logic.
The increasing number of peripherals and cores puts design for test in a critical situation, as the area covered by the test structures can grow beyond that required by the actual chip logic. But this will be solved, since design for test is a hot topic among researchers.
Testing and debugging silicon chips is much more complicated than testing and debugging software programs. If the design does not work, there is no way to debug and correct it once it is fabricated. All the tests have to be done beforehand in a simulation environment, whereas with software you can ship as many revised versions of a simple .dll as you need.
Fundamentally, testing and debugging silicon chips is the same problem as testing and debugging software programs. The only difference is that one ends up in silicon and wires whereas the other is stored in magnetic domains. Start by studying best practices in software programming and quality control.