Despite the usual inane questions that showed no basic understanding of the electronic design process, the ISO auditor asked one question that gave me pause, not to mention earned our company an NCR (nonconformance report): How do you ensure that your test equipment is, in fact, testing correctly?
We design most of our own test jigs, and there is always the question of whether the tests actually determine that the circuit under test works.
This is, of course, a fundamental question. If you cannot say for sure that your tests really do determine whether the circuit works or not, there is no point in performing the tests in the first place. But the auditor's question took that as a given. What he was asking can be broken into three parts: Is the equipment calibrated? Is it functional? And how do you know there are no false positives?
Our approach to calibration has evolved over the years. We no longer calibrate our test jigs annually. We did try that, but keeping the records was troublesome and -- more importantly -- it took an engineer away from his design work, often for extended periods and -- by Murphy's Law -- at the most inconvenient time. Instead, where possible, when a measurement is made and observed, we use external equipment such as DVMs (digital voltmeters), frequency counters, and oscilloscopes that are calibrated by an external agency.
Where there is an automated jig, the jig must be calibrated every time it is used. This is enforced by the software, which won't allow the test to proceed until the calibration is complete. We often have a digital-to-analog converter (DAC) on the test jig, so its output is calibrated against a DVM and then fed back to the analog-to-digital converter (ADC) inputs. This actually has a dual function in that it also partially confirms that the test jig is working, but I am getting a little ahead of myself. If there is no DAC, we use an external calibrator. This approach appears to satisfy the ISO auditors while simplifying our lives.
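The calibrate-before-every-use gate can be sketched in software. This is a minimal illustration, not our actual jig code: the hardware access functions (`set_dac`, `read_dvm`, `read_adc`) are hypothetical stand-ins, simulated here with a deliberate gain/offset error on the ADC path so the two-point correction has something to do.

```python
# Hypothetical jig simulation: the ADC path has a gain/offset error
# that the DAC-to-DVM-to-ADC loopback calibration must correct.
ADC_GAIN, ADC_OFFSET = 0.97, 0.02

def set_dac(volts):
    # Drive the loopback node; DAC assumed ideal for this sketch.
    return volts

def read_dvm(node_volts):
    # Externally calibrated DVM: the trusted reference reading.
    return node_volts

def read_adc(node_volts):
    # The jig's own ADC, reading the same node with its errors.
    return ADC_GAIN * node_volts + ADC_OFFSET

class TestJig:
    def __init__(self):
        self.cal = None  # no measurements allowed until populated

    def calibrate(self):
        # Two-point calibration: drive two DAC levels, trust the DVM,
        # and fit a gain/offset correction for the ADC path.
        lo, hi = set_dac(0.0), set_dac(4.0)
        gain = (read_dvm(hi) - read_dvm(lo)) / (read_adc(hi) - read_adc(lo))
        self.cal = (gain, read_dvm(lo) - gain * read_adc(lo))

    def measure(self, node_volts):
        # The software gate: refuse to test an uncalibrated jig.
        if self.cal is None:
            raise RuntimeError("calibration required before testing")
        gain, offset = self.cal
        return gain * read_adc(node_volts) + offset
```

Any attempt to call `measure()` before `calibrate()` raises an error, which is the whole point: the operator cannot skip the step.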
When the test department hauls the test equipment off a rack and sets it up for use, what guarantees that it is still working? Our standard answer is to use a "golden" board. This is good up to a point, but if the test fails, is the board or the test jig at fault? Where do you start troubleshooting? Of course, there is no clear answer.
You could use multiple golden boards, but maintaining the stock can be troublesome and -- worse -- what if the test jig is blowing up the boards? I have been known to add feedback relays on the test jig to switch outputs back to inputs so that both can be checked, but this can quickly expand into a piece of equipment that is unwieldy and unreliable because of the self-test features, to say nothing of the time it takes to develop the additional software. A further issue: how do you verify that the test jig still picks up all known (and even unknown) faults? Creating checks for this can be rather difficult. Or should you keep a stock of products with those known faults?
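The relay loopback idea can be reduced to a simple routine: before a board is fitted, each jig output is relayed back to a jig input and exercised at both logic levels. The sketch below is illustrative only; the relay and driver callables are hypothetical placeholders you would replace with real hardware access.

```python
# Relay loopback self-test sketch: exercise every output/input pair
# through the feedback relays before trusting the jig with a DUT.
def self_test(outputs, inputs, set_output, close_loopback, read_input):
    faults = []
    for out, inp in zip(outputs, inputs):
        close_loopback(out, inp)      # relay routes this output to this input
        for level in (0, 1):
            set_output(out, level)
            if read_input(inp) != level:
                faults.append((out, inp, level))
    return faults                     # empty list means the jig checked out
```

A fault report like `("O1", "I1", 0)` tells you which channel and which level failed, which at least gives troubleshooting a starting point when the golden board alone leaves you guessing.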
Now, consider the case where some component on the test jig fails and the erroneous data looks like good data. For instance, let's assume you are measuring an output from the DUT (device under test) going high. The input to the test jig has a pull-up resistor. If the pin on the bed-of-nails is not making contact, the jig input still reads high and the test passes even though nothing was actually measured. The point is that you have to consider these possibilities when you are designing your test jig. The NCR from the audit forced us to develop a formal approach to detailing how the test jig would react to internal faults: essentially an FMEA (failure mode and effects analysis) on the test jig itself. This is yet another demand on an engineer's time and a further delay in the release of a product.
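One defensive pattern against the pull-up trap is to never accept a static "high" reading that the pull-up alone could produce: command the DUT output low first, and only if the jig sees that low does the subsequent high reading mean anything. A minimal sketch, with hypothetical names and the pogo-pin contact simulated as a flag:

```python
PULL_UP_LEVEL = 1  # what the jig input reads when the nail is open

def read_pin(dut_drives, contact_ok):
    # The jig sees the DUT only through the pogo pin; with no contact,
    # the pull-up wins and the input reads high regardless of the DUT.
    return dut_drives if contact_ok else PULL_UP_LEVEL

def output_high_test(drive_dut, contact_ok):
    # Require a low first: this proves the nail is actually touching,
    # because a floating pulled-up input can never read low.
    if read_pin(drive_dut(0), contact_ok) != 0:
        return "FAIL: pin stuck high (open contact or DUT fault)"
    if read_pin(drive_dut(1), contact_ok) != 1:
        return "FAIL: output did not go high"
    return "PASS"
```

With an open contact, the low reading never arrives and the test fails instead of silently passing, which is exactly the false positive the FMEA exercise is meant to flush out.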
Something that occurred to me (and I hope our ISO auditor isn't reading this) is: how do you know that your test jig isn't damaging your DUT in some way, and how would you prove that it isn't? Coincidentally, at the same time we were audited, Martin Rowe posted an excellent article with some ideas on these very questions.
So, how do you test your tester?