Managing test capacity is easy in low-volume production, but it grows complex as volumes increase.
I recently highlighted the major stakeholder groups in the test industry that care about test capacity and how it's managed: the test specifier, test provider, test equipment manufacturer, and third-party supplier. Over my long career in the test industry, I've been fortunate to have had the opportunity to think about test capacity and its management challenges from the perspective of each of these groups.
Although I saw significant and exciting innovation in ATE and test techniques over those 27 years, I saw comparatively little development of test capacity management tools and methodologies. My next few posts will review some of my test capacity management experiences chronologically; see if you agree.
My early years as a test engineer at Hewlett-Packard presented no test capacity management challenges. As part of a low-volume custom bipolar ASIC design center, we did all our test engineering and production on a single Sentry Series 80 tester. We were essentially both the test specifier and the test provider, but with access to only one fixed unit of test capacity. Our challenge then had more to do with testing a 2 GHz bandwidth, 500 Msps oscilloscope front-end sampling device on a 20 MHz tester. (Answer: rack-n-stack like crazy and develop 3 GHz probe-card technology.)
When you have more than one ATE station, test management becomes important.
When I joined Teradyne and relocated to Austin, Texas in 1995, primarily to work with Freescale (then Motorola's Semiconductor Products Sector), I saw for the first time the world of large device portfolios and high-volume manufacturing. With that world came test capacity management challenges I hadn't experienced before. As Freescale built up its installed base of A5-Series testers and transitioned its analog and mixed-signal products to the new Catalyst platform, we spent significant time together planning and managing the ATE configurations this multinational company would deploy worldwide.
Proactively developing a set of ATE configurations aligned with the long-term device roadmap is common test capacity management practice now, but it was far less prevalent for earlier generations of ATE. Freescale, the proud owner of hundreds of testers at the time, got to that point the way other large semiconductor companies did: by allowing its various product groups to specify ATE independently, based largely on the technical requirements of a single new device or, at best, a family of devices. And test equipment manufacturers such as Teradyne happily provided specific solutions for each device.
That strategy was eventually disrupted by the realization that test capacity utilization was an important cost-of-test lever, one that had to be managed more closely to compete effectively in the growing global semiconductor industry. Coupled with this trend was the advent of architectural innovations in ATE that allowed a much wider range of configuration options and thus demanded more careful configuration assessment and planning.
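To see why utilization is such a powerful cost-of-test lever, consider a back-of-the-envelope depreciation model. This is a minimal sketch with invented numbers (a hypothetical $2M tester, 5-year straight-line depreciation, 3-second test time), not actual Freescale or Teradyne figures:

```python
def cost_per_device_cents(capital_cost, depreciation_years, utilization,
                          test_time_s):
    """Rough cost-of-test per device from tester depreciation alone.

    All inputs are illustrative assumptions; real cost-of-test models
    also include floor space, labor, maintenance, and yield effects.
    """
    seconds_per_year = 365 * 24 * 3600
    cost_per_second = capital_cost / (depreciation_years * seconds_per_year)
    # Only the utilized fraction of tester time produces tested devices,
    # so idle time inflates the effective cost charged to each device.
    effective_cost_per_second = cost_per_second / utilization
    return effective_cost_per_second * test_time_s * 100  # dollars -> cents

# Same tester, same device, two utilization levels:
low  = cost_per_device_cents(2_000_000, 5, 0.50, 3.0)  # 50% utilized
high = cost_per_device_cents(2_000_000, 5, 0.85, 3.0)  # 85% utilized
print(f"{low:.2f} vs {high:.2f} cents/device")  # -> 7.61 vs 4.48 cents/device
```

Under these assumptions, raising utilization from 50% to 85% cuts the depreciation component of cost-of-test by roughly 40%, which is why capacity planning across a shared pool of configurations beats per-device tester specification.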
So, what first compelled you to prioritize and perform long-term ATE configuration planning? When did it happen for you? In Part 2, I'll continue discussing this first major movement in test capacity management, focusing on the tools and methods we used to do the work.