Let's assume this pruning is only for new process development, not mature processes. In the early phases of new device development, fab wafer-lot starts are rationed, as are the sub-lot wafers per technology-node split. Please name a group that would like to see the new device development wafers terminated mid-stream: device engineers, yield enhancement, thin films, metallization, etch, clean-up, reliability? There is a wealth of information to be gathered from fully processed (but dead) wafers by various groups. Just because the transistors are dead does not mean terminating the wafers early is profitable. OTOH, I may have said too much to IBM...
I kind of have the same feeling; we've been doing M1 testing on devices for as long as I have been involved in semiconductor manufacturing (~15 yrs). And yes, we use it to scrap wafers before going through the metal steps.
I too am lost on the many details this article omits! I agree with @ebmfuser; the methodology of using test structures in the street for yield prediction has been standard for ages now. The article doesn't say how 'early' in the manufacturing process the wafers are going to be held for probing - at contact? Or at M1, as @daleste suggested? Is the fabrication of interconnects (contact to M1) required for IBM's 'pruning' process?
I must be missing something as well. Process Control Monitor drop-ins (PCMs), either in the scribe streets or in a device cell, have been used as parametric data generators and/or yield predictors for decades - especially in the compound semiconductor world. Sounds to me like IBM is simply going back to the 'old ways' and trying to make it sound as if they just invented the concept.
I must be missing something here... Current scribe-grid test structures aren't used "just to measure process drift". Any self-respecting company should have correlations to yield and identified key parametric structures for each product to enable data-driven "pruning" decisions based on M1 data.
Notice that it is design-dependent and is pruning based on power levels and performance - in other words, if the transistors aren't low-power enough or fast enough to meet design requirements, then it's not worth putting the metals on. This can address yield issues with new processes or pushing-the-envelope issues with new designs.
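The pruning decision described above boils down to a threshold screen on M1-level parametrics. A minimal sketch, assuming hypothetical test structures and limits (the function name, the ring-oscillator and leakage metrics, and the cutoff values are all illustrative, not IBM's actual criteria):

```python
# Hypothetical sketch of a power/performance pruning screen at M1.
# All metric names and thresholds are made up for illustration.

def prune_decision(ring_osc_freq_mhz: float, leakage_na_per_um: float,
                   min_freq_mhz: float = 950.0,
                   max_leakage_na_per_um: float = 2.0) -> bool:
    """Return True if the wafer should be pulled before the metal stack."""
    too_slow = ring_osc_freq_mhz < min_freq_mhz            # performance screen
    too_leaky = leakage_na_per_um > max_leakage_na_per_um  # power screen
    return too_slow or too_leaky

# Example: a wafer that meets speed but leaks badly gets pruned.
print(prune_decision(1020.0, 3.5))  # True -> scrap before metallization
```

In practice the thresholds would come from the product-specific yield correlations the earlier comment mentions, not from fixed constants.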
You can't test them until metal 1. That may be about halfway through the process, but the metal layers are more expensive and time-consuming. This seems like a good idea for new, evolving processes. Mature processes just don't make bad wafers.
I don't buy this manufacturing philosophy. If it makes financial sense to prune entire 300mm wafers mid-line, then they have significant internal process yield problems. Such wafers should be rare rather than a cost-saving opportunity. I don't know how many times I have heard product engineers tell us not to scrap known-dead split wafers so they could later pull parametric data off them at the end of the line. In fairness, I'd like to see the math that supports this approach.
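The break-even math being asked for can be sketched as a simple expected-value comparison: prune only if the remaining processing cost exceeds what finishing the wafer is worth, including the end-of-line parametric data the product engineers want. All dollar figures and probabilities below are made-up illustrations, not numbers from the article:

```python
# Back-of-the-envelope pruning economics with illustrative numbers.
# Finish the wafer if its expected value (salvage yield plus learning
# value from end-of-line parametric data) exceeds the remaining cost.

def should_prune(remaining_process_cost: float, p_good: float,
                 value_if_good: float, parametric_data_value: float) -> bool:
    expected_finish_value = p_good * value_if_good + parametric_data_value
    return remaining_process_cost > expected_finish_value

# Illustrative case: $3000 of metal steps left, a 5% chance the wafer
# still yields $20000 of good die, and $500 of learning value either way.
print(should_prune(3000.0, 0.05, 20000.0, 500.0))  # True -> prune
```

Under these (invented) numbers pruning wins, but a modest increase in the parametric-data value or salvage probability flips the decision, which is exactly the commenter's point about rarely-dead wafers.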