The testing Intel did was not that good. I would argue Intel should do a much better job on its chipsets in general, and that Intel management and marketing are now regretting their poor decision making. Intel's problem in this case is quite serious. Nobody is going to like potential data corruption on their hard drives. I am quite surprised Intel did not announce a full recall.
The problem was that Intel used LVT cells in the clock tree for the 3G SATA controller and biased their substrate further for speed in revision B silicon. The revision A silicon did not have this issue. A related issue is that Intel should have had the Z68 chipset ready for the launch of Sandy Bridge. Much of this information is already in the public domain on sites such as AnandTech. Overall, I am fairly disappointed in Intel.
More than likely the problem was related to a lithography/etch margin issue at that metal level. Problems like these only happen when multiple factors drift, such as focus, reflectivity, planarization, etc., causing notching/thinning of the metal trace width.
Design verification would not catch such a low-probability event. This margin is maintained (or lost) through good fab process tool controls. Still, they would tweak the mask to add as much margin as possible, and then perhaps add another design rule.
My experience with TI is frustrating me at the moment (a Chipcon part). I believe I have found a reliability issue, but they are giving me the runaround, suggesting it is caused by silly things like bad solder joints, when inspection as well as the failure mode clearly indicates otherwise.
TI is a very diverse company though, so I imagine the response would vary depending on the group you are dealing with.
Both of these statements can't be true!
Intel mentioned that after it had built over 100,000 chipsets it started to get some complaints from its customers about failures.
Intel expects that over 3 years of use it would see a failure rate of approximately 5-15%, depending on usage model. Remember, this problem isn't a functional issue but rather one of those nasty statistical issues, so by nature it should take time to show up in large numbers (at the same time there should still be some very isolated incidents of failure early on).
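To see why a statistical wear-out problem surfaces slowly, here is a rough back-of-envelope sketch. It assumes a constant hazard rate (which a real degradation mechanism would not actually have; this is purely illustrative) implied by Intel's stated 5-15% cumulative failure probability over 3 years, and asks what fraction would fail in the first 3 months:

```python
import math

def const_hazard_rate(p_fail, years):
    """Constant hazard rate (per year) implied by a cumulative
    failure probability p_fail over the given number of years."""
    return -math.log(1.0 - p_fail) / years

def fail_prob(lam, t_years):
    """Cumulative failure probability after t_years under a
    constant hazard rate lam (exponential lifetime model)."""
    return 1.0 - math.exp(-lam * t_years)

# 5% and 15% cumulative failures over 3 years (Intel's stated range)
lam_lo = const_hazard_rate(0.05, 3.0)
lam_hi = const_hazard_rate(0.15, 3.0)

# Implied failures in the first 3 months (0.25 years)
early_lo = fail_prob(lam_lo, 0.25)
early_hi = fail_prob(lam_hi, 0.25)
print(f"first-quarter failures: {early_lo:.2%} to {early_hi:.2%}")
```

Under this simplistic model, only about 0.4-1.4% of parts fail in the first quarter, which is consistent with the comment: a few isolated early failures, with the bulk showing up over years.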
Thanks Tom Mariner,
"If they claim it ain't the silicon, I'm looking elsewhere". A new classic quote!
I assume the writer means that if the supplier doesn't admit there's a problem with the silicon, the customer should look elsewhere for a better, more honest, chip supplier.
Since it is only degradation, it may not be electromigration. Does anyone remember the "Fast Cadillac" reliability problem with a small percentage of Delco's first cruise control chips? The cause was a mask defect on a contact print mask.
There seems to be a grand tradition in the chip design world of fessing up to your boo boos. Possibly because in the future when you say it is not in your section of the IC, you will be believed.
Once I found a problem in the earlier layers of a TI DSP chip -- it seems as though no one had written software that used the entire chip at once in the three years since it had been released. (If I don't give my company / customer the best the hardware will do, it leaves an opening for a competitor to them, and I don't let my customers lose!) They could have pointed the finger at me for a firmware glitch, but instead they thanked me in front of my customer and put the fix into a partially processed wafer to get the revised parts out in record time.
Class tells -- and in both the Intel and TI cases, it tells me that if they claim it ain't the silicon, I'm looking elsewhere.
Reliability issues are tough to catch unless there are rigorous design reviews and the like. It can be easy for large teams to assume someone else has checked this or that; it can sneak up on and infect the best of teams. Electromigration and/or NBTI are my best guesses for what they are dealing with, but we may not know the details for a while. Those are tricky, and many of the tools will not adequately predict the outcome.
The previous instance when Intel had this kind of bug was in 1994 (the infamous FPU bug). I guess Intel has learned its lesson and didn't want to take chances this time around. Hence they are taking the necessary steps rather than ignoring the bug.