In 2007, Gene Frantz, Chau Mai and Ivan Garcia of Texas Instruments published a white paper acknowledging that we can no longer expect IC performance to increase, or power dissipation to decrease, simply as a function of advances in process technology.
To quote the overview: “We need to find new ways to satisfy our continuing demand for more performance and to achieve that performance at a lower power level. By understanding the concepts of dependencies and guard bands, you can uncover hidden performance in your devices.”
We kick off this two-part series with that paper (click here to download PDF). In it, the trio explain how performance and power dissipation depend on other variables under your control, as well as how IC manufacturers use guard bands to guarantee the performance of their products.
Armed with these details behind the datasheet specification, they show you how you can create the
product you need—even when the data sheet says it is impossible!
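The paper's full analysis is in the PDF, but the core trade-off it works with can be summarized by the familiar first-order CMOS dynamic-power relation, P ∝ C·V²·f: run faster and you pay for it in power, especially if the higher clock also demands a higher supply voltage. A minimal sketch of that relation (the overdrive numbers below are hypothetical, not taken from the paper):

```python
# First-order CMOS dynamic power: P is proportional to C * V^2 * f.
# Illustrative only -- real devices also have leakage current,
# temperature dependence, and process variation.

def relative_dynamic_power(v_ratio, f_ratio):
    """Dynamic power relative to nominal, given voltage and frequency ratios."""
    return (v_ratio ** 2) * f_ratio

# Hypothetical example: overclock a part by 20% (f_ratio = 1.2),
# which requires a 10% core-voltage bump (v_ratio = 1.1).
ratio = relative_dynamic_power(1.1, 1.2)
print(f"Dynamic power rises to about {ratio:.0%} of nominal")
```

The point the white paper makes is that these dependencies cut both ways: the same relation that penalizes overclocking lets you trade unused speed for lower power.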
Of course, it’s not that simple. In Part 2 Frantz and Garcia will add to their original paper with some considerations you will need to understand if you attempt to use integrated circuits outside of the data sheet. Stay tuned. I should have it up later this week (week of Sept. 20th).
Duane, I echo your sentiment regarding TI's courage to publish this, and I am excited to have just posted Part 2 of this two-parter here: http://www.eetimes.com/design/signal-processing-dsp/4209479/Go-beyond-the-datasheet--Part-2--Understand-the-considerations
Ivan and Gene go deeper into the design considerations and how best to reap the rewards of pushing the envelope. It comes with a disclaimer, however. As it should. Enjoy!
I suspect that such practice was pretty common well before we got to the point: "we can no longer expect performance to naturally increase or power dissipation to decrease..."
In the company I worked for in the early '90s, we had PLL issues and regularly ran the parts past their data sheet specifications. Of course, that sometimes led to problems, but by and large, it allowed the company to deliver a higher-performance product than would otherwise have been available.
I can certainly understand the dilemma faced by component manufacturers. They want their parts to be chosen even in the most demanding of applications, but without that built-in headroom, they just don't know if there will be problems or not. The application may be completely viable, but their customers are in test-pilot mode.
We have to deal with the same thing here at Screaming Circuits. We can do an awful lot more than we promise, but outside of those promised parameters, the unknown rears its head and makes 100% certainty not realistic.
It's a bold and brave move for TI to release that white paper. Most companies won't do such a thing for fear that people won't read the disclaimers and will end up angry. I salute TI for publishing it.
To use a component outside any data-sheet limit, the customer needs to redo the characterization, which is a very complex procedure. For example, even the manufacturer needs a few months to characterize a typical 32-bit network processor.
The customer also needs to repeat the characterization whenever the device revision changes or the semiconductor process is re-tuned, and the manufacturer doesn't notify customers about the second type of event!
Why should the customer invest so much effort (instead of the manufacturer), with the attendant risk of reliability loss and other issues?
Good engineering practice is to leave enough guard band before reaching the data-sheet limits.
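That practice can even be made explicit as a design-rule check. A minimal sketch, where the 20% margin and the 200 MHz limit are illustrative assumptions, not values from any actual datasheet:

```python
# Check an operating point against a derated (guard-banded) datasheet limit.
# A margin of 0.2 means we design to only 80% of the published maximum.

def derated_max(datasheet_max, margin=0.2):
    """Maximum value we allow ourselves, keeping `margin` as guard band."""
    return datasheet_max * (1.0 - margin)

# Hypothetical part: datasheet says 200 MHz max clock.
design_ceiling = derated_max(200e6)   # roughly 160 MHz design ceiling
print(f"Design ceiling: {design_ceiling / 1e6:.0f} MHz")
```

This is the opposite philosophy to the white paper's, of course: rather than spending the manufacturer's hidden guard band, you add your own on top of it.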
While going beyond the datasheet (overclocking) is exciting and will work for custom applications, engineers clearly face liability issues if something should fail, so they're obliged not to take 'risks' -- for their own sake and the sake of the company they work for. But again, it's exciting to operate at the envelope, where appropriate. What's been your experience?