SAN JOSE, Calif. – Intel’s new embedded DRAM technology is expected to compete favorably with discrete graphics chips in high-end notebooks this year and later appear in servers. The x86 giant described at the recent VLSI Symposium the technology it sees as a forerunner to 3-D ICs.
Using its eDRAM technology, Intel built a discrete 128 Mbyte L4 cache chip with a 100 microsecond retention time at a worst-case 95 degrees C. The chip fits in a Haswell package, linking to the CPU die via a 100 Gbyte/second point-to-point link and adding about 3 W to the component.
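A back-of-envelope sketch shows why such a short retention time demands the circuit-design advances mentioned below: the whole array must be rewritten every 100 microseconds, which is only tractable if many banks refresh concurrently. The row size, bank count, and per-row refresh time here are illustrative assumptions, not figures from Intel's paper.

```python
# Refresh burden implied by a 100 microsecond retention time on a
# 128 Mbyte eDRAM array. Layout numbers below are assumptions.

CAPACITY_BYTES = 128 * 2**20   # 128 Mbyte L4 cache (from the article)
ROW_BYTES      = 2 * 2**10     # assumed 2 Kbyte row
RETENTION_S    = 100e-6        # worst-case retention at 95 degrees C
BANKS          = 128           # assumed independently refreshable banks
T_REFRESH_S    = 20e-9         # assumed time to refresh one row

rows_total    = CAPACITY_BYTES // ROW_BYTES        # 65,536 rows
rows_per_bank = rows_total // BANKS                # 512 rows per bank

# Every row must be rewritten once per retention window.
refresh_rate_per_bank = rows_per_bank / RETENTION_S   # ~5.1M rows/s/bank
busy_fraction = refresh_rate_per_bank * T_REFRESH_S   # ~10% of bank time

print(f"{rows_total} rows total, {rows_per_bank} per bank")
print(f"{refresh_rate_per_bank / 1e6:.1f}M row refreshes/s per bank")
print(f"~{busy_fraction:.0%} of each bank's time spent refreshing")
```

Even with 128 banks working in parallel, each bank spends roughly a tenth of its time on refresh under these assumptions, which is why commodity DRAM retention times are measured in tens of milliseconds rather than microseconds.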
The technology reaches “part of the design space you can’t hit with commodity DRAM,” such as GDDR-5 chips, which would offer half the bandwidth and consume more power, said David Kanter, microprocessor analyst at Real World Tech.
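A rough sanity check on the “half the bandwidth” comparison, using typical-of-the-era GDDR-5 figures rather than anything from the article:

```python
# Hypothetical GDDR-5 interface: two x32 chips at an assumed 6 Gbit/s/pin.
GDDR5_GBIT_PER_PIN = 6.0     # assumed per-pin data rate
IFACE_WIDTH_BITS   = 64      # assumed two x32 chips

gddr5_gbyte_s = GDDR5_GBIT_PER_PIN * IFACE_WIDTH_BITS / 8   # 48 Gbyte/s
edram_gbyte_s = 100   # on-package link bandwidth from the article

print(f"GDDR-5: {gddr5_gbyte_s:.0f} Gbyte/s")
print(f"eDRAM : {edram_gbyte_s:.0f} Gbyte/s "
      f"({edram_gbyte_s / gddr5_gbyte_s:.1f}x)")
```

That puts a comparable GDDR-5 setup at roughly half the eDRAM link's bandwidth, consistent with Kanter's estimate.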
OEMs including Apple are expected to use the roughly 45 W Haswell eDRAM module in their top-end notebooks to save space and power while sacrificing little performance. It will replace a combination of a discrete graphics chip dissipating about 40 W and an existing processor drawing 30 W.
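The power math behind that trade, using the article's round numbers (real TDPs vary by SKU):

```python
# Integrated module vs. the CPU-plus-discrete-GPU combo it replaces.
edram_module_w = 45          # Haswell eDRAM module
old_cpu_w      = 30          # existing processor
old_gpu_w      = 40          # discrete graphics chip

old_total_w = old_cpu_w + old_gpu_w          # 70 W
saved_w     = old_total_w - edram_module_w   # 25 W

print(f"old combo : {old_total_w} W")
print(f"eDRAM part: {edram_module_w} W "
      f"(saves {saved_w} W, {saved_w / old_total_w:.0%})")
```

A roughly 25 W, one-third reduction in the CPU-plus-graphics budget, before counting the board space freed up.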
“This is pretty attractive for premium notebooks, and I think we will see it in servers next,” said Kanter.
IBM pioneered the use of eDRAM in a logic process, packing up to 80 Mbytes of cache on its Power chips for high-end servers. Intel and IBM may have a unique ability to field the technology, which requires advances in both process and circuit design.
Ah, the stuff you can do when you control your own fab process!
Intel dropped DRAM - their original product line - over 30 years ago because they were going to get clocked by the Japanese.
IBM's eDRAM really opened up single-chip architecture: eDRAM fabbed in a logic process lets the attached processor really scream. Logic transistors are generally crap for analog, but if you own the process you can compensate.
TSVs (through-silicon vias)? Gotta have 'em. As signal paths, they aren't as good as on-chip contacts, but they beat a package any day. Pretty good for cooling, too.
Once you have eDRAM and TSVs together, you can get more speed by implementing a honkin' wide bus with fewer buffer stages.
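A quick sketch of that width-for-clock trade, with made-up bus numbers (nothing here is a shipping part's spec):

```python
# Bandwidth of a single-data-rate bus: width times clock.
def bandwidth_gbytes(width_bits: int, clock_ghz: float) -> float:
    """Peak bandwidth in Gbyte/s."""
    return width_bits / 8 * clock_ghz

# Narrow off-package bus: needs a fast clock and a deep chain of
# buffer/repeater stages to cross the package boundary.
narrow = bandwidth_gbytes(width_bits=64, clock_ghz=6.4)     # 51.2 Gbyte/s

# Honkin' wide on-die bus over TSVs: same bandwidth at a fraction of
# the clock, so fewer buffer stages and less latency/energy per bit.
wide = bandwidth_gbytes(width_bits=1024, clock_ghz=0.4)     # 51.2 Gbyte/s

print(f"narrow 64-bit @ 6.4 GHz : {narrow:.1f} Gbyte/s")
print(f"wide 1024-bit @ 0.4 GHz : {wide:.1f} Gbyte/s")
```

Same peak bandwidth either way; the wide, slow bus just gets there with far less circuitry in the signal path.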
Intel is REALLY good at watching research, then incorporating results into its new devices. They (ahem) borrowed heavily from a project I worked on to create the super-wide high-speed networks in Ivy Bridge and Haswell.
The other thing to remember about Intel is that they almost certainly have stuff working in the lab RIGHT NOW that is 4-5 years ahead of the commercial state of the art, and they are figuring out how to manufacture it for the 2020 market. They are really great at maturing technologies before releasing them, letting their investments in present-day technologies pay off first.
I can hardly wait to see what path they'll follow to bring carbon into the market. That, I gar-on-tee, will change everything. I don't know if they'll be the first, but they'll be close.