The rest of the industry may need to wait until 2.5-D and 3-D chip stacks are ready for prime time before it can catch up with eDRAM. That won't happen anytime soon, according to Kevin Zhang, an Intel fellow who led the eDRAM design and presented a talk on it at the VLSI Symposium.
3-D ICs with through-silicon vias "can solve some of the challenging issues facing us like memory bandwidth, but we need to figure out how to do it and lower its cost," Zhang said, adding that it's not clear when that will happen.
Intel has yet to reveal its plans for chip stacks. Earlier this year Nvidia said it could experiment with them as early as 2014. Micron said it will start shipping its Hybrid Memory Cubes this year, stacks of DRAM dice sitting on a logic interface die.
To craft its eDRAM, Intel created picoamp-class access transistors, drawing "three orders of magnitude less power than the typical logic transistor," Zhang said.
Intel used the backend dielectric stack in its 22nm process to design capacitors that store charge while preserving logic performance characteristics. The resulting cells have GHz-class performance, leapfrogging today's MHz-class DRAMs. Intel hopes to share more about the data rates it supports in an ISSCC paper next year.
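The leakage figure above is what makes the cell work: retention time scales directly with how slowly the access transistor drains the storage capacitor. A back-of-the-envelope sketch below, using the classic t = C·ΔV/I relation; the capacitance and voltage-swing values are illustrative assumptions, not Intel's published numbers.

```python
# Rough DRAM retention estimate: time until leakage drains the readable charge.
# t = C * dV / I_leak. All numeric values below are illustrative assumptions.

def retention_time_s(cell_capacitance_f: float, readable_swing_v: float,
                     leakage_a: float) -> float:
    """Seconds until the leakage current erodes the usable voltage swing."""
    return cell_capacitance_f * readable_swing_v / leakage_a

C_CELL = 1e-15   # assumed ~1 fF storage capacitor
DV = 0.3         # assumed 0.3 V usable swing

t_pa = retention_time_s(C_CELL, DV, 1e-12)  # picoamp-class access transistor
t_na = retention_time_s(C_CELL, DV, 1e-9)   # nanoamp-class logic transistor

print(f"pA-class leakage: {t_pa * 1e6:.0f} us between refreshes")
print(f"nA-class leakage: {t_na * 1e9:.0f} ns between refreshes")
```

With these assumed values, the picoamp transistor holds data for hundreds of microseconds, while an ordinary logic transistor would need a refresh every few hundred nanoseconds, which is why a stock logic device can't serve as a DRAM access transistor.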
IBM used a very high aspect ratio deep trench at the substrate level to create the eDRAM cells it puts on its processor dice. By contrast, Intel uses the dielectric stack, with results that are close on some metrics and diverge on others.
IBM reported late last year a 0.026 µm² eDRAM cell size, slightly smaller than the 0.029 µm² size of Intel's cells. However, Intel claims its 17.5 Mbits/mm² arrays are denser than those of IBM. The two companies have similar retention times, Zhang said.
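The two figures can be cross-checked against each other. Taking the cell size as 0.029 µm² (the unit consistent with the quoted density), the raw bit density works out to roughly 34 Mb/mm², so the 17.5 Mb/mm² array figure implies the sense amps, decoders, and other periphery consume about half the array area; that interpretation is my inference, not something stated in the talk.

```python
# Sanity-check the reported eDRAM numbers: cell size vs. array density.
# Assumes the cell size is 0.029 um^2 (unit consistent with Mb/mm^2 density).

CELL_AREA_UM2 = 0.029         # reported Intel cell size
ARRAY_MB_PER_MM2 = 17.5       # reported Intel array density

UM2_PER_MM2 = 1e6             # 1 mm^2 = 1,000,000 um^2

raw_bits_per_mm2 = UM2_PER_MM2 / CELL_AREA_UM2   # cells packed with no overhead
raw_mb_per_mm2 = raw_bits_per_mm2 / 1e6

array_efficiency = ARRAY_MB_PER_MM2 / raw_mb_per_mm2

print(f"raw bit density : {raw_mb_per_mm2:.1f} Mb/mm^2")
print(f"array efficiency: {array_efficiency:.0%}")
```

The ~50% array efficiency that falls out is in the normal range for DRAM macros, which suggests the two reported numbers are mutually consistent.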
Intel is shipping products with these eDRAM cells this year. It’s not clear when IBM will ship chips using its latest eDRAM process, Zhang said.
Ah, the stuff you can do when you control your own fab process!
Intel dropped DRAM - their original product line - over 30 years ago because they were going to get clocked by the Japanese.
IBM's eDRAM really opened up single-chip architecture as eDRAM fabbed with a logic process enables an attached processor to really scream. Logic transistors are generally crap for analog, but if you own the process you can compensate.
TSVs? Gotta have 'em. As signal paths, they aren't as good as on-chip contacts, but they beat a package any day. Pretty good for cooling, too.
Once you have eDRAM and TSVs together, you can get more speed by implementing a honkin' wide bus with fewer buffer stages.
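The commenter's point about a "honkin' wide bus" is just bandwidth arithmetic: peak bandwidth is bus width times transfer rate. A quick sketch; the 1024-bit on-package bus below is a hypothetical width for illustration, not any product's spec, while the 64-bit DDR3-1600 DIMM is a standard configuration.

```python
# Peak memory bandwidth = bus width (bits) * transfer rate / 8 -> bytes/s.
# The wide-bus width below is a hypothetical example, not a product spec.

def peak_bw_gb_s(width_bits: int, rate_mt_s: int) -> float:
    """Peak bandwidth in GB/s for a bus of given width and transfers/sec."""
    return width_bits * rate_mt_s * 1e6 / 8 / 1e9

dimm = peak_bw_gb_s(64, 1600)    # standard 64-bit DDR3-1600 DIMM
wide = peak_bw_gb_s(1024, 1600)  # hypothetical 1024-bit on-package eDRAM bus

print(f"64-bit DDR3-1600 : {dimm:.1f} GB/s")
print(f"1024-bit wide bus: {wide:.1f} GB/s")
```

Same clock, 16x the width, 16x the bandwidth; off-package, that many pins is impractical, which is exactly why eDRAM and TSVs make the wide-bus trick possible.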
Intel is REALLY good at watching research, then incorporating results into its new devices. They (ahem) borrowed heavily from a project I worked on to create the super-wide high-speed networks in Ivy Bridge and Haswell.
The other thing to remember about Intel is that they almost certainly have stuff already working in the lab RIGHT NOW that is 4-5 years ahead of the commercial state of the art, and are figuring out how to manufacture stuff for the 2020 market. They are really great at maturing technologies before releasing them, letting today's products pay off the investment in tomorrow's.
I can hardly wait to see what path they'll follow to bring carbon into the market. That, I gar-on-tee, will change everything. I don't know if they'll be the first, but they'll be close.