Micron Technology's Dean Klein backs the idea of boosting the performance of systems, whether a PC or a Gigabit Ethernet controller, by combining logic with DRAM on the same chip. Klein, vice president of integrated products at the Boise, Idaho, memory manufacturer, thinks this is the way to go, rather than relying solely on ever more complex, faster processors with greater amounts of SRAM and faster external memory. Getting there will require expertise in both processes under one roof.
Klein said makers of peripheral components for PCs, communications and networking equipment, and industrial and automotive systems are taking a fresh look at embedded DRAM. He believes there are some 2,000 applications that can use up to 8 Mbytes of embedded DRAM attached to a million gates of logic.
"If you want to do a gigabit memory and 1 percent logic, you can," Klein said. "The practical limit to embedded DRAM is going to be determined by yield, and the cores are going to be as dense as commodity memory."
Klein said Micron has made advances on both yield and density, and is confident enough to seek new business for its homegrown embedded-DRAM process technology. While the logic portion will use 0.18-micron design rules, Micron will add the DRAM using more-advanced 0.15-micron design rules. That will yield 1 Mbyte of DRAM on a 1 x 4-mm area of silicon. The company has tested the process on a prototype graphics controller that included 3.5 million logic transistors and 12 Mbytes of embedded DRAM, a device that Klein said is "not an unreasonable part at all."
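A quick back-of-the-envelope check of those figures (a sketch, assuming the quoted density of 1 Mbyte per 1 x 4-mm patch of silicon; the numbers below come straight from the article, not from Micron datasheets):

```python
# Die-area estimate for the embedded DRAM on Micron's prototype graphics
# controller, using the article's quoted density of 1 Mbyte per 1 x 4 mm.

MM2_PER_MBYTE = 1 * 4        # 4 mm^2 of silicon per Mbyte of embedded DRAM
EMBEDDED_DRAM_MBYTES = 12    # DRAM on the prototype graphics controller

dram_area_mm2 = EMBEDDED_DRAM_MBYTES * MM2_PER_MBYTE
print(dram_area_mm2)         # 48 -> roughly 48 mm^2 for the DRAM array alone
```

At about 48 mm² for the memory array, the 12-Mbyte prototype is plausible as a single die, which supports Klein's "not an unreasonable part at all" remark.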
More recently, some memory vendors have been looking for ways to improve the logic performance of devices with on-chip DRAM, which usually means giving up some DRAM density. Klein, however, thinks sufficient improvements have been made to logic, and that the focus should be on packing more memory bits into a smaller space.
Many companies try to improve logic first, "because their ASIC teams are more familiar with logic libraries," he said. "They're going to have a speed advantage, not a density advantage." But at Micron, "We think density is where it is today," Klein said. "If you can embed enough memory in an app, then speed isn't an issue."
To Klein, embedded DRAM offers a unique opportunity to stay focused on density and at the same time improve system performance, regardless of how many additional pipeline stages or new transistor tweaks the latest processor incorporates.
Klein also said there are ways to design processors that dovetail with on-chip DRAM. "We think there is something you can do to have flash-cache fills," he said. "Instead of filling cache lines in four cycles, you can do it in one. This is something we would do with a processor vendor."
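The four-to-one speedup Klein describes falls out of bus width: an external memory bus moves a cache line in several beats, while an on-chip DRAM row can be wired as wide as the line itself. A minimal sketch, with assumed (illustrative) line and bus sizes not given in the article:

```python
# Illustration of a "flash cache fill": a 32-byte cache line over a 64-bit
# external bus takes four transfer beats, while a line-wide internal bus
# from embedded DRAM delivers the whole line in one. All sizes are
# hypothetical examples, not figures from Micron.

LINE_BYTES = 32          # assumed cache-line size
EXTERNAL_BUS_BYTES = 8   # assumed 64-bit external memory bus
ON_CHIP_BUS_BYTES = 32   # internal bus as wide as the cache line

external_beats = LINE_BYTES // EXTERNAL_BUS_BYTES  # beats per line fill
on_chip_beats = LINE_BYTES // ON_CHIP_BUS_BYTES    # beats per line fill
print(external_beats, on_chip_beats)               # 4 1
```

With these numbers the external fill takes four cycles and the on-chip fill one, matching the ratio Klein cites; a real design would also account for row-activation latency.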
Digital signal processors are also within scope. Normally, it would be difficult for DSPs to use embedded DRAM, because precharge latencies and random refresh cycles would disrupt their real-time processing. "DSPs use SRAM because the code loops of a DSP have to be stable and predictable," Klein said. "But if you were architecting to use embedded DRAM, you could eliminate the SRAM if you could bring in branch prediction and a DRAM controller to hide the refresh and precharge. A tiny SRAM could be used just to cache, while the page cache would be the line cache."
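One way to picture "hiding the refresh": because a DSP's access pattern is predictable, the controller can slip refresh commands into cycles the DSP leaves idle. The sketch below is a toy model of that scheduling idea (not Micron's design; the refresh interval and access stream are made up for illustration):

```python
# Toy model of refresh hiding: issue a pending refresh only in cycles
# where the predictable DSP access stream leaves the DRAM bank idle.

def schedule_refreshes(accesses, refresh_due_every=8):
    """accesses: list of booleans, True if the DSP uses DRAM that cycle.
    Returns the cycle numbers at which a hidden refresh was issued."""
    refreshes = []
    since_refresh = 0
    for cycle, busy in enumerate(accesses):
        since_refresh += 1
        if not busy and since_refresh >= refresh_due_every:
            refreshes.append(cycle)  # slip the refresh into the idle slot
            since_refresh = 0
    return refreshes

# A loop body of seven busy cycles followed by one idle cycle: the
# refresh lands in the idle slot, invisible to the real-time code.
print(schedule_refreshes([True] * 7 + [False] + [True] * 4 + [False]))
```

A real controller would also have to bound the worst case (forcing a refresh if no idle slot appears in time), which is where the predictability of DSP code loops matters.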
Embedded-systems designers are often forced to use more memory than they need. "Microsoft operating systems require twice as much memory with every release, for which we are eternally grateful to Microsoft," Klein said. "But there are lots of applications outside the PC that are wasting memory devices. If you need 3.2 Mbytes for a DVD player with a straightforward GUI, you're stuck with 2 or 4 or 8 Mbytes. In another five years it is still only going to need 2 to 8 Mbytes, and by that time you can bet you're not going to be building 8-Mbyte memory chips."
Embedded DRAM can also be seen as a useful space filler for logic designs that have lots of empty "white space" created when there are a high number of pads surrounding the die. And replacing SRAM with denser DRAM can also reduce the die size enough to boost wafer yields, Klein said.
"If you can increase the number of CPUs on a wafer by 20 percent by reducing the cache size 50 percent, that might be a real good trade-off," Klein said. And with the logic processes getting better with each generation of embedded DRAM, there's now more incentive for companies to consider the technology than there was five years ago.
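The arithmetic behind that trade-off is straightforward: if dies per wafer scale roughly with the inverse of die area, halving a cache that takes up about a third of the die shrinks the die enough to yield about 20 percent more parts. A sketch with illustrative numbers (the die and cache areas are assumptions, not Micron figures, and edge effects and defect density are ignored):

```python
# Rough dies-per-wafer arithmetic for Klein's cache trade-off.
# All areas are hypothetical; dies per wafer ~ wafer_area / die_area.

DIE_AREA = 100.0     # mm^2, assumed total die area
CACHE_AREA = 33.3    # mm^2, assumed SRAM cache (about a third of the die)
WAFER_AREA = 31416.0 # mm^2, idealized 200-mm wafer (pi * 100^2)

shrunk_die = DIE_AREA - CACHE_AREA / 2  # halve the cache area
gain = (WAFER_AREA / shrunk_die) / (WAFER_AREA / DIE_AREA) - 1
print(f"{gain:.0%}")                    # 20% -> about 20% more dies per wafer
```

Replacing that SRAM with denser embedded DRAM could recover much of the capacity given up, which is the point of Klein's argument.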
"In most cases embedded DRAM [previously] wasn't up to the task," Klein said. "There were compromises made and it really fell out of grace after that. But now it's back with a vengeance, largely because the logic process you can put on a DRAM process can be fairly high performance nowadays."