PALM SPRINGS, Calif. -- A Rambus DRAM Implementers Forum, announced Tuesday (Aug. 31) here at the Fall Intel Developer Forum (IDF), will consider ways to trim costs associated with the Rambus memory architecture, including taking a common approach to reducing the number of memory banks in the current Direct RDRAM design.
Intel and Rambus Inc. (Mountain View, Calif.) have worked on a one-to-one basis with a growing number of RDRAM suppliers, but the implementers forum is intended to come up with commonly accepted ways to reduce costs.
Packaging, test, and die-size reduction will be among the topics taken up by the group. A less well-known cost adder is the number of memory banks, which currently stands at two sets of 16 dependent banks on each RDRAM device.
Peter MacWilliams, an Intel fellow, said the implementers forum may come up with one or more commonly accepted RDRAM designs with fewer memory banks, which would then be supported by iterations of the Intel chip sets. One suggestion has been to reduce the number of memory banks per chip to a single set of 16 dependent banks; another calls for four independent banks.
The final decision on memory banks "is up to each vendor, and over time Intel would build chip sets that would support different approaches. One of the key things we want to accomplish with this implementers forum is to have the memory vendors have discussions among themselves about how to optimize better for costs, and not just with Intel."
Three years ago, Rambus engineers thought that a higher number of banks would allow the Rambus technology to offer on-chip concurrency, fetching bits from multiple banks at the same time. The goal was to reduce first-access latency, an issue that dogged RDRAM's performance in the early going.
Geoff Tate, the president of Rambus, said that over the past few years, the RDRAM vendors and Rambus have come to realize that having so many on-chip memory banks made it more difficult to efficiently lay out the redundancy bits needed to keep yields at respectable levels. Adding redundancy bits for each bank added to the die size, increasing costs. Cutting redundancy bits can also raise costs by reducing yields.
"If we know then what we know now about the die size impact, we would have decided on fewer banks. But the issue is not just reducing the number of banks, it is making sure there is compatibility among the various DRAM vendors," Tate said.
The implementers forum includes Hyundai Electronics, Micron Technology, NEC, Samsung Electronics, Infineon Technologies, Toshiba, as well as Intel and Rambus. System vendors, such as Dell Computer, also may provide input.
Tate said he expects the issue of the number of memory banks to be decided in the next few months. As vendors create new mask sets for shrink versions of the various 128-Mbit RDRAM implementations, the reduced-bank design would also be implemented.
Full speed goal tough
The RDRAM technology offers 1.6-GB/s memory bandwidth, but cost issues and difficulties with Intel's Camino chip set have caused delays. A Samsung Electronics manager said Samsung hopes to cut RDRAM's price premium over SDRAM to 25 percent by the end of next year.
An NEC manager said NEC is starting its commercial ramp now, but yields on the full-speed, 800-Mbit/s-per-pin RDRAMs are only in the 30 percent range. NEC will use a 0.20-micron process for its 128-Mbit RDRAMs, but it may need to move to a 0.18-micron process to bring full-speed yields up to a more respectable rate.
While the DRAM vendors strive to reduce costs, the personal computer makers are working through remaining motherboard production issues, such as the placement of capacitors on the board. Those must be resolved by October, when Intel expects major OEMs to begin commercial introduction of performance desktops based on Rambus DRAMs.
Tate said the RDRAM vendors and Rambus are coming up with various circuit tweaks that will increase the percentage of full-speed RDRAMs. And as testers become more sophisticated, fewer RDRAMs will fall into the 600- and 700-MHz speed bins, he said.
Also, RDRAM vendors will naturally move to 0.18-micron processes to increase the number of raw die per wafer. The combination of tighter process rules and circuit improvements will allow Rambus to approach its goal of 100 percent full-speed, 800-Mbit/s production.
Jay Bell, senior fellow at Dell Computer (Austin), said Dell is "generally ready" to introduce a line of RDRAM desktops this fall, estimating that "$1,500 is not an unreasonable goal" for unit pricing.
"Dell mainly ships to the corporate market, and the IT managers are looking to Rambus-based systems for highly networked corporate environments based on Microsoft's Windows 2000 operating system. One situation where Rambus brings advantages is with the telemirror function, where an IT manager can dedicate bandwidth to backing up system settings and data. By bringing that across the network while the user is working with the system, there is less system performance degradation," Bell said.
Bell said he has benchmarked SDRAM and Rambus systems running Microsoft's Office 2000 suite. When a hypothetical IT manager took control of a portion of the system using a Laplink utility, the SDRAM-based system suffered an 85 percent performance degradation, while the RDRAM-based system saw a 67 percent degradation.
As IT managers turn to multiple bus masters on networked systems, the ability to access multiple channels of memory will prove an asset in the corporate environment, he said.
Also, as Internet communications increase, "sharing packets and accelerating transactions are areas where the Rambus technology excels," Bell added.
Will those kinds of performance improvements entice companies to buy Rambus-based systems? Pat Gelsinger, Intel vice president for desktop platforms, said "we are trying to deliver technology that will give them some headroom for the next few years."