A memory company says it can make EDA applications run faster. That is the claim in a press release and white paper I was sent the other day, so I wanted to find out what it was all about. It opens with the familiar facts: processor speeds have been growing at 60% per year, with hard-drive capacities keeping pace, while memory performance grows at only 9% per year — and the number of cores per processor keeps climbing. As a reference point it cites DDR3, which it says can transfer up to 10.6 GB of data per second.
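That 10.6 GB/s figure is easy to sanity-check: a standard DDR3 channel is 64 bits (8 bytes) wide, so peak bandwidth is simply the transfer rate times eight. A quick check (the bus width is standard DDR3, not something stated in the white paper):

```python
# Sanity check on the quoted DDR3 figure: a 64-bit (8-byte) channel at
# 1333 MT/s moves 1333e6 * 8 bytes per second -- the "10.6 GB/s" of a
# PC3-10600 module.
transfers_per_second = 1333 * 10**6   # DDR3-1333: 1333 mega-transfers/s
bytes_per_transfer = 8                # 64-bit data bus
peak_bw_gb_s = transfers_per_second * bytes_per_transfer / 10**9
print(f"DDR3-1333 peak bandwidth: {peak_bw_gb_s:.2f} GB/s")
# -> DDR3-1333 peak bandwidth: 10.66 GB/s
```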
So what are these new memories? In their words: HyperCloud (HCDIMM) and Load Reduced (LRDIMM) DIMMs take the [memory buffering] register to another level, buffering all signals (control, address, and data). This reduces the electrical loads on the CPU bus, allowing higher speeds. LRDIMMs have one large chip that buffers all signals, whereas HCDIMMs distribute the task between a main controller that handles the control and address signals and separate, smaller buffers for the data lines.
The paper then goes on to talk about memory configurations and the restrictions imposed by a typical motherboard equipped with an Intel Sandy Bridge processor. Netlist says it has developed a new memory technology, known as HyperCloud memory, that utilizes an ASIC chipset to perform rank multiplication and load reduction.
Rank Multiplication - Rank multiplication increases memory capacity in servers. It enables four physical ranks to be presented to the CPU as two virtual ranks. With rank multiplication, three two-virtual-rank DIMMs can be populated per channel, enabling the third slot on the memory channel to be filled with four-rank memory.
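A hypothetical sketch of how such a 4-to-2 mapping could work (the function name and the use of a spare address bit are illustrative assumptions, not Netlist's disclosed design): the buffer exposes two chip selects to the CPU and steers each access to one of two physical ranks behind each virtual rank.

```python
# Illustrative sketch of rank multiplication (not Netlist's actual logic):
# the ASIC presents 2 virtual ranks to the CPU and uses one extra address
# bit to choose between the 2 physical ranks hidden behind each virtual
# rank, so 2 virtual x 2 = 4 physical ranks are addressable.

def physical_rank(virtual_rank: int, extra_addr_bit: int) -> int:
    """Map (virtual rank, spare address bit) -> one of 4 physical ranks."""
    assert virtual_rank in (0, 1) and extra_addr_bit in (0, 1)
    return (virtual_rank << 1) | extra_addr_bit

# All four physical ranks are reachable through just two chip selects:
mapping = {(v, a): physical_rank(v, a) for v in (0, 1) for a in (0, 1)}
print(mapping)  # -> {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}
```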
Load Reduction - Load reduction increases memory bandwidth in servers. It "cleans up" distortions in the digital signal by reducing channel loading, thereby allowing the CPU to maintain the 1333 MT/s speed. Three DIMMs can be populated in a channel while each channel maintains 1333 MT/s. Now 384 GB can be loaded into a dual-socket server running at 1333 MT/s.
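A little back-of-envelope arithmetic shows where the headline numbers come from. Peak bandwidth scales linearly with transfer rate, and the 384 GB capacity figure works out if one assumes 16 GB DIMMs and four channels per socket (both assumptions on my part — the paper does not spell out the configuration):

```python
# Bandwidth scales linearly with transfer rate, so holding 1333 MT/s with
# three DIMMs per channel, instead of dropping to 1066 MT/s, is ~25% more
# peak headroom.
def peak_gb_s(mts: int) -> float:
    return mts * 8 / 1000  # 64-bit (8-byte) channel

gain = peak_gb_s(1333) / peak_gb_s(1066) - 1
print(f"{peak_gb_s(1333):.2f} vs {peak_gb_s(1066):.2f} GB/s -> +{gain:.0%}")

# The 384 GB figure fits a dual-socket Sandy Bridge box with 4 channels
# per socket, 3 DIMMs per channel, 16 GB per DIMM (assumed, not stated):
capacity_gb = 2 * 4 * 3 * 16  # sockets * channels * DIMMs/chan * GB/DIMM
print(capacity_gb)  # -> 384
```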
They observed the effect of the caches by varying the block size of the memory tests (shown on the X-axis, in KB/block). This particular CPU has three levels of on-chip cache, visible as performance drops near 32 KB, 256 KB, and 4 MB. In addition, the interaction between multiple cores can be seen, as the relative speeds converge when accessing main memory. The results are shown below.
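For the curious, here is a minimal sketch of the kind of block-size sweep described above (my own toy version, not the benchmark from the white paper — and in an interpreted language the interpreter overhead may mask the cache steps that a tuned C benchmark would show):

```python
import time

# Walk a buffer of a given size repeatedly and time it. When the working
# set outgrows a cache level (e.g. 32 KB L1, 256 KB L2, a few MB L3),
# the effective rate drops.
def sweep(sizes_kb, total_bytes=1 << 24):
    results = {}
    for kb in sizes_kb:
        n = kb * 1024
        buf = bytearray(n)
        reps = max(1, total_bytes // n)
        t0 = time.perf_counter()
        for _ in range(reps):
            for i in range(0, n, 64):  # touch one byte per 64 B cache line
                buf[i] = 1
        dt = time.perf_counter() - t0
        results[kb] = reps * n / dt / 1e6  # MB of footprint covered per s
    return results

for kb, rate in sweep([16, 64, 1024, 8192]).items():
    print(f"{kb:>5} KB block: {rate:10.0f} MB/s")
```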
The bottom-line claim is that HCDIMM memories can continue performing at 1333 MT/s where other memories would be limited to 1066 MT/s. This can create up to a 25% difference in speed, although they state that most servers are likely to see a 15% difference. They identify block-level (physical) verification and implementation as the most memory-hungry EDA tasks, which they say will see an 8% increase in overall performance.
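How does a 15% memory speedup become an 8% overall gain? Amdahl's law is the natural reading (my reconstruction, not a calculation from the paper): the overall speedup depends on what fraction of the runtime is actually memory-bound, and an 8% overall gain from a 15% memory gain implies those tasks are roughly 55-60% memory-bound.

```python
# Amdahl's law: if a fraction f of runtime is memory-bound and memory
# gets 15% faster, overall speedup = 1 / ((1 - f) + f / 1.15).
def overall_speedup(f_mem: float, mem_speedup: float) -> float:
    return 1.0 / ((1.0 - f_mem) + f_mem / mem_speedup)

for f in (0.4, 0.6, 0.8):
    print(f"memory-bound fraction {f:.0%}: "
          f"overall gain +{overall_speedup(f, 1.15) - 1:.1%}")
# f = 60% gives about +8.5%, close to the claimed 8%.
```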
So how much would you pay for an 8% performance increase in your memory constrained EDA runs? Netlist hopes the equation is enticing enough.
Additional information on Netlist's HyperCloud technology can be found here.

Brian Bailey
– keeping you covered
If you found this article to be of interest, visit EDA Designline, where you will find the latest and greatest design, technology, product, and news articles with regard to all aspects of Electronic Design Automation (EDA).