The IC world (especially with memory) is in a way like a dog chasing its tail. Clever new architectures and enhancements deliver higher achievable performance. The availability of higher-performance parts opens the door to new applications that just weren't feasible with lower-performance devices. These new applications then demand still more performance, which drives the development of new and improved technology.
Certain target applications drive the development and deployment of innovative architectures. PCs were a big incentive to develop and refine high-speed L2 and L3 cache architectures and graphics RAM, and they also provided a good platform for the development and growth of technologies like pipeline-burst cache, EDO, and synchronous SDRAM. Telecom and networking have pushed double and quad data rate devices to the forefront.
IC manufacturers have done a good job integrating memory onto FPGAs and ASICs, but often far larger amounts are needed to provide that performance edge or capability. That's where bolt-on memory kicks in.
It is routine now for microprocessors, and other advanced chips like network processors, routers, search engines, and switches, to have a variety of synchronous memory interfaces. The chip manufacturers pick the interfaces included on their chips, which forces us to use a specific type of memory. That's not so bad if they have done their homework and their choice doesn't limit the functional performance.
But FPGA and ASIC designers want bolt-on memory solutions that push the technological envelope, too. We want to choose what type of memory we bolt on, how much of it, how fast, and how we use it.
Unfortunately, we need to focus on what we do best. That is why we chose an ASIC or FPGA in the first place. We feel we have a better mousetrap, and we want the tools and resources to build it.
That's where IP cores come in. Not everyone can be an expert on everything. Not every company has expertise in every area on staff. And not many memory interface designers are out and about for us to question.
This month I want to alert you to two key steps forward in our bag of tricks: new, advanced, high-performance memory interface cores for FPGAs and ASICs. I expect to see this trend continue.
One of this month's newsworthy items is from TriCN, a leading provider of intellectual property (IP) for high-speed semiconductor interface technology. TriCN has announced a new Reduced Latency DRAM (RLDRAM) II interface. This isn't the company's first, but it is the latest member of its Interface Specific I/O (ISI/O) family.
I like how the interface is capable of operation at up to 800 Mbps. While most won't need that much horsepower, some will. Others will grow into it.
The RLDRAM II interface provides the benefit of fairly fast random read/write cycle times. They're not as fast as a true asynchronous RAM in random mode, but faster, and with less penalty, than QDR and other non-interruptible block-transfer standards.
Where it shines is when it's blasting data in synchronous modes. High-speed operation uses a double-data-rate I/O approach to double bandwidth. The clocking and I/O structures are in place as well, simplifying both the external interface and the interface to custom logic. It is also backward compatible with the company's original RLDRAM interface.
The 800-Mbps RLDRAM core implements High-Speed Transceiver Logic (HSTL)-18 I/O and is presently available in the TSMC 0.13-µm process.
With similar philosophy and intent, a collaboration between Altera and Micron has resulted in the newly announced 'industry first' DDR400 SDRAM DIMM interface for FPGAs. It targets Altera's high-performance Stratix and Stratix GX FPGAs along with Micron's DDR400 SDRAM DIMMs.
According to the companies, the 400-Mbps interface speed enables the implementation of memory interfaces with bandwidths up to 25.6 Gbps. That's a 64-bit bus running at 400 Mbps per pin.
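The arithmetic behind that figure is easy to check. Here's a quick sketch; the 64-bit width and 400-Mbps-per-pin rate come from the announcement, while the helper name is mine:

```python
# Peak bandwidth of a parallel memory interface:
# bits transferred per second = bus width * per-pin data rate.
def peak_bandwidth_gbps(bus_width_bits: int, rate_mbps_per_pin: int) -> float:
    """Peak transfer rate in Gbps for a parallel memory bus."""
    return bus_width_bits * rate_mbps_per_pin / 1000.0

# A 64-bit DIMM bus at 400 Mbps per pin (DDR400: 200 MHz clock,
# data captured on both edges) gives the quoted 25.6 Gbps.
print(peak_bandwidth_gbps(64, 400))  # → 25.6
```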
Key to this implementation is the built-in DQS phase-shift circuitry found only in Altera's parts. The DQS phase-shift circuitry in Stratix and Stratix GX devices automatically shifts the DQS strobe 90 degrees, which the companies call the optimal phase-shift solution. This helps compensate for varying signal trace lengths and board tweaks to line everything up.
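A 90-degree shift is simply a quarter of the clock period. A back-of-the-envelope sketch, assuming DDR400's nominal 200 MHz clock (the function name is mine):

```python
def phase_shift_ns(clock_mhz: float, degrees: float = 90.0) -> float:
    """Delay in nanoseconds corresponding to a phase shift of the
    given number of degrees at the given clock frequency."""
    period_ns = 1000.0 / clock_mhz  # one clock period in nanoseconds
    return period_ns * degrees / 360.0

# DDR400 runs a 200 MHz clock (5 ns period), so a 90-degree DQS
# shift delays the strobe by a quarter period, 1.25 ns, placing its
# edge near the center of each data eye.
print(phase_shift_ns(200.0))  # → 1.25
```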
Other factors that make this marriage work are six registers in each I/O element for high-speed DDR signaling, SSTL I/O standard support for interfacing with DDR SDRAMs, feature-rich PLLs for efficient clock management, and a robust clock distribution network for minimizing skew between clock and data channels.
Very important in letting us build that better mousetrap are the evaluation and design aids. A PCI Development Kit (Stratix Edition), which comes with a DDR400 SODIMM interface, and the Stratix GX Development Kit, which comes with a DDR400 DIMM, can be linked to let us verify and modify.
Denali's Memory Modeler Advanced Verification (MMAV) tools were used in this core's development and verification, and Altera provides an evaluation version of Denali's MMAV models free of charge. SPICE and IBIS simulation models are also readily available, as are 'extensive' technical documentation, characterization reports, software and tool support, development boards, and intellectual property (IP) cores.
As building blocks like these get better, more advanced applications we haven't even conceived of yet will emerge, taking advantage of the highest-performance components we have so far. As these new applications are refined and take on more features, they will need yet more performance from the memory manufacturers. Wuff.