The choice of the right synchronous static random access memory (SRAM) is crucial for networking applications that have increased bandwidth requirements for better system performance. System designers need to be aware of the features and advantages of the different synchronous SRAM technologies to make the right memory selection for their application.
Some of the critical factors that determine the right synchronous SRAM choice are density, latency, speed, read/write ratio, and power. By understanding how these factors impact performance, reliability, and cost, designers can select the optimal synchronous SRAM for their application.
Synchronous SRAM is available in a variety of forms with different performance characteristics and benefits (see figure 1). Standard synchronous SRAM is commonly used in industrial electronics, instrumentation, and military applications. The devices are often used as data buffers (temporary storage) and can be accessed randomly through their high-speed, single data rate (SDR) interfaces. Standard synchronous burst SRAMs are ideal for workloads dominated by either read or write operations. Customers have a choice between flow-through (FT) and pipelined (PL) architectures with user-selectable linear and interleaved burst modes, as well as single-cycle deselect (SCD) and double-cycle deselect (DCD) options.
Figure 1: Synchronous SRAM flavors where Std.Sync is standard synchronous, FT is flow through, PL is pipelined, SCD is single-cycle deselect, DCD is double-cycle deselect, QDR is quad data rate, and DDR is double data rate.
Flow-through SRAM devices have a register on the input. On the rising edge of the clock, the address and control signals for the device are captured. During a read operation, the requested data is allowed to flow through to the output of the device and is thus available in the first cycle. During a write operation, the address, control, and data are all captured on the same rising edge of the clock.
A pipelined SRAM differs from a flow-through SRAM in that it has a register on the output of the device as well as one on the input. During a read operation, the data flows through to the output register, and on the next clock cycle it is latched out of the device. As a result, read data from a pipelined SRAM emerges one cycle later than from a flow-through device. Because the access is broken up over several clock cycles, however, a pipelined device is able to run at a higher frequency than a flow-through device. Write operations for a pipelined device are the same as for a flow-through device.
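The latency difference described above can be sketched with a simple cycle-accurate model in Python. This is an illustrative model only; the class and method names are invented for this sketch and do not come from any vendor datasheet. Flow-through returns read data in the capturing cycle, while the pipelined device's extra output register delays the data by one cycle.

```python
class FlowThroughSRAM:
    """Input register only: read data is available in the same cycle
    the address is captured (illustrative model)."""
    def __init__(self):
        self.mem = {}

    def clock(self, addr, write=False, data=None):
        # Address, control, and (for writes) data are captured on the
        # rising edge modeled by this call.
        if write:
            self.mem[addr] = data
            return None
        # Read data flows through to the output in the first cycle.
        return self.mem.get(addr)


class PipelinedSRAM:
    """Input and output registers: read data is latched out one clock
    cycle after the address is captured (illustrative model)."""
    def __init__(self):
        self.mem = {}
        self.out_reg = None  # output register adds one cycle of latency

    def clock(self, addr, write=False, data=None):
        result = self.out_reg          # data staged on the previous cycle
        if write:
            self.mem[addr] = data
            self.out_reg = None
        else:
            # Stage the read into the output register; it appears on
            # the next rising edge.
            self.out_reg = self.mem.get(addr)
        return result
```

Clocking a write followed by a read shows the one-cycle difference: the flow-through model returns the data on the read cycle itself, while the pipelined model returns it on the cycle after.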
Pipelined SRAMs are offered with either SCD or DCD. This option determines how long it takes for the data bus to go into tristate once the device is deselected, where tristate means the bus drivers are set to their high-impedance state.
- SCD: The I/O bus is tristated one clock cycle after the chip is deselected.
- DCD: The I/O bus is tristated two clock cycles after the chip is deselected.
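The two deselect options can be captured in a short timing sketch. This is a hypothetical helper written for illustration (the function name and the "scd"/"dcd" labels are assumptions of this sketch, not datasheet terminology); it reports the bus state on each clock cycle given the cycle on which the chip is deselected.

```python
def bus_states(deselect_cycle, total_cycles, mode):
    """Per-cycle bus state for an illustrative deselect timing model:
    'driven' until the deselect latency elapses, then 'Z' (high
    impedance). mode is 'scd' (1-cycle latency) or 'dcd' (2-cycle)."""
    latency = {"scd": 1, "dcd": 2}[mode]
    return ["Z" if cycle >= deselect_cycle + latency else "driven"
            for cycle in range(total_cycles)]
```

For a deselect on cycle 2, an SCD device releases the bus on cycle 3, while a DCD device keeps driving one cycle longer and releases it on cycle 4.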
To summarize, pipelined SRAMs operate at a higher frequency than flow-through SRAMs. If initial latency is a critical requirement for the application, then a flow-through device is more suitable. If speed is critical, then a pipelined device is more suitable.
If your system has a read/write ratio of one, then standard synchronous flow-through and pipelined SRAMs are not suitable. A no-bus-latency (NoBL) SRAM represents a better choice.
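A rough way to see why a 1:1 read/write mix hurts a standard synchronous SRAM is to model bus turnaround: each read-to-write or write-to-read transition costs dead cycles on a standard device, while a NoBL SRAM eliminates them. The function below is an illustrative model, not a datasheet calculation, and the turnaround penalty is an assumed parameter since the real value is device-specific.

```python
def effective_bandwidth(ops, turnaround_cycles):
    """Fraction of bus cycles doing useful work for a sequence of
    'R'/'W' operations, where every read<->write transition wastes
    turnaround_cycles dead cycles (illustrative model)."""
    cycles = len(ops)  # one cycle per operation
    for prev, cur in zip(ops, ops[1:]):
        if prev != cur:
            cycles += turnaround_cycles  # bus turnaround penalty
    return len(ops) / cycles
```

With a fully alternating stream like "RWRW" and a one-cycle turnaround penalty, a standard synchronous SRAM achieves only about 57% bus utilization, while a zero-turnaround (NoBL-style) device sustains 100%.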