Networking Memories--Packet Processing, Page 2.
If a banked-architecture, DRAM-based device were used, operation would be severely constrained by the random access time of the device. One work-around is to maintain two copies of each state entry, a technique referred to as load balancing: the entry is read from one copy and the updated value is written back to the other, and on the next access the sequence is reversed.
If the resulting memory access rate is still insufficient, the memory can be segmented further with additional copies, so that any lookup and write-back can be serviced from an available memory location. Each copy of a table reduces the available capacity accordingly, and in practical implementations this strategy yields diminishing performance returns for each subsequent copy, as shown in Table 6.
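The alternating read/write scheme can be sketched in software. The following Python model is purely illustrative; the class name, the per-entry "active copy" bookkeeping, and the update interface are assumptions made for the sketch, not part of any actual device:

```python
class PingPongTable:
    """Two-copy 'load balancing' sketch: each update reads the current
    copy of a state entry and writes the result to the other copy, so a
    read and a write never target the same memory bank in one pass."""

    def __init__(self, size):
        self.banks = [[0] * size, [0] * size]  # two full copies of the table
        self.active = [0] * size               # which copy holds the valid entry

    def update(self, index, fn):
        src = self.active[index]   # copy holding the current value (read side)
        dst = 1 - src              # the other copy absorbs the write
        self.banks[dst][index] = fn(self.banks[src][index])
        self.active[index] = dst   # next access reverses the sequence
        return self.banks[dst][index]

# usage: maintain a per-flow packet counter
t = PingPongTable(size=4)
t.update(2, lambda v: v + 1)  # read copy 0, write copy 1
t.update(2, lambda v: v + 1)  # read copy 1, write copy 0
```

Note that the two-copy scheme halves the usable capacity, and each additional copy in the multi-copy generalization reduces it further, which is exactly the capacity-for-access-rate trade-off described above.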
In this application the Bandwidth Engine would utilize one write port and one read port, enabling simultaneous read/write operation to any random address within the memory partition while maintaining data coherency under all access patterns. Other applications within the packet header processing category include Queuing/Scheduling, Statistics/Counting, Metering/Policing, and Traffic Management, all of which benefit from the high access rate and deterministic latency of a QDR SRAM or Bandwidth Engine device. One area somewhat orthogonal to header processing is Deep Packet Inspection (DPI), a memory-intensive, recursive operation that examines the payload of a packet rather than the header.
Deep Packet Inspection (and subsequent filtering) enables advanced network management, user service, and security functions. In practice, DPI cannot be performed on 100% of network traffic at the most advanced packet processing rates; it will instead be deployed at points in the network where inspection is logistically feasible. Nevertheless, DPI benefits from the higher access rates and lower latencies of the more advanced memory options.
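The per-byte lookup behavior that makes DPI so memory-intensive can be illustrated with a minimal signature matcher. The sketch below builds a KMP-style state machine for a single, non-empty pattern; every payload symbol then costs one transition-table lookup, which in hardware would be one memory access. The function names and single-pattern scope are illustrative assumptions (real DPI engines match thousands of signatures simultaneously):

```python
def build_dfa(pattern, alphabet):
    """Build a KMP-style DFA for one non-empty pattern.
    State j means 'j characters matched so far'; any transition
    not stored in the table defaults back to state 0."""
    m = len(pattern)
    dfa = [{} for _ in range(m)]
    dfa[0][pattern[0]] = 1
    x = 0  # restart state: longest proper prefix that is also a suffix
    for j in range(1, m):
        for c in alphabet:
            dfa[j][c] = dfa[x].get(c, 0)  # mismatch transitions
        dfa[j][pattern[j]] = j + 1        # match transition
        x = dfa[x].get(pattern[j], 0)
    return dfa

def scan(payload, pattern):
    """Return the offset of the first match of pattern in payload, or -1.
    Each payload symbol costs exactly one transition-table lookup,
    the access pattern that dominates DPI memory bandwidth."""
    dfa = build_dfa(pattern, set(payload) | set(pattern))
    state = 0
    for i, c in enumerate(payload):
        state = dfa[state].get(c, 0)      # one lookup per payload byte
        if state == len(pattern):
            return i - len(pattern) + 1
    return -1
```

Because the state machine is consulted once per payload byte, the scan rate is bounded by the memory's random access rate, which is why the higher access rates and lower latencies discussed above translate directly into faster inspection.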
Networking performance requirements continue to increase faster than traditional semiconductor memories can advance. Keeping pace has required, and will continue to require, continuous innovation.
Specialty networking memories, derived from their commodity parents, emerged in the last decade but have reached the limit of their capabilities for next-generation platforms.
Moving beyond these limitations requires rethinking memory array architecture and performance, transitioning the I/O from parallel to serial, and considering purpose-built optimizations that offload the host and enhance system-level performance.
The MoSys Bandwidth Engine family of products delivers on these requirements, resulting in the highest-performance single-chip networking memory solution available today.
About the Author
Michael Sporer brings over 20 years of marketing, sales, and engineering experience to MoSys. Prior to joining MoSys, he was a Technology Strategist for Micron Technology, an industry leader in semiconductor memory products. Previously, he was Director of Technical Marketing at LG Semicon and worked at Hewlett Packard in the Memory Technology Center. Mr. Sporer holds a Master of Science in Engineering from Stanford University and a Bachelor's degree from the University of Michigan.