While this example uses an XOR function, there is a general class of erasure codes that can be used to allow multiple data reads. An erasure code encodes a set of N data bits into a larger set of N + X bits, allowing a system to recover any subset of the N data bits that becomes unavailable, usually as a result of being lost or corrupted. Such erasure codes are commonly used to encode data for transmission across unreliable channels.
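To make the idea concrete, here is a minimal sketch (a hypothetical illustration, not any vendor's implementation) of the simplest erasure code of this kind: N data words are extended with one XOR parity word, and any single lost word can be rebuilt from the survivors.

```python
# Minimal single-erasure code sketch (illustrative only): encode N data
# words into N + 1 words by appending an XOR parity word. Any one
# missing word can then be rebuilt by XOR-ing the remaining N words.
from functools import reduce

def encode(data):
    """Return the data plus one XOR parity word (an N+1-word codeword)."""
    parity = reduce(lambda a, b: a ^ b, data, 0)
    return data + [parity]

def recover(codeword, lost_index):
    """Rebuild the word at lost_index by XOR-ing all the other words."""
    return reduce(lambda a, b: a ^ b,
                  (w for i, w in enumerate(codeword) if i != lost_index), 0)

codeword = encode([0b1010, 0b0110, 0b1100])
assert recover(codeword, 1) == 0b0110  # the lost word is reconstructed
```

This is the degenerate one-erasure case; the codes discussed below generalize it to recover multiple unavailable subsets.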
In algorithmic memory, the data bits stored in the memory banks can be viewed as the original N data bits, and the X additional bits stored in the extra memory bank(s) as the erasure-coded redundancy. When a subset of the N data bits becomes unavailable (e.g., the memory bank containing that subset is occupied by a simultaneous read operation), the subset can be reconstructed from the X bits in the extra memory bank together with the remaining data bits of the set of N. In this manner, any erasure coding system that allows full reconstruction of the data bits in an unavailable memory bank (one blocked because of another access to that bank) may be used to encode the data in the extra memory bank.
The encoding system for a particular application should use minimal resources and guarantee that data can be recovered within a prescribed maximum time. In any case, it must offer extremely fast decoding, so that all memory operations complete with zero added clock cycles of latency. Several different encoding systems may be suitable, depending on the desired properties. Examples of erasure coding systems applicable to algorithmic memories include Reed-Solomon codes and other maximum distance separable (MDS) codes, typically constructed over Galois fields. In practice, these codes are modified, and both the data and the encoded values are arranged to fit the needs of the specific algorithmic memory implementation. Some codes require more layout area but produce results faster. Coding systems that cannot guarantee exact data recovery, or that take too long to return a result, are not used.
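For readers unfamiliar with the Galois-field arithmetic underlying Reed-Solomon codes, the sketch below shows multiplication in GF(2^8) with the reducing polynomial x^8 + x^4 + x^3 + x + 1 (0x11b, the polynomial used in AES and many Reed-Solomon implementations). This is background illustration only; a hardware decoder meeting the zero-latency requirement would typically use lookup tables or dedicated logic rather than an iterative loop.

```python
# Illustrative GF(2^8) multiply ("Russian peasant" method) with the
# reducing polynomial x^8 + x^4 + x^3 + x + 1 (0x11b). Reed-Solomon
# encoding and decoding reduce to additions (XOR) and multiplications
# like this one over the field.
def gf256_mul(a, b):
    result = 0
    while b:
        if b & 1:
            result ^= a        # field addition is XOR
        a <<= 1
        if a & 0x100:          # reduce modulo the field polynomial
            a ^= 0x11b
        b >>= 1
    return result

assert gf256_mul(0x57, 0x83) == 0xC1  # worked example from the AES spec
```

Because field addition is just XOR, the XOR-parity scheme above is the special case of these codes over GF(2).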
Algorithmic memories employ more than a dozen techniques to intelligently handle read and write memory accesses with guaranteed performance. Some of these techniques are well known in the industry. The greatest benefits of algorithmic memory come not from the individual algorithms, however, but rather from how they are integrated into a system (see figure 3). In these systems, the memories not only perform better, but their performance is fully deterministic. Not only can new memories be created very rapidly, but they are also automatically formally verified.
Figure 3: Physical one-port memory can be used to build any multiport functionality.
Algorithmic memory gives memory architects a powerful tool to create the exact memories they need for a given application rapidly and reliably. While not a panacea, it empowers us with new techniques to overcome the processor-memory gap and further unlock SoC performance.
About the author:
Sundar Iyer is co-founder and CTO at Memoir Systems, a start-up specializing in Semiconductor Intellectual Property (SIP) for algorithmic memories. Previously, Iyer was CTO and co-founder of Nemo ("Network Memory") Systems, acquired by Cisco Systems in 2005. Iyer was a founding member at SwitchOn Networks (acquired by PMC-Sierra in 2000), where he developed algorithms for associative memory and deep packet classification. In 2008, Iyer was awarded the MIT Technology Review TR35 young innovator award for his work on network memory. He received his Ph.D. in Computer Science from Stanford University in 2008. Sundar can be reached at firstname.lastname@example.org