While NVM has found its way into the secondary storage hierarchy, I was always uncomfortable when folks said that traditionally architected NVM would displace rotational media. Packing bits back to back, versus requiring transistors for storage and decoding, will always have the potential for higher density. As such, I used to flippantly assert "rotational media always wins." For a while I was worried as NAND densities increased, and then I saw the Racetrack Memory announcement. Whew! A new type of rotational media. What else would you expect from the inventors of the HDD?
Max might say, "That's a bit too close to home." In his presence, bacon-based memory is extremely volatile, to be sure! No analog meats (or meat analogs) for him. Digital meat could be a different story.
I suspect that Max is trying to develop memories with a new kind of bit -- bacon bits -- which would allow him to snack on unused memory. :) "Uh, yeah, that was a 1Meg memory, but actually only 732K is available. A few of the nibbles are dedicated."
I remember reading that bubble memory had fast seek times (no rotational delay or head-seek time), but its data transfer rate was comparable to the hard drives of the time.
I still get a laugh remembering a friend confidently predicting that bubble memory was going to replace main memory back in the '80s.
I got a bigger laugh when a buzzword-spouting VP of Technology toured our programming offices in the early '90s making the same claim, but with Flash memory. I guess he thought "flash" meant fast.
As I recall, to get this arrangement there had to be three transistors in the cell: one to handle the read MTJ, one for the write head, and finally one (I/O grade) to push the track along. So it's a huge cell.
Looks more like a DRAM in the sense that reading is destructive and you need to rewrite after a read. But because the read is serial, you effectively need to shift the whole memory out to get to the one word you actually want, and rotate the read values back in at the top again. This could lead to very long access times unless each line is very short (e.g., a few bits long), in which case you need an awful lot of them, which might cost a lot of power.
If the read/write circuitry is large, then it would not be possible to have that many copies, which would force you into long access times, and so on.
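To make the tradeoff concrete, here's a toy Python sketch of the destructive serial-read scheme described above. All the names and costs are my own illustrative assumptions (one port per track, one bit shifted per cycle, a read costing a full rotation because every bit must circulate back in at the top), not anything from an actual racetrack design.

```python
# Toy model: destructive serial read on a shift-register-like track.
# Assumptions (mine): one read/write port per track, one bit shifted
# per cycle, and a read destroys bits, so the whole track must rotate
# past the head and be rewritten at the top regardless of position.

def read_word(track, start, width):
    """Capture `width` bits beginning at `start`. Cost is one full
    rotation of the track, since every bit is read out and rewritten."""
    n = len(track)
    out = [track[(start + i) % n] for i in range(width)]
    cycles = n  # full rotation: destructive read + rewrite of all bits
    return out, cycles

# The tradeoff the comment points at: for a fixed capacity, shorter
# tracks mean faster access but many more tracks, each needing its
# own read/write circuitry (and hence area/power).
capacity = 1 << 20  # 1 Mbit total, chosen arbitrarily
for track_len in (64, 1024, 16384):
    tracks = capacity // track_len
    print(f"track length {track_len:6d} bits -> {tracks:6d} ports, "
          f"worst-case access {track_len} shift cycles")
```

With 64-bit tracks you'd need 16,384 ports for a single megabit, which is exactly the "awful lot of them" power/area concern above.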
BUT if the racetrack is really a loop (e.g., two lanes, one running left to right, the next running right to left), then you might be able to shift it all the way around without rewriting; you just need to keep track of how many rotations are needed to reach the value you want. But this is still likely to lead to long access times and a lot of localized caching, i.e., you read out a very wide word and hope you can work within it for a long time before fetching the next, and also hope that the next wanted value is adjacent.
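The loop variant above can be sketched the same way. This is a hypothetical model of my own (the class name, interface, and cost accounting are made up for illustration): reads are non-destructive, the controller just tracks a logical offset, and the cost of an access is the number of shifts from wherever the loop currently sits, so adjacent accesses are cheap.

```python
# Sketch of the "loop" idea: a closed track where reading advances a
# logical offset instead of rewriting anything. Illustrative only.

class LoopTrack:
    def __init__(self, bits):
        self.bits = list(bits)
        self.offset = 0  # how far the loop has been rotated so far

    def read(self, pos, width):
        """Rotate the loop until `pos` is under the head, then read
        `width` bits non-destructively. Cost = shifts needed from the
        current offset, so sequential/adjacent reads are cheap."""
        n = len(self.bits)
        shifts = (pos - self.offset) % n  # only shift the difference
        self.offset = pos % n
        word = [self.bits[(pos + i) % n] for i in range(width)]
        return word, shifts

t = LoopTrack([1, 0, 1, 1, 0, 0, 1, 0])
w1, c1 = t.read(2, 2)  # cold access: 2 shifts to bring pos 2 under head
w2, c2 = t.read(4, 2)  # adjacent access: only 2 further shifts
```

Note that a far-away or backwards access still costs up to a full loop of shifts, which is why the comment expects wide reads plus local caching to hide the latency.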
Random access is not going to be good, but since this is a speculated model for replacing disks rather than RAM, the serial access model is actually what we expect.
So does this yield a smaller magnetic domain than you get with a conventional hard drive? Since it will need optical patterning, that seems unlikely; bit sizes will be comparable to FLASH (i.e., a few line widths). So it may be a competitor for FLASH in SSDs if the access time, power consumption, etc., pan out, the gamble being that the parts of the FLASH architecture that enable random access are unnecessary overhead for the SSD application.
If you can get away from optically patterning the memory wires, however, then the tradeoff changes radically. If self-assembly can produce the separate wires and the notch pattern, then the domains could potentially shrink to much smaller than a FLASH memory bit, and that is more credible than self-assembling the relatively complex internals of a FLASH cell.