"This sounds like the magnetic bubble memory of the 70's. A good idea that never scaled out to be useful." Jack Peacock
I was thinking the same thing until I spoke with IBM, who claims that Racetrack Memory is still an active project, with papers published yearly on the progress being made. Of course, only time will tell :)
The similarities are far too strong to ignore. Although the details are quite sparse, I would hazard a guess that these would have to share some of the properties that doomed bubble memory (which I actually played with quite a bit back in the '70s). They include: a serial nature, which then leads to access time issues, and an apparently destructive bit read, necessitating a read-modify-write on EACH BIT during readout.
Looks more like a DRAM in the sense that reading is destructive and you need to rewrite after read, but because read is serial you effectively need to shift the whole memory out to get to the one word you actually want and rotate the read values back in at the top again. This might lead to very long access times unless each line is very short (eg a few bits long) in which case you need an awful lot of them which might cost a lot of power.
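The destructive serial readout described above can be sketched as a toy model (my own illustration of the scheme as I understand it, not IBM's actual design): every bit must pass the read head to reach the one word you want, and each bit read must be rewritten back in at the top.

```python
# Toy model of a destructive serial-read line: to fetch one word,
# the whole line shifts past the head, and every bit read is
# immediately rewritten (read-modify-write on each bit).

def read_word(line, word_start, word_len):
    """Return (wanted word, restored line, shift count)."""
    out = []
    rewritten = []
    shifts = 0
    for i, bit in enumerate(line):   # serial: every bit passes the head
        shifts += 1
        if word_start <= i < word_start + word_len:
            out.append(bit)          # the word we actually wanted
        rewritten.append(bit)        # destructive read -> rewrite at the top
    return out, rewritten, shifts

line = [0, 1, 1, 0, 1, 0, 0, 1]
word, restored, cost = read_word(line, 2, 4)
# word == [1, 0, 1, 0]; restored == line; cost == 8 (whole line shifted)
```

The point of the model is the cost term: fetching any word costs shifts proportional to the full line length, which is why short lines (and therefore many of them) would be needed to keep access times down.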
If the read/write circuitry is large then it would not be possible to have that many copies which would force you into having long access times, etc.
BUT if the race track is really a loop, eg two lanes, one running left to right and the next running right to left, then you might be able to shift it all around without rewriting; you just need to keep track of how many times you need to rotate to reach the value you want. But this is still likely to lead to long access times and a lot of localised caching, ie you read out a very wide word and hope that you can work in that for a long time before fetching the next, and also hope that the next wanted value is adjacent.
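The loop idea above can also be sketched as a toy model (again my own illustration, not a real device interface): the data is never destroyed, but the controller must track a logical offset and pay one shift pulse per position rotated, so adjacent accesses are cheap and far-apart accesses are expensive.

```python
# Toy model of a circular "racetrack": reading rotates the loop under
# the head instead of destroying data; a logical offset records how far
# the track has been rotated, and shift pulses are the cost proxy.

class CircularTrack:
    def __init__(self, bits):
        self.bits = list(bits)   # physical order of domains on the loop
        self.offset = 0          # how far the loop has been rotated
        self.shifts = 0          # total shift pulses issued

    def read(self, i):
        """Rotate until logical bit i sits under the head (slot 0)."""
        n = len(self.bits)
        target = (i - self.offset) % n   # physical slot holding logical bit i
        self.bits = self.bits[target:] + self.bits[:target]  # rotate loop
        self.offset = (self.offset + target) % n
        self.shifts += target            # serial access cost
        return self.bits[0]

t = CircularTrack([1, 0, 1, 1, 0, 0, 1, 0])
assert t.read(5) == 0   # 5 shifts to reach logical bit 5
assert t.read(6) == 1   # adjacent bit: only 1 more shift
```

This is exactly the access pattern the comment predicts: sequential reads cost one shift each, while a random jump averages half the track length, which is why wide words and locality-friendly caching would matter so much.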
Random access is not going to be good - but as this is a speculated model for replacing disks rather than RAM the serial access model is actually what we expect.
So does this yield a smaller magnetic domain than you get with a conventional hard drive? As it will need optical patterning that seems unlikely - bit sizes will be comparable to FLASH (ie a few line widths). So it may be a competitor for using FLASH in SSD if the access time, power consumption etc pan out, the gamble being that the bits of the FLASH architecture that enable random access are unnecessary overhead for the SSD application.
If you can get away from optically patterning the memory wires, however, then the tradeoff changes radically. If by self-assembly you can get the separate wires and the notch pattern, then the domains could potentially shrink to much smaller than a FLASH memory bit, and that is more credible than self-assembling the relatively complex internals of a FLASH cell.
I suspect that Max is trying to develop memories with a new kind of bit--bacon bits, which would allow him to snack on unused memory. :) "Uh yeah, that was a 1Gig memory but actually only 732Meg is available. A few of the nibbles are dedicated."
Max might say "That's a bit too close to home." In his presence bacon-based memory is extremely volatile to be sure! No analog meats (or meat analogs) for him. Digital meat could be a different story.
As I recall, to get this arrangement, there had to be three transistors in the cell, one for handling the read MTJ, one for the write head and finally one (I/O grade) to push the track along. So it's a huge cell size.
I remember reading that bubble memory had fast seek times (no rotational delay/head seek time), but its data transfer rate was comparable to the hard drives of the time.
I still get a laugh when a friend was confidently predicting that bubble memory was going to replace main memory back in the 80's.
I got a bigger laugh when a buzzword spouting VP of Technology toured our programming offices in the early 90's and was making the same claim, but with Flash memory. I guess he thought "flash" meant fast.
While NVM has found its way into the secondary storage hierarchy, I was always uncomfortable when folks would say that traditionally architected NVM memory would displace rotational media. Packing bits back to back vs. requiring transistors for storage and decoding will always have the potential for higher density. As such, I used to flippantly assert "rotational media always wins". For a while, I was worried as NAND densities increased, and then I saw the Racetrack Memory announcement. Whew! A new type of rotational media. What else would you expect from the inventors of the HDD.