"MT4C1024 one-mebibit dynamic random access memory (DRAM)"
And to explain the title: a coworker and I had a 20-minute argument with a hardware manager about the execution of our program. We assured him the program would execute within the 77 millisecond window between interrupts, and he argued that our program could not be that fast.
At the end of it he asked, "What are them thar milli-seconds?"
Cool! I didn't know that. I've just come to hate the marketers who thought to inflate spec numbers by using decimal rather than binary for their drive size measurements. You don't see them doing that for RAM, just hard drives, from what I've seen. I'm always having to explain to friends and relatives why their hard drive did not format to the size they expected.
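The mismatch is pure arithmetic, and a quick sketch shows why a "500 GB" drive appears smaller once formatted (the 500 GB figure here is just an illustrative example):

```python
# Marketers count in decimal units (1 GB = 10**9 bytes), while many
# operating systems historically reported binary units (1 GiB = 2**30 bytes).
marketed_bytes = 500 * 10**9            # what the box says: 500 GB
reported_gib = marketed_bytes / 2**30   # what the OS shows

print(f"{reported_gib:.1f} GiB")        # about 465.7 - the "missing" ~34 GB
```

The drive holds exactly what was promised in decimal bytes; the apparent shortfall is just the two unit systems disagreeing by about 7% at the gigabyte scale.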
A key parameter that isn't discussed is the likely life expectancy of the stored data. Clearly #1 (stone) wins hands down: it is still accessible thousands of years later without special instrumentation. The next step (which wasn't shown) is paper, which can also be read thousands of years later without special instrumentation. After that the story is much less encouraging.

By my best guess, 5 of the memory technologies fail at archival storage because they lose their contents without power (#3, 4, 5, 12, 14). Five more are no longer accessible because the technologies have been retired (#2, 6, 7, 8, 13). That leaves 4 to consider: the hard disk drive (#9), the CD (#10), flash (#11), and the USB thumb drive (#15). Today they seem safe - but it wasn't long ago that IBM cards, magnetic tapes, 8" floppies, 5.25" floppies, and 3.5" floppies were considered universally accessible. Most modern home computers can access none of them without special adapters, and powering and interfacing to old disk drives can be a challenge. From our present vantage point CDs seem universal, but there are reports that the data may have a life expectancy of less than 10 years, and the availability of readers seems to be waning in newer devices. USB thumb drives are certainly the current fad - but I have no doubt that the next technical wave will sweep them away as well.

For the time being, our two choices for archival storage are paper (I'll concede the low data density of stone makes it impractical) and aggressive backup on electronic media that constantly rolls the data (and the reading tools) forward to current storage technologies and equipment - a high-overhead process. Our ancestors did much better with archival storage than we're managing.
I hadn't thought about that aspect. It's very interesting how that works. Maybe it's not just memory - things are getting easier and easier to manufacture, but in some cases aren't built for nearly as long a life.
My sense is that, other than cars, almost nothing is improving in longevity and maintainability. Cars certainly last much longer than they used to and require much less maintenance. Almost all repairs, however, require a skilled mechanic: even replacing a headlight on my wife's VW requires removing the windshield washer, and annoying little repairs to the electrical components can be exceptionally expensive. Electronic devices like televisions have longer life expectancies than their tubed ancestors - but repairs are typically cost prohibitive.
"A key parameter that isn't discussed is the likely life expectancy of the stored data. Clearly #1 (stone) wins hand down."
In addition to the life expectancy of the raw data, knowledge of the format is also important. This requires some continuity of knowledge. Even older forms of storage encounter a similar issue; writings in dead languages can be very difficult to understand.
(As you pointed out, even data that is physically readable may be uneconomical to read because of the cost of the reading device.)
One more item to add to your list: magnetic core memories. Back in the late '60s, computers had magnetic core memories, the cores strung amid an array of x-y wires that controlled their activation and reading response. Similar designs were used in military aircraft at the time; all done on 1K and 2K of memory.
If you are going back to writing on stone (or clay tablets), you cannot ignore Delay line memory, including Mercury delay lines.
Delay lines were developed to store radar blips so that screens displayed only new, moving blips. In computers, delay lines converted data bits (ones and zeros) into sound waves, transmitted them acoustically through a medium, then converted them back into bits at the far end. The bits recirculated indefinitely until changed by the computer.
As well as complicating the architecture and programming of a computer using Mercury Acoustic Delay Lines, the mercury-filled tubes were non-standard components that had to be handled very carefully; they were bulky and had to be kept under close temperature control. Some feel for the mechanics of using them can be gained from looking at the description of the building of EDVAC, about 40% down the page (a chapter from a monograph on the history of Electronic Computers within the Ordnance Corps, written in 1961). EDVAC was the official U.S. successor to the ENIAC, with the design finalised in 1947; but it didn't start to work usably till late 1951 -- "by early 1952 it was averaging 15-20 hours of useful time per week".
Delay line memory was far less expensive and far more reliable per bit than flip-flops made from tubes, and yet far faster than a latching relay. It was used right into the late 1960s, notably on British commercial machines like the LEO I, Highgate Wood Telephone Exchange, and various Ferranti machines.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.