Distributed computing implies several autonomous computational entities communicating through message passing (per Wikipedia). If the memristor structures are dense enough, some tasks could well be completed standalone, with precoded adaptability for specific functions (like the rendering example mentioned above), with minimal message passing and reduced interconnect performance degradation. I see this as a capacity-wise enhancement to flash, making possible massively distributed processors with (again, some precoded) localized decision making and adaptability.
Well, if resistance is going to be the new indicator of logic state, I have to wonder how long it will take to measure. Maybe longer for the high-resistance state, due to the time constant formed with the stray capacitance? It implies that a known current is passed through the element and the voltage is measured, or a known voltage is placed across the element and the current is measured. How fast can this be done?
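To put some rough numbers on that worry: sense time scales with the RC time constant of the read path, so the high-resistance state does settle more slowly. A minimal sketch, with entirely hypothetical resistance and stray-capacitance values (not HP's actual device parameters):

```python
# Illustrative only: read-out settling time for a resistive memory cell,
# assuming the sense node must charge through the cell's resistance.
# All component values below are assumptions for the sake of the estimate.

def settle_time(resistance_ohms, stray_capacitance_farads, n_taus=5):
    """Approximate settling time of the sense voltage.

    After n_taus time constants the exponential has settled to within
    ~1% (5 * tau gives e^-5, about 0.7% residual error).
    """
    tau = resistance_ohms * stray_capacitance_farads
    return n_taus * tau

C_STRAY = 1e-12  # 1 pF of stray capacitance on the sense line (assumed)

t_low = settle_time(1e3, C_STRAY)   # low-resistance "on" state: 1 kOhm
t_high = settle_time(1e6, C_STRAY)  # high-resistance "off" state: 1 MOhm

print(t_low)   # on the order of 5 ns
print(t_high)  # on the order of 5 us -- three orders of magnitude slower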
This is the beginning of the end for Intel's domination of the computing world. Algorithms that are hard for the current paradigm will become natural for this new paradigm. Image recognition, for example, could drive a big leap in user interfaces (UI). Just imagine an email program that can understand your emotions upon reading certain emails and direct them to the junk folder automatically...
Your sentiments ring oh-so-true. 20 years ago when I was in engineering school at University of Michigan (Go Wolverines!) one of my Master's projects was to automatically partition algorithms for execution on multiple microprocessors (in those days each CPU was on a separate chip--no multi-cores yet). We thought the problem would take just a few years to solve, but today there is still no way to automatically partition algorithms for parallel execution. And that problem is easy compared to the new paradigm that HP is proposing!
Unfortunately, headlines can never tell the whole story. This one asks the question of whether configurable memristors could replace the CPU, but the answer, of course, is more complex than just yes or no. If HP is correct, then its memristive device architecture will be able to replace some CPU functions--such as rendering--by massaging the data in-place, rather than having to shuffle all the data through a CPU and back to memory.
I think there is a problem with what this article is suggesting.
You see, the main performance bottleneck in a processor is NOT in the transistors themselves, but rather in the interconnect (data-paths, wires, or the metal layers). It's the charging and discharging of these data-paths that determines the rise/fall times of signals.
A processor designed with memristors, so as to enable dynamic reconfiguration, might actually degrade overall performance, as it will not be able to have specialized interconnect within latency-critical units. And if it were to employ specialized interconnect, that would defeat the purpose of employing memristors.
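The interconnect point above can be made concrete with the standard Elmore model: the delay of an unbuffered wire grows roughly quadratically with its length, which is why hand-specialized, short interconnect matters so much in latency-critical units. A sketch with hypothetical per-segment values:

```python
# Sketch: Elmore delay of a wire modeled as a uniform RC ladder of N segments.
# Per-segment r and c values below are assumptions, chosen only to show the trend.

def elmore_delay(n_segments, r_per_seg, c_per_seg):
    """Elmore delay of a uniform RC ladder.

    Each node's capacitance sees the total upstream resistance, so the
    delay is sum over nodes i of (i * r) * c -- quadratic in wire length.
    """
    return sum((i + 1) * r_per_seg * c_per_seg for i in range(n_segments))

R_SEG = 10.0   # ohms per segment (hypothetical)
C_SEG = 1e-15  # farads per segment (hypothetical)

short_wire = elmore_delay(10, R_SEG, C_SEG)
long_wire = elmore_delay(100, R_SEG, C_SEG)

# A 10x longer wire is ~92x slower, not 10x -- interconnect, not the
# switching devices, sets the pace:
print(long_wire / short_wire)
```

This is why a generic, reconfigurable memristor fabric routed through long programmable wires could lose to fixed, carefully floorplanned interconnect even if the switching elements themselves are fast.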
HP is probably regretting calling this a memristor, as it is not clear that it is really the fourth fundamental element. It appears to be a switch between two nanowires that is closed by some voltage relationship and opened by another. Cool, yes; Game-changing, maybe; Memristor, probably not.
The comment above beat me to it: reconfigurable computing, FPGAs, etc. have been pitching a fundamentally new model of computation for decades. FPGA computing has shown order-of-magnitude performance improvements, but that is not enough to change things. People need simple computing abstractions. Until that emerges, the technology is irrelevant.
This is not a big invention; the concept is familiar from the programmable logic device (PLD) field. Also, the programming model is the biggest question before it can be accepted widely. BTW, Tabula's Spacetime technology is very similar to this memristor approach.
Am I missing something here, or is this the most earth shattering development in electronics since the invention of the transistor in 1947? I would think anything that forces a rewrite of first-year engineering textbooks would be worthy of a bit more attention than a casual article in a trade publication. An anti-resistor? Really? Are we now going from R-L-C circuits to R-L-C-M circuits? That's a pretty basic change to classic AC circuit theory!
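For readers wondering where "M" would slot into those textbooks: the symmetry behind the fourth-element claim (as Chua originally framed it) is that the four circuit variables -- voltage v, current i, charge q, and flux linkage φ -- admit six pairwise relations, of which the classic three elements cover only some. A sketch of the textbook version:

```latex
% Pairwise constitutive relations among the four circuit variables:
\begin{aligned}
  \text{resistor:}  \quad & dv      = R\,di \\
  \text{capacitor:} \quad & dq      = C\,dv \\
  \text{inductor:}  \quad & d\varphi = L\,di \\
  \text{memristor:} \quad & d\varphi = M\,dq
\end{aligned}
```

Since dφ = v dt and dq = i dt, the last relation reduces to v = M(q) i, so the memristor behaves as a resistance whose value depends on the charge that has flowed through it -- which is also why critics (like the commenter above) argue HP's two-terminal switch may be better described as a resistive switch than as Chua's ideal fourth element.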