Now that Micron is enjoying a higher stock price (quadrupled this year) after tilting the DRAM supply-demand balance in its favor by acquiring Elpida at a bargain-basement price, it's time for the company to at last diversify into higher-margin products like processors.
I would be interested to learn about the process-integration issues of mixing a CPU with SRAM and DRAM on the same die; alternatively, they might be trying out their HMC technology for tight coupling of a CPU and SRAM/DRAM on separate dice.
Interesting thoughts @chipmonk... I wonder whether Micron's move is part of a major transformation in the industry. Certain types of data processing seem to lend themselves to graphical chips rather than traditional processors, and these new processors will start eating away at revenue from Intel et al. How significant is that threat? Kris
As Chipmonk noted, Micron has long tried to get a foot into the logic market, where ASPs and margins are much bigger.
This is its latest effort, but frankly I have seen many massively multicore architectures that were all so hard to program they never went anywhere. I have yet to see one succeed, and as far as I can tell this is still very much an academic and lab experiment.
Software will be a key enabler, you are correct. Lots of work has been done on creating algorithms that take advantage of massive numbers of processors, and now, with multiple serious entrants in this segment, expect significant software advances too. It could be that the next wave of programmable logic coming at us is actually a massively parallel tsunami...
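To make the point above concrete, here is a minimal sketch (my own illustration, not anything from Micron or the commenters) of the data-parallel programming style these algorithms favor: the same independent kernel applied across a dataset, so adding processors adds throughput.

```python
from concurrent.futures import ThreadPoolExecutor

def score(x):
    # Stand-in for a per-element kernel, e.g. a feature computation.
    # Because each call is independent, the work is "embarrassingly
    # parallel" and scales with the number of workers.
    return x * x + 1

with ThreadPoolExecutor(max_workers=4) as ex:
    # map() fans the kernel out across workers and preserves input order.
    results = list(ex.map(score, range(8)))
```

The hard part, as the thread notes, is that most real applications are not this cleanly decomposable, which is why so many massively parallel architectures have struggled.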
People have been working on parallel computing for a while... with not that much to show for it, despite the use of multiple cores in many processing products. Any indication why the tsunami you are referring to will happen? Kris
@chipmonk0 - Micron bought the remaining 38.6 percent of Ovonyx phase-change memory tech from bankrupt ECD last August for just $12 million bit [dot] ly/1aNXcrl, and they signed a technology agreement with Intel "regarding certain emerging memory technologies" some time before that. So it is likely that this "non-von Neumann" design is ECD's "Cognitive Computer". Just google "Ovshinsky Cognitive Computer" for the details. IBM has been working on the tech for some time, so I'm curious to see what value was realized in that transaction. The technology is brilliant. If attainable, it can recognize a picture of Joe from logistics, a golden retriever, or a Laffer curve - or video, for that matter. Limitless. Johnny Five sort of stuff.
Don't forget that these systems require massive amounts of high-speed memory to keep the computing engine busy. This segment is a win-win for Micron: they can sell the processors AND the high-speed memory chips (and if the processor uses a significant amount of on-chip memory too, maybe it's a win-win-win)!
As was mentioned, this idea is old. Logic-in-memory architectures, e.g. PEPE, were around in the 1970s, as was the idea of putting processors on DRAM chips. The practical issues include the few metal layers, DRAM transistors optimized for yield rather than speed, and the resulting SIMD-style architectures being difficult to program for most applications. I would vote for TSV 3-D packaging as an easier way to go: take one of Micron's forthcoming DRAM stacks and stick it on a processor-array chip.
Part of the reason they are going into this is memristors. They are going to announce a memristor chip, and memristors are supposed to fit very well with mixing memory and logic. So Micron wants an early start.
At the SC13 event today, Intel said it will use in-package memory for Knights Landing, the next version of Xeon Phi. It is not saying which technique or how much, but it will support multiple programming models.
I always wanted a supercomputer just so I wouldn't have to wait so long to get things done. The problem with today's computers is that they spend too much bandwidth on things the user doesn't care about; it is a software problem for me. I know there are many things that need a supercomputer, but they should also concentrate on the task and not on all the background things computers do that are not useful.
This could be a hugely important product for Micron. If they can execute smartly, every cellphone and tablet will have at least one Automata Processor for image and speech recognition. Eventually every robot will have dozens.
From our related work we learned that building logic on DRAM processes is really, really cheap. The AP should cost around $2.
A number of Web commenters have mentioned Venray's CPUs on DRAM. We are doing Big Data analytics for really large datasets. Despite some headlines, this is not what Micron is doing.
The Automata Processor is really first-rate innovation. We applaud their efforts.
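For readers wondering what an automata engine actually computes, here is a toy software sketch (my own illustration, not Micron's design or API) of nondeterministic finite-automaton matching. The engine tracks the full set of active states as each input symbol arrives; an automata processor performs that state-update step for all active states in parallel in hardware.

```python
def nfa_match(transitions, start, accepting, text):
    """Simulate an NFA: transitions maps state -> {symbol -> set of next states}."""
    active = {start}
    for ch in text:
        # Advance every active state on the current symbol at once --
        # the step an automata engine parallelizes in silicon.
        active = {nxt for s in active for nxt in transitions.get(s, {}).get(ch, ())}
        if not active:
            return False  # no state can consume this symbol
    return bool(active & accepting)

# A hypothetical pattern, roughly the regex "a(b|c)*d":
nfa = {
    0: {"a": {1}},
    1: {"b": {1}, "c": {1}, "d": {2}},
}
```

For example, `nfa_match(nfa, 0, {2}, "abcd")` accepts, while `"abc"` does not, since no accepting state is active at the end of the input.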