This could be a hugely important product for Micron. If they can execute smartly, every cellphone and tablet will have at least one Automata Processor for image and speech recognition. Eventually every robot will have dozens.
From our related work we learned that building logic on DRAM processes is really, really cheap. The AP should cost around $2.
A number of web commenters have mentioned Venray's CPUs on DRAM. We are doing Big Data analytics for really large datasets. Despite some headlines, this is not what Micron is doing.
The Automata Processor is really first rate innovation. We applaud their efforts.
I always wanted a supercomputer just so I wouldn't have to wait so long to get things done. The problem with today's computers is that they spend too much bandwidth on things the user doesn't care about. For me it is a software problem. I know there are many tasks that need a supercomputer, but designers should also concentrate on the user's task and not on all the background things computers do that are not useful.
At the SC13 event today, Intel said it will use in-package memory for Knights Landing, the next version of Xeon Phi. It is not saying which technique or how much, but it will support multiple programming models.
Part of the reason they are going into this is memristors. They are going to announce a memristor chip, and memristors are supposed to fit very well with mixing memory and logic. So Micron wants an early start.
As was mentioned, this idea is old. Logic-in-memory architectures, e.g. PEPE, were around in the 1970s, as was the idea of putting processors on DRAM chips. The practical issues include the few metal layers, DRAM transistors optimized for yield rather than speed, and the resulting SIMD-style architectures being difficult to program for most applications. I would vote for TSV 3-D packaging as an easier way to go: take one of Micron's forthcoming DRAM stacks and stick it on a processor array chip.
People have been working on parallel computing for a while...with not that much to show for it despite multiple cores in many processing products...any indication why the tsunami you are referring to will happen? Kris
Software will be a key enabler, you are correct. Lots of work has been done on creating algorithms that take advantage of massive numbers of processors, and now, with multiple serious entrants in this segment, expect significant software advances too. Could be that the next wave of programmable logic coming at us is actually a massively parallel tsunami...