This is excellent and a great bang for the buck IMHO, whether you are a believer in this kind of multicore approach or not. At the very least you can see the board as a Zynq-7000 development board as well; the cheapest one of those I could find was around 300 bucks (albeit a stronger sibling of this FPGA, SoC, whatever...). As a (big) bonus you have this nice parallel core (the Epiphany) that you can play with, and who knows what kind of applications can be devised that make very good use of it. The sky (imagination) is the limit! :-)
The first WANT-NOW app for this beast should of course be an FPGA simulation, synthesis, and routing tool!
(Is anyone working on doing that with CUDA yet?)
Whoever gets there first, let me know and I'll throw my money at you! :)
Max, thank you for the really kind article! Just want to clarify that I really only designed the first chip myself. The last three chips were a complete team effort, with Roman Trogan in charge of chip design and Oleg Raikhman in charge of verification and programming tools integration. I supported them from time to time, but spent most of my time failing at fundraising, selling, and marketing.
Agreed, bandwidth CAN be a killer, but there are plenty of applications that require a massive amount of processing per byte (see the back-of-the-envelope sketch after the list below). Here are some of the applications we think the Parallella would be great at:
fingerprint matching
content-based image retrieval
optical character recognition
automated optical inspection
number plate recognition
synthetic aperture radar
smart stream compression
large focal array sensor imaging
Complete list here:
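To put a rough number on "processing per byte", here is a minimal roofline-style sketch. It is only an illustration: the peak-compute and bandwidth figures are made-up placeholders, not Epiphany or Parallella datasheet values. The point is that a kernel performing many operations per byte fetched (like most of the tasks listed above) stays compute-bound, while a streaming kernel is capped by the link.

    /* Roofline-style sanity check: compute-bound vs. bandwidth-bound.
       All numbers here are illustrative assumptions, not measured values. */
    #include <stdio.h>

    static double attainable_gflops(double peak_gflops, double gbytes_per_s,
                                    double flops_per_byte)
    {
        double bandwidth_bound = gbytes_per_s * flops_per_byte;
        return bandwidth_bound < peak_gflops ? bandwidth_bound : peak_gflops;
    }

    int main(void)
    {
        double peak = 25.0; /* assumed aggregate peak, GFLOPS */
        double bw   = 1.4;  /* assumed off-board bandwidth, GB/s */

        /* A matching/recognition kernel that reuses each fetched byte many
           times might reach ~50 flops/byte; a plain streaming pass is < 1. */
        printf("high-reuse kernel: %.2f GFLOPS attainable\n",
               attainable_gflops(peak, bw, 50.0));
        printf("streaming kernel:  %.2f GFLOPS attainable\n",
               attainable_gflops(peak, bw, 0.25));
        return 0;
    }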
Someone just emailed me to say: "If you consider 16 or 64 cores a SuperComputer then what is this one with 144 that is shipping now? http://www.greenarraychips.com
There is more to this than just core count, like interconnections. Can we make a 4D hypercube like we can with the XMOS (descendants of Inmos Transputers)? http://www.xmos.com/resources/xkits?category=XK-XMP-64+Development+Board"
I replied "I think the main point here is that a lot of today's really compute-intensive tasks require floating point capability -- to the best of my knowledge, products like Green Arrays and XMOS don't support floating-point."
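On the hypercube question in that email: in an n-dimensional hypercube, each of the 2^n nodes links to the n nodes whose IDs differ in exactly one bit, so the wiring of a 4-D cube over 16 cores can be enumerated in a few lines. This sketch is purely illustrative and is not taken from the XMOS or Adapteva tools.

    /* Enumerate the links of a 4-D hypercube: 16 nodes, 4 neighbors each.
       A node's neighbors are the node IDs that differ from it in one bit. */
    #include <stdio.h>

    int main(void)
    {
        const int dims  = 4;
        const int nodes = 1 << dims;             /* 2^4 = 16 nodes */

        for (int node = 0; node < nodes; node++) {
            printf("node %2d links to:", node);
            for (int d = 0; d < dims; d++)
                printf(" %2d", node ^ (1 << d)); /* flip bit d */
            printf("\n");
        }
        return 0;
    }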
It's very interesting, but I'm skeptical of the usefulness. The thing is that the cores are going to be starved for data. Maybe you can pick a few specific applications where this may not be the case, but in general you don't just process the same data over and over. If you look at the architecture, you have coherency problems and bandwidth problems. If you were to analyze many applications, many of the cores would just be idle waiting for data input or output. Also, the program(s) running on the cores need to be relatively small. I mean, all cores can see what the others are doing, but how do you manage that? Hence the really expensive supercomputers that result...
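A common answer to the "cores idle waiting for data" concern is to overlap transfers with compute via double buffering: while a core crunches one tile, the next tile is already being fetched. The skeleton below is a generic sketch; fetch_tile(), wait_tile(), and process_tile() are hypothetical placeholders, not any particular SDK's API.

    /* Double-buffering skeleton: overlap fetching the next tile with
       processing the current one. The three helpers are stand-ins for
       whatever DMA/compute routines a real port would use. */
    #include <stddef.h>

    #define TILE_BYTES 4096

    static void fetch_tile(unsigned char *dst, size_t index) { (void)dst; (void)index; }
    static void wait_tile(unsigned char *dst)                { (void)dst; }
    static void process_tile(const unsigned char *src)       { (void)src; }

    static void run(size_t num_tiles)
    {
        static unsigned char buf[2][TILE_BYTES];

        fetch_tile(buf[0], 0);                    /* prime the pipeline        */
        for (size_t i = 0; i < num_tiles; i++) {
            unsigned char *cur  = buf[i % 2];
            unsigned char *next = buf[(i + 1) % 2];

            if (i + 1 < num_tiles)
                fetch_tile(next, i + 1);          /* start the next copy       */
            wait_tile(cur);                       /* ensure current tile is in */
            process_tile(cur);                    /* compute on current tile   */
        }
    }

    int main(void) { run(8); return 0; }

Whether this actually hides the data movement depends on the transfer time per tile being shorter than the compute time per tile, which is the same processing-per-byte trade-off discussed above.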