The tool usually used for parallel programming these days is MPI, and before that, PVM. As Rick points out, these are cumbersome to use. Cray developed a much simpler system for use in Fortran, called coarrays, starting in about 1992. It has been part of the Fortran standard since 2008, and its simplicity served as the model for UPC, X10 and Chapel, which aren't part of any standard. The coarray features of Fortran are described unofficially in ftp://ftp.nag.co.uk/sc22wg5/N1801-N1850/N1824.pdf. The draft of the Fortran 2008 standard that was balloted by ISO is at http://j3-fortran.org/doc/standing/links/007.pdf.
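To give a sense of that simplicity, here is a minimal coarray sketch (my own illustration, not taken from the documents above): each image (parallel process) holds its own copy of x, and image 1 reads every copy with the square-bracket remote-access syntax -- no message-passing calls at all.

```fortran
program coarray_sum
  implicit none
  integer :: x[*]          ! a coarray: one copy of x on every image
  integer :: i, total

  x = this_image()         ! each image stores its own number (1..n)
  sync all                 ! barrier: make all the writes visible

  if (this_image() == 1) then
    total = 0
    do i = 1, num_images()
      total = total + x[i] ! square brackets = read from image i
    end do
    print *, 'sum over', num_images(), 'images =', total
  end if
end program coarray_sum
```

Compare that with the MPI equivalent, which would need explicit init/finalize calls and a send/receive or reduction call; here the compiler generates the communication from the bracketed references.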
In its own small way, startup Adapteva is joining giants such as Intel, Microsoft, Nvidia and AMD, which have been funding university work on parallel programming for many years.
Multicore processors need new parallel programming tools if they are to be used effectively. Despite years of research into massively parallel supercomputers, simple tools never emerged. One more shoulder, however small, pushing in this direction is always welcome.
I checked out the link, and I will be interested in seeing how his work progresses. I took a class at UCSD Extension a few years ago from Bart Kosko, one of the bright young guys in the connectionist school of thought in neural networks. I didn't get very far at the time, and eventually I set it aside, but I have come back to it from time to time. This looks like a good enough platform for that work that I am tempted to drag out my old textbook and try some things out. I'll let you know if I come up with anything.
@LarryM99: If you do use the Parallella for these applications, I'd love to hear how you get on -- especially with the neural network one. Are you familiar with the work Robin Findley is doing with neural networks and FPGAs?
I am keeping this in mind for two different applications. It should be a natural platform for neural network experimentation. I played around with that a while back, and I might try it again on this platform. The small memories of the individual computing elements and the communication capability should work very well for that.
The other thing I might try is Software Defined Radio. Signal processing in the digital domain is a large part of what that is all about, and this platform could potentially do some very interesting things in terms of frequency adaptation and signal extraction.
Both of these applications have people working on them in the forums on the site. It's comforting to have some company when you go wandering in the uncharted wilderness...
Replay available now: A handful of emerging network technologies are competing to be the preferred wide-area connection for the Internet of Things. All claim lower costs and power use than cellular, but none have wide deployment yet. Listen in as proponents of the leading contenders make their case to be the metro or national IoT network of the future. Rick Merritt, EE Times Silicon Valley Bureau Chief, moderates this discussion. Join in and ask his guests questions.