Thanks for writing about us again! It's been a looong journey since 2010. Patience has never been a strength of mine either, not sure if it should be?:-) Although this milestone was important, the next one is the one that really matters: shipping 6,300 final product boards. As you know when it comes to HW, talk is cheap. What matters is shipping a great product that works reliably. We are getting close and are working as hard as we possibly can to make it happen as soon as possible.
The bitcoin algorithm doesn't map well onto the Epiphany architecture, and there are already a few really fast bitcoin mining ASICs out there. However, Parallella should do well on the next coin mining craze, "litecoin". @solardiz at Openwall (the good guys behind the John the Ripper password cracker) is looking into it.
Congratulations Andreas and team!...A super computer running flat out and consuming just 5 watts!! That is amazing!! Will it be available for sale to individuals outside the US (I am from India) at that attractive price? :) I would be very much interested then.
Being a low power, low cost super computer (but little in size :)), I would personally try to explore applications starting with home automation: probably several homes in the neighborhood (why not start a service? :)). Probably allowing the users to monitor and control stuff at their home using their smart phones? That's my little thought as of now...
But would it be underestimating its capabilities? Maybe it can do more...like running complex algorithms where parallel computing is an advantage....e.g. complex flow algorithms in the O&G industry?
Andreas- congratulations on this milestone and here's to continued success. Looking forward to learning how long it takes to ship the 6,300. From the enthusiasm this has generated, I'd venture to guess not that long. Please keep us posted.
Dylan- Thanks! Shipping 6,300 boards is the one that really matters to us. The expectations of 5,000 KS backers have been a big weight to carry for 9 months now. Fortunately for us they have been incredibly patient and understanding. We'll definitely keep you posted. If you don't hear from us, something is wrong:-)
I'd like to see stacks of these implement a distributed chess playing algorithm to create the best chess playing computer in the world. It would crush every existing chess computer out there. After we do that we would move on to predicting the weather...
Chess algorithms are usually tuned to the targeted hardware and use various 'tricks' (like using 64-bit board representations that can be managed and operated on easily by 64-bit processors). Having ranks of processors could allow new algorithms to emerge that might identify ways to use very massive processor banks for a variety of algorithms: physical systems that are currently difficult to model (EM fields, turbulent fluid flow, large molecule interactions, encryption/decryption, etc.)
One practical advantage is a better understanding of how to break algorithms into pieces that can be efficiently executed on a large number of processors. Chess computers use algorithms that are common to other difficult problems (economics, scheduling, FPGA routing, etc), so this could help identify some new algorithmic approaches to solving other problems.
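The 64-bit board-representation 'trick' mentioned above is usually called a bitboard: the whole board lives in one 64-bit integer, so set-wise moves become single integer operations. A minimal sketch in Python (the square numbering and helper names here are my own illustration, not from any particular engine):

```python
# Minimal bitboard sketch: one bit per square (a1 = bit 0 ... h8 = bit 63).
# Operating on all 64 squares at once is why chess engines like 64-bit CPUs.

MASK64 = (1 << 64) - 1
FILE_A = 0x0101010101010101      # all eight squares on file a
FILE_H = FILE_A << 7             # all eight squares on file h

def north(bb):
    # Shift every piece one rank up in a single operation.
    return (bb << 8) & MASK64

def east(bb):
    # Shift one file right; mask off pieces that would wrap from h to a.
    return (bb << 1) & ~FILE_A & MASK64

def popcount(bb):
    # Number of occupied squares.
    return bin(bb).count("1")

e4 = 1 << (3 * 8 + 4)            # rank index 3, file index 4
print(popcount(north(e4) | east(e4)))   # e5 and f4: prints 2
```

Real engines extend this with precomputed attack tables, but the core idea is exactly this: one integer op per whole-board transformation.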
1. Cheapest Zynq platform available -- Dual-core ARM Cortex-A9 plus FPGA fabric. Even if you haven't had your Epiphany yet, Parallella lets you play with Zynq for US$99 instead of a US$395 ZedBoard.
2. Open-source hardware GPU. The open source software community has been locked out of the massive parallelism allowed by GPUs because most of their architectures are closed and you can't write your own code for them. This is one of the most frequent complaints about Raspberry Pi. With Parallella you'll be able to run GPU functions on Epiphany and use the FPGA to display the results (at least I think the logic paths are there to do this).
3. Parallel programming research has been held back for decades because there are so few parallel computers available to play with. Parallella is a game-changer here, and will allow parallel languages and compilation techniques to thrive. Look what happened when micro-computers let experimenters have their own computers for hundreds instead of tens of thousands of dollars.
While it's certainly interesting to speculate on the potential applications, I'm reminded of similar questions about the neverending increase in disk storage capacity. It seems like a case of "build it and they [applications] will come."
I suppose you're right, Rich. And at this price, what's not to like? I wonder if there are plans to consumerize this... Seems like it would be a catchy sales pitch: Why settle for an ordinary computer when you could have a supercomputer for less?
The Parallella actually has a ~10Gb/s memory mapped low-latency link (through the "PEC" connector) that can be used to construct some interesting large scale topologies. See the specs here: http://www.parallella.org/board
Congratulations, Andreas, on shipping your product and nearing your large milestones. I am curious about your eLink interface between the Zynq and Epiphany. Since you designed the Epiphany from scratch, is it able to make use of the high-speed transceivers offered by the Zynq? We commonly interface processors to FPGAs in my business, and it is always a challenge to find processors that have high-speed buses that we can use to communicate with our FPGAs.
mlloyd - A few years back we decided to stay away from high speed SERDES and use a source synchronous LVDS interface instead. This way we could attach to low cost as well as high end FPGAs. The interface does use a fair amount of pins (8 data lanes) but can provide up to 16Gb/s total bandwidth with a 500MHz clock. This turned out to be a good choice because the low cost Zynq 7010 and 7020 don't currently support high speed SERDES.
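A back-of-the-envelope check of the figures quoted above. The exact accounting isn't stated in the comment, so this assumes DDR signaling (data on both clock edges) and counts both directions toward the "total":

```python
# Hypothetical breakdown of the eLink bandwidth figure:
# 8 LVDS data lanes, DDR at a 500 MHz clock, transmit + receive counted.

lanes = 8
clock_hz = 500e6
ddr = 2            # data captured on both clock edges
directions = 2     # separate transmit and receive links

per_direction_gbps = lanes * clock_hz * ddr / 1e9
total_gbps = per_direction_gbps * directions
print(per_direction_gbps, total_gbps)   # prints 8.0 16.0
```

Under these assumptions the numbers line up: 8 Gb/s each way, 16 Gb/s total at 500 MHz.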
Source synchronous LVDS -- that would be great for a robust, versatile interface. 8 data lanes is not too many pins compared to the much lower bandwidth parallel interfaces we have used to communicate with processors currently available on the market. Thanks for the information! I hope to have the opportunity to use the Parallella.
I wonder how the $100 cost figure was obtained. In a real world set up, development time, maintenance, and support costs would all be added to get the market cost. Is that the case with the $100 figure?
As for performance, the figure given is peak performance, a real benchmark performance could/would be a tiny/small fraction of that. Have any benchmarks been conducted on this platform?
Finally, what is the killer app that would make it likely to be continuously developed for state-of-the-art fabrication nodes?
Typical microprocessors are built to allow high levels of branching in the instruction set, including "multi-processing-in-time". Parallel processing arrays are best used as data-processors in a data-flow arrangement - processing data as it arrives, in real time.
Peak performance issues come from 1): branching the processing code; 2): I/O bottlenecks. For a "data-flow" machine, the code does (or should) NOT branch: each processor performs the same calculations ad infinitum. (The results of any given processor might be ignored, ie: scaled to zero, but the calculations are constantly done). If the parallel-processing-machine is I/O limited, then the processors will starve, just like any other processor (ie: bad design).
In other words, the peak performance mentioned would NOT be a "tiny/small fraction of that", or the system either is 1): overkill; 2): not implemented well; or 3): not suited for a parallel process. For data-rate processing systems, a well engineered parallel array would be humming along, at full speed and at, or near the peak processing rate possible. [One would likely NOT run a Ferrari on a school bus route, or mail route (with lots of stops and turns); that Ferrari would be best suited to screaming along the Autobahn, pedal "to the metal".]
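The "ignored, ie: scaled to zero" idea above is the classic branch-free trick: compute every candidate result each cycle, and multiply the unwanted ones by a 0/1 predicate instead of branching. A toy sketch (my own illustration, not Epiphany code; on real array hardware the predicate would come from a compare instruction, not a Python conditional):

```python
# Branchy version: the control-flow decision stalls lockstep/array hardware.
def branchy(x, threshold):
    if x > threshold:
        return x
    return 0.0

# Branch-free version: always do the work, scale the result by 0 or 1.
def branch_free(x, threshold):
    keep = 1.0 if x > threshold else 0.0   # predicate as a scale factor
    return x * keep                         # calculation is constantly done

samples = [0.2, 1.5, 0.3, 2.0]
print([branch_free(s, 1.0) for s in samples])   # prints [0.0, 1.5, 0.0, 2.0]
```

Both versions give the same answers; the point is that the second keeps every processor executing the identical instruction stream, which is what keeps a data-flow array near its peak rate.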
"Killer app?: Consider vision/speech/radar/sonar/neural ... etc ... systems where large amounts of data is/are contantly arriving, being processed, and sent along the processing pathways.
=== OR ===
How about a real-time gaming system tied to a live (American) football broadcast/Madden game (coach's view), linked to your Kinect, where you get the QB's (or running back's) view and have to "make the play"? And at NFL or college level, real-time speeds. [Use the Kinect for player movement, and even judging the throw itself]. Or, if not for the masses, how about making that system for the players? Baseball from the batter's view might work well, too! There is lots of tape of great pitchers - Sandy Koufax --- Randy Johnson, etc ... could you hit their stuff?
Great product! Just need to "find" the money to buy one, if not the next 6000 (or so!).
"if only things were that simple" ... what things?
A data-flow system is that simple.
If you want to run large programs that are not data-flow, then you have other issues, and an array of coupled processors is not going to be an appropriate solution for that problem set. For the "typical" complex non-data-flow program scheme, one would need to do the typical threaded software, which is NOT simple. On that I think we all agree. Code that is highly branched, and/or that requires intra-process communication, has synchronization issues, and that leads to processor stalls ... unless one manages to scale the various threads and execution paths to all cycle together. Not a task for the faint-hearted. Code that does a lot of task-switching is not good for arrayed processors, either.
But physics and engineering matrix decomposition (and etc) type programs are good arrayed processor problems. Not every tool is appropriate to every problem. Arrayed processor sets are good at "high" data-rate, repetitive processing.
For task-switching code, a processor would be better suited to have larger register files, and caches. For huge, unwieldy, non-threaded code to parallelize ... well, there is little hope for that, save maybe for massive re-writing? In the engineering world that would probably include both Verilog/VHDL themselves, as well as the code that they generate, for instance.
The fun of engineering, is to know which tool to use for what problems ... and/or making a new tool to do something that otherwise was not tractable. This board is decently fast, cheap, and capable for data-processing problems. So, for that application set, things are pretty simple, given this new solution.
For what it is worth, I designed a similar arrayed processor chip family, and studied (in some detail) which applications it provided a good solution space for, and where it was not going to be useful. The biggest hurdle is getting the software cycle synch'd across all of the processors, and that issue was addressed through a software simulator. This required an assembly code level of attention to detail, but once coded, each "program element" could be stitched in from higher levels of simulation, such as the Ptolemy program out of Berkeley. Of the various types of data-flow programs, FFTs presented the worst issues. It was NOT something anyone would ever want to use to run (say) Linux. Nor would it be worthwhile for small SPICE simulations. But, work through a large SPICE sim, or other large data-set data-flow program, and it was faster and lower power than microprocessors.
@jb0070, I asked about the killer app and you enumerated a list: radar, image, audio etc., which many other technologies are going for. To make it commercially viable long term, you need to sort out issues to do with programmer productivity, maintainability, and cost, and I can't see why this particular technology will succeed where others have flopped.