Comments
Re: Peak performance and real cost
KB3001   9/6/2013 5:39:51 AM
@jb0070, I asked about the killer app and you enumerated a list (radar, image, audio, etc.) that many other technologies are also going after. To make it commercially viable in the long term, you need to sort out issues of programmer productivity, maintainability, and cost, and I can't see why this particular technology will succeed where others have flopped.

Re: Peak performance and real cost
jb0070   9/6/2013 5:29:09 AM
"if only things were that simple" ... what things?

A data-flow system is that simple.

If you want to run large programs that are not data-flow, then you have other issues, and an array of coupled processors is not going to be an appropriate solution for that problem set. For the "typical" complex non-data-flow program, one would need to write the usual threaded software, which is NOT simple. On that I think we all agree. Code that is highly branched, and/or that requires intra-process communication, has synchronization issues, and that leads to processor stalls ... unless one manages to balance the various threads and execution paths so they all cycle together. Not a task for the faint-hearted. Code that does a lot of task-switching is not a good fit for arrayed processors, either.

But physics and engineering matrix-decomposition (etc.) programs are good arrayed-processor problems. Not every tool is appropriate to every problem. Arrayed processor sets are good at high-data-rate, repetitive processing.
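
To make "high-data-rate, repetitive processing" concrete, here is a minimal, hypothetical sketch of the kind of kernel such arrays like: a fixed multiply-accumulate applied to every sample that streams through, with no data-dependent control flow (generic C, not tied to any vendor's SDK):

```c
/* Hypothetical sketch: an FIR-style streaming stage that one core could run
 * indefinitely -- the same arithmetic for every incoming sample. */
#include <stddef.h>

#define TAPS 16

void fir_stage(const float *in, float *out, size_t n, const float coeff[TAPS])
{
    for (size_t i = TAPS; i < n; i++) {
        float acc = 0.0f;
        for (size_t k = 0; k < TAPS; k++)   /* identical work every sample */
            acc += coeff[k] * in[i - k];
        out[i] = acc;
    }
}
```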

For task-switching code, a processor is better served by larger register files and caches. As for parallelizing huge, unwieldy, non-threaded code ... well, there is little hope for that, save maybe massive re-writing. In the engineering world that would probably include both the Verilog/VHDL tools themselves, as well as the code that they generate, for instance.

The fun of engineering is knowing which tool to use for which problem ... and/or making a new tool to do something that otherwise was not tractable. This board is decently fast, cheap, and capable for data-processing problems. So, for that application set, things are pretty simple, given this new solution.

For what it is worth, I designed a similar arrayed-processor chip family and studied (in some detail) which applications it provided a good solution space for, and where it was not going to be useful. The biggest hurdle is getting the software cycles synced across all of the processors, and that issue was addressed through a software simulator. This required assembly-code-level attention to detail, but once coded, each "program element" could be stitched in from higher levels of simulation, such as the Ptolemy program out of Berkeley. Of the various types of data-flow programs, FFTs presented the worst issues. It was NOT something anyone would ever want to use to run (say) Linux. Nor would it be worthwhile for small SPICE simulations. But work through a large SPICE sim, or some other large-data-set data-flow program, and it was faster and lower power than microprocessors.
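
At a much coarser grain than the cycle-accurate simulator described above, the "keep every processor in step" idea can be sketched with an ordinary barrier: no worker starts step N+1 until every worker has finished step N. A hypothetical POSIX-threads illustration (not how the actual chip did it):

```c
/* Hypothetical sketch: coarse-grained lock-step across workers via a barrier.
 * The arrayed-processor chip described above synchronized at the cycle level
 * through a software simulator; this only illustrates the general idea. */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define NSTEPS   8

static pthread_barrier_t step_barrier;

static void *worker(void *arg)
{
    long id = (long)arg;
    for (int step = 0; step < NSTEPS; step++) {
        /* ... this worker's share of the data-flow step would go here ... */
        printf("worker %ld finished step %d\n", id, step);
        pthread_barrier_wait(&step_barrier);   /* wait for the whole array */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NWORKERS];
    pthread_barrier_init(&step_barrier, NULL, NWORKERS);
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&step_barrier);
    return 0;
}
```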

 

Re: Peak performance and real cost
KB3001   9/6/2013 4:41:14 AM
@jb0070, if only things were that simple....
 


Re: Peak performance and real cost
jb0070   9/5/2013 6:54:58 PM
Typical microprocessors are built to allow high levels of branching in the instruction set, including "multi-processing in time". Parallel processing arrays are best used as data processors in a data-flow arrangement, processing data as it arrives, in real time.

Peak-performance problems come from (1) branching in the processing code and (2) I/O bottlenecks. For a "data-flow" machine, the code does not (or should not) branch: each processor performs the same calculations in perpetuity. (The results of any given processor might be ignored, i.e. scaled to zero, but the calculations are constantly done.) If the parallel-processing machine is I/O limited, then the processors will starve, just like any other processor (i.e. bad design).

In other words, the peak performance mentioned would NOT be a "tiny/small fraction of that"; if it is, the system is either (1) overkill, (2) not implemented well, or (3) not suited to a parallel process. For data-rate processing systems, a well-engineered parallel array would be humming along at, or near, the peak processing rate possible. [One would not run a Ferrari on a school-bus route or a mail route (lots of stops and turns); that Ferrari is best suited to screaming along the Autobahn, pedal to the metal.]
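
To make the "scaled to zero" point concrete, here is a hypothetical fragment: instead of branching on a condition, every processor does the same multiply every time and the condition is folded into a 0.0/1.0 mask (generic C, purely illustrative):

```c
/* Hypothetical sketch of the "results might be ignored, i.e. scaled to zero"
 * style: no data-dependent branch, the same arithmetic on every element.
 * Branchy equivalent: if (x[i] > threshold) y[i] = gain * x[i]; else y[i] = 0; */
#include <stddef.h>

void scale_above_threshold(const float *x, float *y, size_t n,
                           float gain, float threshold)
{
    for (size_t i = 0; i < n; i++) {
        float mask = (x[i] > threshold) ? 1.0f : 0.0f;  /* predicate as data */
        y[i] = gain * x[i] * mask;                      /* always computed   */
    }
}
```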

"Killer app?: Consider vision/speech/radar/sonar/neural ... etc ... systems where large amounts of data is/are contantly arriving, being processed, and sent along the processing pathways.

=== OR ===

How about a real-time gaming system tied to a live (American) football broadcast/Madden game (coach's view), linked to your Kinect, where you get the QB's (or running back's) view and have to "make the play"? And at NFL or college level, real-time speeds. [Use the Kinect for player movement, and even for judging the throw itself.] Or, if not for the masses, how about making that system for the players? Baseball from the batter's view might work well, too! There is lots of tape of great pitchers (Sandy Koufax, Randy Johnson, etc.) ... could you hit their stuff?

Great product! Just need to "find" the money to buy one, if not the next 6000 (or so!).

Re: Peak performance and real cost
KB3001   7/31/2013 2:18:08 PM
Thanks @adapteva. I will look at these in detail later, but have you calculated the performance per dollar and performance per watt for such benchmarks and compared them with competing platforms?
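
The comparison I have in mind is just this arithmetic; a minimal sketch with placeholder numbers (not measured or vendor figures):

```c
/* Hypothetical sketch of the perf-per-dollar / perf-per-watt comparison.
 * The numbers are placeholders, NOT measured or vendor figures. */
#include <stdio.h>

int main(void)
{
    double sustained_gflops = 10.0;  /* placeholder: measured benchmark rate */
    double price_dollars    = 99.0;  /* board price quoted in this thread    */
    double power_watts      = 5.0;   /* placeholder: measured board power    */

    printf("GFLOPS per dollar: %.3f\n", sustained_gflops / price_dollars);
    printf("GFLOPS per watt:   %.3f\n", sustained_gflops / power_watts);
    return 0;
}
```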

Re: Peak performance and real cost
adapteva   7/31/2013 12:08:09 PM
@KB3001 The entry price per board is $99. We don't disclose our actual costs.

Here are a couple of the benchmarks that we have run:

http://www.adapteva.com/white-papers/benchmarking-the-raspberry-pi-vs-the-parallella/

http://www.adapteva.com/white-papers/more-evidence-that-the-epiphany-multicore-processor-is-a-proper-cpu/

In my opinion the "killer app" for the Parallella platform is ..."computing".

Peak performance and real cost
KB3001   7/31/2013 11:52:41 AM
Hi Guys.

I wonder how the $100 cost figure was obtained? In a real-world setup, development time, maintenance, and support costs would all be added to arrive at the market cost. Is that the case with the $100 figure?

As for performance, the figure given is peak performance; real benchmark performance could be a small fraction of that. Have any benchmarks been conducted on this platform?

Finally, what is the killer app that would make it likely to be continuously developed for state-of-the-art fabrication nodes?

Appreciate your thoughts on the above!

Re: Cheap Firepower
selinz   7/27/2013 5:58:28 PM
Here's an idea. Write an x86 interpreter and run Windows!

Re: Cheap Firepower
MajorTom   7/27/2013 3:09:02 PM
"What would people use this for?"


Hmmm, how about breaking 256-bit AES?


The NSA is planning on using supercomputers, but these would require less power.


Re: Cheap Firepower
Tom Murphy   7/25/2013 3:34:08 PM
I suppose you're right, Rich. And at this price, what's not to like?  I wonder if there are plans to consumerize this...   Seems like it would be a catchy sales pitch: Why settle for an ordinary computer when you could have a supercomputer for less?
