
Reduce parallel programming pain with dataflow languages

6/27/2010 09:00 PM EDT
Comments (Page 2 / 2)
KarlS (User Rank: Rookie) — 6/28/2010 5:36:28 PM
re: Reduce parallel programming pain with dataflow languages
In figure 3, what is the advantage of pipelining over simply having each core run on a separate data stream?

DKC: What makes RTL synchronous? HDL blocks are typically edge-triggered and typically use a clock edge so that only one signal is involved; otherwise, bad things called glitches occur when trying to detect the "edge" of combinatorial logic. Asynchronous data transfers require deskewing the bits, which requires a time delay to wait for the slowest bit, and delays are not well controlled in silicon. How do you expect the FSM state changes to be triggered? HDL is compiled to RTL before anything useful happens, so if the latest silicon does not handle RTL, it is useless.

DKC (User Rank: Rookie) — 6/28/2010 4:34:35 PM
re: Reduce parallel programming pain with dataflow languages
You can also just add the language features of HDLs (Verilog/VHDL) to, say, C++ and get a not-so-new language that handles dataflow and event-driven programming — http://parallel.cc

The main issue is that neither shared-memory nor synchronous (RTL) design styles works efficiently on the latest silicon. Going forward, hardware design and software design are going to start looking very similar: asynchronous communication, FSMs, and lots of threads. http://www.linkedin.com/in/kevcameron
