
Reduce parallel programming pain with dataflow languages

6/27/2010 09:00 PM EDT
Comments: Page 2 / 2
re: Reduce parallel programming pain with dataflow languages
KarlS   6/28/2010 5:36:28 PM
In figure 3, what is the advantage of pipelining over simply having each core run on a separate data stream?

DKC: What makes RTL synchronous? HDL blocks are typically edge-triggered, and typically use a clock edge so that only one signal is involved; otherwise bad things called glitches occur when detecting the "edge" of combinatorial logic. Asynchronous data transfers require deskewing the bits, which requires a time delay to wait for the slowest bit, and delays are not well controlled in silicon. How do you expect the FSM state changes to be triggered? HDL is compiled to RTL before anything useful happens, so if the latest silicon does not handle RTL it is useless.

re: Reduce parallel programming pain with dataflow languages
DKC   6/28/2010 4:34:35 PM
You can also just add the language features of HDLs (Verilog/VHDL) to, say, C++, and get a not-so-new language that handles dataflow and event-driven programming. The main issue is that neither shared-memory nor synchronous (RTL) design styles work efficiently on the latest silicon. Going forward, hardware design and software design are going to start looking very similar: asynchronous communication, FSMs, and lots of threads.
