This all changes when you get to the control logic. Control logic is the opposite of processing: it is time-dependent. The results, the output of the block, depend on when the inputs arrive. There are two types of control logic, as far as this discussion is concerned: reactive and non-reactive.
By definition, reactive control logic needs to respond to something. It involves tight interactions between blocks in the design, as in, for example, a request-response type of interaction. When a control block is triggered to do something and needs to come back with a response almost immediately, concurrency is required. The two processes need to run separately so you can model how one reacts to the other.
Now that time is part of the functional spec, it is part of what defines correctness, so it needs to be in the source code, and you need to model it. Therefore, although sequential C++ can be used for non-reactive control logic in processing pipelines—such as arbiters—SystemC is a more natural choice for reactive control because it allows explicit modeling of timing and concurrency. This has the added benefit of allowing designers to model the types of non-deterministic behavior common to reactive control.
Illustrating the usefulness of SystemC for expressing reactive control designs does not require a complex example. The key difference, in this case, between SystemC and untimed C++ is that SystemC allows the testbench to execute as a thread running in parallel with the DUT, enabling the testbench to react to any control requests coming from the DUT. The example shown below is a simple averaging filter that averages two values of x and writes the output to y. The averaging module, the DUT, provides a static base address and offset to the testbench, which uses them to look up the values of x at some location in memory. The reading of x blocks until the testbench provides the data using the address written to addr. In SystemC this is not an issue, since the testbench runs concurrently with the DUT: when the DUT stalls, control is transferred back to the testbench, allowing it to react to the address supplied by the DUT. In sequential C++, the blocking read of x would stall the DUT without returning control to the testbench, leading to a simulation deadlock.
Maybe this example will help clear up some of the "huh?!" discussion: your task is to HLS a block that crunches data in a certain way and is interfaced into a certain system environment.
For the crunching part, you want to use the HLS tool to help you examine lots of microarchitectures, meaning implementation possibilities (various data widths, depths of pipelining, etc.). You may even want to create two or three different implementations at different price/performance points, and the tool must create the RTL for all of those different implementations. You change only a few microarchitectural parameters, pipeline depth, for example, push a button, and out pops radically different RTL.
In the meantime, however, there are interfaces to the world around that algorithmic portion that absolutely must proceed according to a very exact, and possibly intricate, timing definition. You have to ensure that those timing concerns are never violated. It's possible, for example, that a certain implementation of that algorithmic logic will not be able to process the data quickly enough to satisfy the data rates of the interface.
If that's the case, we want the HLS tool, not the designer, to limit the available choices of microarchitectures so that the bandwidth requirements of the interface will never be violated.
This article shows that SystemC is required to do any real design. Since SystemC is a class library of C++, and thus a superset, and since SystemC processes can contain pure untimed C++ code, shouldn't the article be titled "SystemC is the language of ESL"?
Also why does the article avoid the very popular TLM standard which enables easy separation of the interface from the computation and yields large simulation speedups (instead of inserting an RTL interface from a library)?
For those interested in how to do production design with SystemC, there's an archived EETimes webinar by Mark Warren aptly titled "Practical application of high-level synthesis in SoC designs".
"... but it has been proven that it is easy to extract parallelism from sequential sources."
This will come as news to everyone in the High Performance Computing community, who have been attempting to do this, unsuccessfully, for over 40 years. It will also be news to the authors of the numerous textbooks on parallel algorithms (if extracting parallelism were easy, why would we need them?).
Sam Fuller (CTO Analog Devices) and co-author Lynette Millett have the opposite opinion: "Experience has shown that parallelizing sequential code or highly sequential algorithms effectively is exceedingly difficult in general." in their article "Computing Performance: Game Over or Next Level", IEEE Computer, January 2011, pp. 31-38, reporting on the NSF-sponsored study by the Computer Science and Telecommunications Board of the US National Academy of Sciences.
Why is the example given for these HLS tools always a trivial datapath block such as an FIR filter?
It gives the impression, rightly or wrongly, that the tools are only good for simple pipelines. I am not losing sleep over those sorts of designs.
DKC - yes, I was a bit perplexed by the title as well. Multiple HLS technologies should co-exist in a single work-flow, not only these. It really comes down to what application you are designing for and what V&V activities are required, which can really impact the quality of the product and TTM.
What war? SystemC is just a C++ class library. It would be pretty lame not to be able to have them coexist.
How about analog and power? When will we see -
"The wait is over: C++ and Spice coexist in a single flow"?