Most HDLs that have been proposed are like Chisel and C~: registers are explicit, and events such as clock edges are implicit.
Notable exceptions are Verilog and VHDL. These HDLs follow the event-driven paradigm: registers are implicit and events are explicit.
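To make the contrast concrete, here is a toy Python sketch of the two paradigms. This is illustrative pseudocode only, not real MyHDL or Chisel syntax; all names are invented for the example.

```python
# Event-driven style (Verilog/VHDL/MyHDL): the clock edge is an explicit
# event, and q behaves as a register only because it is assigned on that edge.
def event_driven_counter(cycles):
    q = 0
    trace = []
    for _ in range(cycles):        # each iteration models one explicit posedge event
        q = q + 1                  # code executed "on" the clock edge
        trace.append(q)
    return trace

# Register-explicit style (Chisel/C~-like): the register is a declared
# object; the clock that updates it is implicit, handled by the framework.
class Reg:
    def __init__(self, init=0):
        self.value = init
        self._next = init
    def set_next(self, v):
        self._next = v
    def tick(self):                # called by the framework; user code never sees the clock
        self.value = self._next

def register_explicit_counter(cycles):
    q = Reg(0)
    trace = []
    for _ in range(cycles):
        q.set_next(q.value + 1)    # combinational "next value" logic
        q.tick()                   # implicit clock edge
        trace.append(q.value)
    return trace
```

Both produce the same behavior; the difference is purely in what the designer writes explicitly (the event versus the register).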
Would-be HDL designers seem to ignore this historical lesson systematically. The winning HDLs are the exception; all the others are all but forgotten.
Why is that? Because the event-driven paradigm is much more expressive for modeling and better suited to verification (which is much harder than design itself). This is what the market wants and needs, not the focus on "full synthesizability" or "generating hardware".
In addition to SystemC, there is one more exception that I know of: MyHDL. MyHDL proudly follows the event-driven paradigm, like Verilog and VHDL. Of all the newer HDLs that have been proposed, it is the only one afaik.
Two basic reasons: (1) Hardware designers are reluctant to move from HDL to C -- even to hardware-friendly SystemC, and (2) The productivity gains of HLS only apply to DSP-oriented designs -- number crunchers. Try designing a complex state machine or a serial comm interface with HLS. Yes, it can be done, but by the time it is done, the designer wonders exactly what the point of that exercise was.
Yes, these are big questions, and we look forward to customers' reactions! Graphical design is a tough question, too. We already know it's not perfect for every use case in its current form; that said, if done well, graphical can be awesome. Anyway, it is already a huge improvement over how VHDL/Verilog/SystemC do it!
What we're saying is that software engineers may well get interested in designing hardware, but if they do, they should not expect to dump in a piece of C and get the best hardware there is (HLS promised that, and I already blogged about why it will never be able to deliver it). However, if software engineers do get into hardware design and want a high quality of results, they would definitely go a lot faster with C~ than if they had to use VHDL/Verilog.
Yes, I'm aware of Chisel; I included it in my post on Synflow's blog, Beyond RTL part 2: Domain-Specific Languages. I think Chisel is a good idea, like MyHDL and similar efforts. It has the advantage of being open-source, but being open or closed source, free or not, does not seem to have made a big difference in EDA historically.
Now back to Chisel: it is a bit like SystemC in that it is based on an existing, large, complex language (Scala is much better than C++, but it is far from simple). Is the designer really free from having to deal with the Scala layer? Beyond that it's a matter of taste, but I find Chisel's syntax complicated and its semantics difficult to understand ('when' and 'unless' do if/then/else, but in a slightly different way).
The major difference is that C~ is a programming language dedicated to hardware, unlike C/C++, which are software languages. Try translating 'malloc' or 'new' to hardware ^^ SystemC is somewhere in the middle, but I think it is not the solution.
C~ is above RTL, as it does not have an explicit clock, reset, or signals. The language is still cycle-accurate, so it's lower-level than the code you would typically write with HLS. So it's kind of in the middle: we use a model of computation derived from dataflow process networks, which defines computation in terms of action firings; in C~, one firing executes in one cycle.
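A minimal Python sketch of the "one firing per cycle" idea, in the spirit of dataflow process networks (this is an illustration I wrote, not C~ syntax; the actor and simulator names are invented):

```python
from collections import deque

class Adder:
    """A dataflow actor: it fires when both input queues hold a token,
    and each firing maps to exactly one clock cycle."""
    def __init__(self):
        self.a, self.b, self.out = deque(), deque(), deque()

    def can_fire(self):
        # Firing rule: one token available on each input.
        return bool(self.a) and bool(self.b)

    def fire(self):
        # Action body, executed within a single cycle.
        self.out.append(self.a.popleft() + self.b.popleft())

def simulate(actor, max_cycles):
    """Advance cycle by cycle; the actor fires whenever its rule holds."""
    cycles = 0
    while cycles < max_cycles and actor.can_fire():
        actor.fire()
        cycles += 1
    return cycles

adder = Adder()
adder.a.extend([1, 2, 3])
adder.b.extend([10, 20])
ran = simulate(adder, max_cycles=10)
# Only two firings happen: the third 'a' token has no matching 'b' token.
```

The cycle-accuracy comes from the rule that a firing never spans cycles, which is what keeps the model lower-level than HLS while still hiding the clock.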
As I explained in the post I linked to above, extending C++ is probably not a good idea: it is difficult to parse, complicated to analyze, and you still have to deal with the C++ layer (hello, ugly error messages).
I agree there has been a reluctance to move up to higher levels of abstraction, but that is partly because HLS does not really work. It is definitely not sufficiently automated.
So you end up modeling at a high level, then designing at a lower level, and then spending a lot of effort trying to gain confidence that what you modeled at the high level is equivalent to what you are designing at the lower level.
And why has HLS not worked?
Here is my argument.
Because unlike RTL-to-gate-level -- where designers were willing to give up messing about with transistors, gates, and cell dimensions -- there was no obvious set of constraints designers would accept (even at the IP core level).
And that is because there is no equivalent to the NAND completeness of logic at the gate level. Isn't it De Morgan's theorems that show all logic can be converted into NAND gates or NOR gates? Being constrained to a limited set of gates therefore allows all digital logic to be created, and it is precisely that constraint that makes synthesis work.
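The functional completeness of NAND can be checked directly. Here is a short Python sketch building NOT, AND, and OR from NAND alone (the OR construction is where De Morgan comes in: a OR b == NOT(NOT a AND NOT b)):

```python
def nand(a, b):
    return not (a and b)

# NOT from one NAND: tie both inputs together.
def not_(a):
    return nand(a, a)

# AND from two NANDs: NAND followed by an inverter.
def and_(a, b):
    return nand(nand(a, b), nand(a, b))

# OR from three NANDs, via De Morgan: a or b == not(not a and not b).
def or_(a, b):
    return nand(nand(a, a), nand(b, b))

# Exhaustively verify the constructions against Python's own operators.
bits = [False, True]
assert all(not_(a) == (not a) for a in bits)
assert all(and_(a, b) == (a and b) for a in bits for b in bits)
assert all(or_(a, b) == (a or b) for a in bits for b in bits)
```

Since {NOT, AND, OR} can express any Boolean function, so can NAND alone, which is exactly the kind of small complete basis that gate-level synthesis exploits and that has no counterpart at the IP-core level.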
There is no equivalent limited set of IP cores that could create all imaginable logic circuits, so engineers are reluctant to accept any constraints on IP cores or their dimensions, and there is no obvious way to guide a synthesis engine.
Until we have automated, provable synthesis of yet-unimagined logic, EDA is stuck working at multiple levels of abstraction and struggling to prove their equivalence.
I run a large research program that standardized on Bluespec. We still use it, but when we have collaborators working from home, it is tough to get them licenses, and Chisel's open-source nature works well in that scenario.
Also, Chisel lets you use multiple models of computation, which is convenient.
Both generate Verilog, so the existing tool flow does not change. Frankly, you need to be a hard-core masochist to use Verilog, VHDL, or SystemVerilog! The HDL community has been a bit resistant to the idea of higher levels of abstraction.
By the way, most research programs use one of these newer languages. I typically never see Verilog or SV.
U Penn, as part of the Crash Safe DARPA project, uses Bluespec. Link to CPU source.