The 'EDA classic' tool model dates back to the days when designers worked at the gate level, so the emphasis was on using gates to implement logic and on optimizing wire length for timing. Those factors are far less important today, so it's time to focus on the strong points of current technology. One is the speed and density of embedded memory. Another is that clocking many primitive blocks in parallel matters less than it used to. Implementing a function in a procedural language is more natural, although it carries a performance penalty. A high-level language can describe the function, which is then parsed to load small memories that operate in parallel to compute it. Where HDL is more appropriate, it can be used in a similar way. It amounts to an extension of the lookup tables in FPGAs.
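The idea of parsing a high-level description and loading it into memory can be sketched in a few lines. This is a hypothetical illustration (the function and table names are mine, not from any particular tool): a function written in an ordinary language is evaluated over its whole input range once, and the results fill a memory block that then answers every query with a single read, just like an FPGA lookup table scaled up to a larger embedded memory.

```python
# Hypothetical sketch: "compile" a small function into a lookup table (LUT),
# the same idea as an FPGA LUT but using a larger embedded memory block.

def compile_to_lut(fn, input_bits):
    """Evaluate fn over every possible input and store the results in a table."""
    return [fn(x) for x in range(1 << input_bits)]

# Example function: 8-bit population count, normally gate-level logic.
def popcount(x):
    return bin(x).count("1")

lut = compile_to_lut(popcount, 8)   # 256-entry memory, filled once

# At run time the "logic" is a single memory read, not a gate network:
result = lut[0b10110101]            # same answer as popcount(0b10110101)
```

Many such tables can be read in the same cycle, which is where the parallelism comes from in this model.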
The potential for using variable-size memory blocks is enormous. We have to get away from breaking everything down into primitives and take advantage of other approaches. The use of IP in OOP software is mature, and mapping Verilog modules into classes is a way to integrate the hardware function with the software during development. Running that code on embedded processors that do not require it to be broken down again into primitive instructions ties it all together. And those processors are cheap, fast, and run in parallel at a macro rather than a primitive level.
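A minimal sketch of the module-to-class mapping, with names that are purely illustrative: a simple Verilog counter module becomes a class whose clock edge is a method call, so the same behavioral model can be exercised from software during development.

```python
# Hypothetical sketch: a Verilog module modeled as a class, so hardware
# behavior can be co-developed and tested alongside the software.

class Counter:
    """Software model of a simple Verilog module:
       module counter(input clk, input rst, output reg [7:0] count);
    """
    def __init__(self, width=8):
        self.width = width
        self.count = 0

    def clock(self, rst=False):
        # Models one rising clock edge with synchronous reset.
        if rst:
            self.count = 0
        else:
            self.count = (self.count + 1) % (1 << self.width)
        return self.count

c = Counter()
for _ in range(300):
    c.clock()
# An 8-bit counter wraps at 256, so after 300 edges count is 300 % 256 = 44.
```

Several such objects can be instantiated and "clocked" independently, mirroring the macro-level parallelism the comment describes.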