The 'EDA classic' tool model dates from the days when designers worked at the gate level, so the emphasis was on implementing logic with gates and optimizing wire length for timing. Those factors are far less important today, so it is time to focus on the strong points of current technology. One is the speed and density of embedded memory. Another is that the ability to clock many primitive blocks in parallel matters less than it once did. Implementing a function in a procedural language is more natural, although it carries a performance penalty. A high-level language can be used to express the function and then parsed to load small memories that operate in parallel to perform it. Where an HDL is more appropriate, it can be used in the same way. This is essentially an extension of the lookup tables in FPGAs.
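The idea of expressing a function in a procedural language and then loading it into a small memory can be sketched as follows. This is a minimal illustrative example, not a real toolflow; the helper name `make_lut` and the 4-bit width are assumptions for the sketch.

```python
def make_lut(fn, bits):
    """Evaluate fn over every bits-wide input once, storing the
    results in a table -- the 'small memory' that replaces the logic."""
    return [fn(x) for x in range(1 << bits)]

# The function written naturally in a procedural language...
def parity(x):
    p = 0
    while x:
        p ^= x & 1
        x >>= 1
    return p

# ...is evaluated up front to load the memory.
lut = make_lut(parity, 4)

# At run time the function collapses to a single memory read,
# just as an FPGA LUT replaces a cone of gates.
assert lut[0b1011] == parity(0b1011)
```

Many such tables can be read in parallel, which is where the density and speed of embedded memory pay off.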
The potential for using variable-size memory blocks is enormous. We have to get away from breaking everything down into primitives and take advantage of other means. The use of IP in OOP software is mature, and mapping Verilog modules into classes is a way to integrate the hardware function with the software during development. Running that code on embedded processors that do not require it to be broken down again into primitive instructions ties it all together. By the way, those processors are cheap, fast, and run in parallel at a macro rather than a primitive level.
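The mapping of a Verilog module into a class can be sketched like this. The module (an 8-bit counter), its port names, and the class shape are all illustrative assumptions; the point is only that ports become attributes and the clocked behavior becomes a method, so hardware and software can be co-developed against the same interface.

```python
class Counter:
    """Software mirror of a hypothetical Verilog module:
         module counter(input clk, input rst, output [7:0] q);
    Ports map to attributes; the always @(posedge clk) block
    maps to the clock() method."""

    def __init__(self):
        self.q = 0  # output register [7:0]

    def clock(self, rst=0):
        # One rising clock edge, matching the HDL semantics:
        # synchronous reset, otherwise increment modulo 256.
        self.q = 0 if rst else (self.q + 1) & 0xFF

c = Counter()
c.clock(rst=1)      # reset
for _ in range(3):
    c.clock()
assert c.q == 3
```

During development the class runs as ordinary software; the same module boundary later becomes the hardware interface, with no need to re-flatten the design into primitives.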