There has always been much hand-wringing in the electronic design automation (EDA) business about growth. We look at the markets we serve – semiconductors and electronics – and see enticing upward-bound curves as electronics permeate more of our everyday lives. Painfully aware of our single-digit growth rates, we debate how EDA should participate more fully in that trajectory and in the value chain it enables. We search for our own version of the killer app, the iPad of EDA, that will finally ignite a dramatic take-off in sales.
It’s unlikely to happen, at least in a global and disruptive way like the iPad. Despite the never-ending need for innovation to deal with IC complexity, EDA has historically been a replacement business serving a relatively finite market of users. When automation is part of your segment definition, you should expect customers to pay at best a small premium over what they could do themselves. We sell new tools that replace old tools to essentially the same group of users. From the classic EDA perspective this has been predictable, because the tool business is tied tightly to Moore’s Law of shrinking geometries and increasing complexity. Evergreen, but a replacement business. These factors inherently limit the industry’s growth and prevent the emergence of any kind of consumer-driven home-run product.
That’s not to say we can’t grow, and grow nicely. On a macro level over the past 30 years, EDA has seen upticks with each fundamental shift in end markets, and those shifts align fairly closely with major expansions in the EDA business:
1) In the 1980s, the PC era began (and commercial EDA was born).
2) In the 1990s, the Internet Age dawned, and true front-end CAE with RTL/synthesis-based methodologies took hold alongside the new application-specific integrated circuit (ASIC) business model.
3) In the 2000s, portable wireless connectivity drove new growth, and EDA grew with technologies that enabled low-power, small-form-factor designs under the new commercial foundry model.
Today, we stand at the early stage of what can best be described as the Convergence Era, as all three previous macro shifts combine with some recent trends into new types of products and markets.
At a micro level within EDA, there have been potential “breakout” strategies: addressing emerging global markets (e.g., China, India), providing new levels of incremental (non-replacement) functionality (e.g., ESL, DFM), and even stretching the definition of EDA by entering peripheral businesses (e.g., semiconductor IP, software development, design services). While none of these has yet proven potent enough to radically change the scale of our business, they do represent opportunities to build on our base at a slow, linear rate.
At either level, EDA’s fundamental value has always been to make design engineers more productive in the face of increasing complexity, so that companies can respond better to market opportunities while lowering risk and development cost.
Use of IP in an SoC is analogous to object-oriented programming (OOP), so much can be borrowed from OOP software development. Today’s EDA tools do nothing to assist the designer in the early stages; instead they seem to assume the design is complete on day one. Synthesis optimization runs as part of the first compile, when the design is not yet complete, so it throws away anything that is not fully connected and merely reports that it “synthesized away these nodes.” An OOP compiler, by contrast, gives meaningful error messages. Likewise, an OOP source editor offers selectable information for classes and methods as you type; HDL source editors offer no such help. Real chips are made of and/or/invert gates, not if/else/case/always statements. IP should be defined as a class so it can be compiled and instantiated along with the software in the system design, and the function then mapped onto the chip. Since the IP class would have been derived from the hardware design, it would simply be a matter of instantiating the IP modules.
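To make the “IP as a class” idea concrete, here is a minimal C++ sketch of the scheme the comment describes: an IP block modeled as a class and instantiated alongside system software, so an incomplete instantiation produces a meaningful compile-time error instead of logic being silently synthesized away. The names here (IpBlock, UartIp) are purely illustrative assumptions, not from any real EDA tool or library.

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Base class capturing what every IP block shares: a name (and, in a
    // fuller model, its ports). Hypothetical sketch, not a real library.
    class IpBlock {
    public:
        explicit IpBlock(std::string name) : name_(std::move(name)) {}
        virtual ~IpBlock() = default;
        const std::string& name() const { return name_; }
    private:
        std::string name_;
    };

    // One specific piece of IP, derived from the base; its constructor
    // arguments play the role of HDL parameters/generics.
    class UartIp : public IpBlock {
    public:
        UartIp(std::string name, uint32_t baud)
            : IpBlock(std::move(name)), baud_(baud) {}
        uint32_t baud() const { return baud_; }
    private:
        uint32_t baud_;
    };

    int main() {
        // Instantiating IP becomes ordinary object construction; leaving
        // out a required parameter is a compile error with a clear message,
        // rather than nodes quietly "synthesized away" later.
        std::vector<UartIp> uarts;
        uarts.emplace_back("uart0", 115200);
        uarts.emplace_back("uart1", 9600);
        for (const auto& u : uarts)
            std::cout << u.name() << " @ " << u.baud() << " baud\n";
        return 0;
    }

Mapping such instantiated classes onto actual gates is, of course, the hard part the comment glosses over; this sketch only illustrates the instantiation side of the analogy.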
Food for thought.
Your point about the risk introduced by having at least two distinct teams (software at one end and silicon implementation at the other) is an important one. I agree that the hard boundaries drawn between the 'hardware' team and the 'software' team make it easy for cracks to open up - cracks in which unrecorded architectural requirements or assumptions get lost.
A breed of SoC engineer that understands the big architectural picture, the tools and techniques used at each stage and the methodologies for keeping the implementation in line with the system-level models would be of great value. Until there is a critical mass of such engineers, I wonder if the best tools in the world can be readily adopted...?
Join our online Radio Show on Friday 11th July starting at 2:00pm Eastern, when EETimes editor of all things fun and interesting, Max Maxfield, and embedded systems expert, Jack Ganssle, will debate just what is, and is not, an embedded system.