What do people in the EDA and semiconductor industries expect to happen in 2013? Their answers make clear the directions they intend to focus on and where they will place their dollars...
A few weeks ago, I asked many people in the industry for their predictions for 2013, asking separately about technology in general, the EDA and IP industries, and business. In the first part I covered technology predictions. Today, we look at those related to the EDA and IP industries, and as can be expected most follow the party line fairly closely.
They appear in the order in which they were received.
Mike Santarini - Publisher of Xilinx’s Xcell Journal
Creating products that achieve new levels of system functionality will require tools that can handle that complexity while offering greater degrees of automation for system design. We believe ESL will make a great move into mainstream design methodologies, especially in the All Programmable SoC, 3D IC and FPGA spaces.
Mike Gianfagna - VP of Corporate Marketing, Atrenta
The slowing of Moore's Law will continue to cause issues in 2013. Design teams will stay at mature nodes longer, and some will skip nodes completely to avoid hitting the learning curve more than once. We'll see more 2.5D designs (silicon on interposer). This is another cure for the Moore's Law problem.
Beyond uptake of 2.5D ICs, the IP industry will mature. Due to the critical need for predictable IP reuse, IP consumers will begin to demand uniform quality of deliverables from their IP suppliers. This will create opportunity for companies who can measure and report IP quality. It should create a more vibrant industry with higher IP sales and more design starts.
Rick Stanton - Director of ENOVIA Strategy for Semiconductor and ALM Experiences, Dassault Systèmes
I expect the evolution in EDA to continue rising above individual features and functions and to focus more on "whole product" development.
The growing importance of semiconductors and software in mechatronics will help fuel domain initiatives in areas such as automotive electronics and mobility.
Adnan Hamid - Chief Executive Officer, Breker Verification Systems
One thing is certain for 2013: SoC functional verification will be even harder than it was in 2012. Industry observers report that the long-predicted crisis has materialized, with verification consuming the majority of resources on SoC projects. In response, many teams have attempted to treat verification much as they treat design, as a hierarchical problem. They verify each IP block, generally well, and assemble the blocks into a complete chip, often with minimal top-level verification.
Unfortunately, most verification IP (VIP) developed at the block level is not reusable at the full-chip level. The Universal Verification Methodology (UVM) provides some guidelines for reuse, but writing a virtual sequencer to tie all the blocks together is hard. Shortchanging chip-level verification is not the answer, however, since end-to-end user and performance scenarios cannot possibly be run on individual IP blocks.
A solution is emerging: graph-based scenario models that capture the intended behavior of the IP blocks. These can be combined into scenario models for major subsystems or the complete SoC, from which tools can automatically generate self-verifying C test cases that run on the multiple heterogeneous embedded processors within the SoC. 2013 will see broad adoption of this approach, verifying the complete SoC while finally providing a level of verification reuse matching that of design reuse.
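The core idea can be illustrated with a toy sketch (a purely hypothetical graph with invented action names, not Breker's actual tool or format): each node is an IP-block action, edges encode legal orderings, and every root-to-leaf path through the graph is an end-to-end scenario that no single block-level testbench would exercise on its own.

```python
# Illustrative graph-based scenario model (hypothetical SoC actions).
# A real tool would emit self-verifying C tests; here we just enumerate
# the end-to-end action sequences implied by the graph.
scenario_graph = {
    "start":     ["cfg_dma", "cfg_usb"],  # two ways to begin a scenario
    "cfg_dma":   ["dma_copy"],
    "cfg_usb":   ["usb_rx"],
    "dma_copy":  ["crc_check"],
    "usb_rx":    ["dma_copy"],            # USB data is then moved by DMA
    "crc_check": [],                      # leaf: scenario complete
}

def enumerate_scenarios(graph, node="start", path=None):
    """Depth-first walk returning every root-to-leaf action sequence."""
    path = (path or []) + ([node] if node != "start" else [])
    children = graph[node]
    if not children:
        return [path]
    out = []
    for child in children:
        out.extend(enumerate_scenarios(graph, child, path))
    return out

for scenario in enumerate_scenarios(scenario_graph):
    print(" -> ".join(scenario))
```

Note how the second scenario (`cfg_usb -> usb_rx -> dma_copy -> crc_check`) crosses three IP blocks; that cross-block ordering is exactly what block-level VIP cannot check.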
Dr. Zhihong Liu - Executive Chairman, ProPlus Design Solutions
If circuit designers thought 28nm challenges were difficult, they’re about to find their jobs have gotten a whole lot harder at 20nm due to increased random variations and the effects of layout dependencies, plus double patterning.
The current approach to yield prediction and design optimization will be improved in 2013. It has to be. Today, circuit designers mostly rely on foundry model libraries, selectively run process, voltage and temperature (PVT) corner analysis, and perform limited Monte Carlo analysis, which yields inconclusive information about design effects.
I also predict the community will pay far more attention to identifying ways to accurately model 20nm process effects in SPICE models and properly apply them in circuit design and yield analysis. They’ll look for solutions, such as a DFY toolkit that can handle process variations with accurate SPICE models, a fast and reliable statistical simulation engine and hardware-validated sampling technologies.
Circuit designers will find a solution that is more practical, reliable, fast and integrated, in the form of statistical yield analysis software.
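To make the statistical side concrete, here is a minimal Monte Carlo yield estimate. It is a sketch under invented numbers (toy delay model, hypothetical threshold-voltage spread and spec limit), not ProPlus's DFY toolkit: sample a varying device parameter many times and count how often a derived performance spec is met.

```python
# Toy Monte Carlo yield estimation (all parameters are hypothetical).
import random

random.seed(1)

VTH_NOM = 0.45        # nominal threshold voltage, volts (invented)
VTH_SIGMA = 0.02      # 1-sigma random variation at an advanced node
SPEC_MAX_DELAY = 1.1  # normalized delay limit (invented spec)

def gate_delay(vth):
    """Toy linearized delay model: delay rises with threshold voltage."""
    return 1.0 + 8.0 * (vth - VTH_NOM)

def estimate_yield(n_samples=100_000):
    """Fraction of sampled devices meeting the delay spec."""
    passing = sum(
        gate_delay(random.gauss(VTH_NOM, VTH_SIGMA)) <= SPEC_MAX_DELAY
        for _ in range(n_samples)
    )
    return passing / n_samples

print(f"estimated yield: {estimate_yield():.3f}")
```

Real statistical simulation engines replace the toy delay model with full SPICE runs and use smarter sampling than brute-force Monte Carlo, but the pass/fail counting idea is the same.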