Over the years the semiconductor industry has overcome a wide range of technology issues and market pressures. Today it has reached an unparalleled level of innovation and success that has resulted in… even greater technology issues and market pressures.
Electronic Design Automation (EDA) companies can no longer help the semiconductor industry meet these challenges simply by creating bigger and better tools. Instead, EDA companies need to address the full range of technology requirements that their customers must reconcile with their business constraints.
EDA vendors must focus on making silicon profitable for their customers. The way to do this is to concentrate on providing differentiated solutions and technologies that address all of the main business issues.
Silicon is #1
The pace of technological development is increasing at an exponential rate. It is difficult to imagine a world that did not contain electronic products like tablet computers, smartphones, digital cameras, and personal media players. It’s also easy to forget that such products simply did not exist a few short years ago.
Semiconductor manufacturers have been incredibly successful at delivering more functionality and better, faster chips at an increasing rate. Visionary application of these chips has resulted in new products that do more, which in turn results in demand for… even more products that do even more.
Today’s lifestyle is both driving – and driven by – the connectivity, performance, integration, and technology of things. In the not-so-distant past, silicon chips typically performed only analog functions, only digital functions, or acted as memory devices. Devices also typically performed a single main function; for example, a microprocessor was used to execute software, while an RS-232 communications function would be implemented on its own device.
Things are changing. Today’s System-on-Chip (SoC) devices can combine massive amounts of digital functionality – including multiple processor cores and hardware accelerators totaling literally billions of transistors – with large analog functions and vast quantities of on-chip memory. And, in addition to the hardware portions of the device, a high-end SoC design may involve millions of lines of software code.
The semiconductor industry is also starting to experiment with 3D integrated circuits (3D ICs), in which silicon die are directly connected to each other before being encapsulated in the same package. In the past, multiple die have been mounted in the same package side by side or even on top of each other, but they have not been directly connected together. The next generation of 3D ICs will involve direct die-to-die connections, which will dramatically reduce power consumption and increase die-to-die communication speeds.
Making silicon profitable
EDA vendors need to be focused on making silicon profitable for their customers. They need to concentrate on providing differentiated solutions and technologies that address time to market, product differentiation, cost, power and performance.
In the case of digital design implementation, users need a fully integrated RTL-to-GDSII flow for high-performance, high-complexity, low-power nanometer designs. At leading-edge process nodes new design challenges and tougher time-to-market requirements cannot be addressed by traditional point-tool flows. Designers need an integrated full-chip synthesis methodology that addresses all aspects of the design flow, eliminates time-consuming manual work, mitigates physical silicon effects, prevents the introduction of new errors – especially for changes that must be made late in the design phase – and ensures design closure.
Chip analysis has become a key bottleneck for many design teams. With design sizes often exceeding 10 million cells and requiring analysis across hundreds of operating scenarios, the impact on schedules is severe. Design teams have been forced to address the limitations of current solutions either by investing in additional hardware and licenses or by cutting corners and restricting the number of scenarios that are analyzed, risking chip failure. Both approaches are expensive, and neither scales. What design teams need is a fast, high-capacity solution that can provide sign-off-quality analysis on standard hardware in timescales of minutes to a few hours – not days.
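The scale of this bottleneck is easy to see with some back-of-envelope arithmetic. The sketch below is purely illustrative: the per-scenario runtime, scenario counts, and license counts are assumed figures, not data from this article. It shows why total analysis time grows linearly with the number of operating scenarios, so that buying more licenses and trimming scenarios each trade one cost for another:

```python
import math

def analysis_wall_clock_hours(num_scenarios: int,
                              hours_per_scenario: float,
                              parallel_licenses: int) -> float:
    """Wall-clock time when scenarios run in parallel batches.

    Hypothetical model: each operating scenario (a corner/mode
    combination) takes roughly the same time, and at most
    `parallel_licenses` scenarios can run simultaneously.
    """
    batches = math.ceil(num_scenarios / parallel_licenses)
    return batches * hours_per_scenario

# Assumed figures for illustration only: 200 scenarios, 2 hours each.
full_run = analysis_wall_clock_hours(200, 2.0, parallel_licenses=8)   # 50 hours
more_hw  = analysis_wall_clock_hours(200, 2.0, parallel_licenses=32)  # 14 hours
trimmed  = analysis_wall_clock_hours(40, 2.0, parallel_licenses=8)    # 10 hours
```

With these assumed numbers, quadrupling the license count still leaves a 14-hour run, and trimming to 40 scenarios leaves 160 corner/mode combinations unverified. Neither lever alone reaches the minutes-to-hours turnaround described above; that requires the analysis itself to get faster.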
With regard to the analog portions of a complex SoC, designers require a comprehensive, state-of-the-art design platform specifically tuned to meet the current and future needs of analog/mixed-signal designers. Such a platform should deliver first-time-correct, predictable mixed-signal designs, without sacrificing performance, while shortening the design process by weeks. Also, by means of automated mixed-signal assembly and verification, the platform should provide an order-of-magnitude productivity improvement over other tool flows.
In the case of verification, analog and digital blocks have traditionally been verified independently with different simulation products that vary in accuracy. When analog and digital blocks are combined in one simulation, verifying them together usually requires some additional modeling techniques that only approximate circuit behavior. With this type of approach it is very common for engineers to spend a significant amount of time interpreting the results. In some cases engineers waste time chasing down false design errors; more often the result is that real design problems are completely missed.
As mixed-signal designs increase in size and grow more complex, achieving correct functional verification becomes very challenging. In fact, verification becomes virtually impossible for current simulation solutions once fully extracted parasitic capacitances are introduced. For today’s SoCs, designers need the ability to functionally verify mixed-signal designs seamlessly using a single engine, without the overhead of traditional solutions. And it’s not simply a matter of being fast and accurate; of equal importance is the ability to support a huge capacity, thereby allowing the verification of very large circuits with SPICE-level accuracy.
Be the best at what you do!
Some EDA vendors attempt to be all things to all people. They try to lock customers into complete end-to-end flows, but it simply isn’t possible for one company to do everything well – there is no “one size fits all” in EDA.
EDA vendors should aim to be the best at what they do, and to actively collaborate with other EDA vendors who are the best at what they do.
About the author
Behrooz Zahiri is vice president of business development at Magma Design Automation (www.magma-da.com), responsible for corporate business strategy and market solutions. Previously Zahiri was vice president of product marketing for Magma’s flagship RTL-to-GDSII product lines. Before joining Magma, Zahiri served as Actel’s director of marketing for FPGA software products, EDA business development, application engineering and customer support. He has also held various engineering positions at Intel Corporation.
The author of numerous IC industry articles, Zahiri has 17 years of experience in computer and IC chip design. He holds a Master of Science degree in Electrical Engineering from Stanford University and a bachelor’s degree in Electrical Engineering and Computer Science from the University of California, Berkeley.