Rumors of the SoC's impending death have been circulating in the semiconductor and systems industries. Are they exaggerated? Not entirely. A decreasing number of companies are investing in systems-on-chip (SoCs). Likewise, the number of concurrent SoC projects that typical R&D organizations can undertake is shrinking. The reason: soaring design cost and poor schedule predictability. That makes SoC development increasingly difficult to justify. But does this foreshadow the SoC's complete demise? I doubt it, but these factors will surely chase more players from the market and drive greater use of alternative solutions.
A look inside the venture capital community reveals a bellwether of the trend. During the past five years, a declining number of firms have shown interest in investing in SoC start-ups (although a lot of money has poured into programmable devices aiming to unseat Xilinx and Altera). Only when the SoC's risks can be significantly mitigated and revenue projections credibly defended will investors even consider the opportunity. Even then, it's a tough sell.
Up until a few years ago, a customer of mine (a chip company) routinely had six or seven concurrent SoC projects underway. Today, the number is a mere three, with a combined development cost of $150 million to $200 million. To justify the investment, the three products must garner more than a billion dollars in sales revenue. The company has a good chance of meeting its targets, because it consistently hits its development schedules, but there is no guarantee that the expected volumes will fully materialize. In other words, even with best-in-class R&D execution and predictability, market uncertainty makes it a risky bet.
Systems companies developing their own chips (ASICs) encountered a similar situation starting in the second half of the 1990s. High cost, complexity and risk made ASIC development prohibitive except among those whose end-products boasted high profit margins and volume. The requirement persists today. Not surprisingly, many systems houses still developing their own ASICs struggle to rationalize the cost, especially those with poor schedule predictability.
Engineering labor consumes much of an SoC's development cost; team sizes typically range from 100 to 200 engineers. Such large teams result directly from the inability of R&D productivity to keep pace with growing design complexity. Simply put, to offset falling relative productivity, more engineers are needed to achieve the throughput required to meet time-to-market constraints. More engineers mean higher cost, although many companies attempt to reduce it by offshoring development.
Cost mitigation tactics notwithstanding, I suspect that we will continue witnessing the decline of the (dedicated) SoC, except among those companies boasting both market vision and consistent R&D excellence, which means delivering high-margin product on time and within budget to high-volume markets.
Ronald Collett is president and CEO of Numetrics Management Systems, Inc.
Another major factor is that SoCs have not really been complete systems, only large chunks. As time moves on we are seeing higher levels of integration, therefore reducing the number of SoCs in an actual system. Eventually we will have only one per system, with as many of the other components part of it as well.
I agree with Ron about the decrease in SoC tapeouts. In the video ASIC industry there is a strong consolidation going on that will only intensify as Intel, AMD and NVIDIA add more HW-accelerated support for video codecs and video processing.
Unfortunately, the pipeline seldom runs that efficiently. Design engineers are often tied up in documentation, SW bring-up, early-access customer bugs and other less-than-ideal tasks for at least two quarters after tapeout.
Don't forget about the software. Typically customers expect Linux, Android, and WinCE ports along with some RTOS ports as well. With today's SoCs having 30 or more peripherals, including graphics, multimedia, and protocol accelerators that need to be integrated into higher-level stacks, you often need dozens of SW engineers as well. Furthermore, with the semiconductor-supplied software now running into tens or hundreds of megabytes, the application support burden becomes significant as well.
But SoC developments are pipelined. At the end of the second year, the first SoC is validated and in production; the fourth SoC is in design; the second and third SoCs are in verification and bring-up.
A two-year development cycle does not have to carry the full labor cost per chip. Instead, because of the design pipeline, the per-SoC labor cost is lower than the $20M to $30M I put there.
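The commenter's amortization argument can be sketched numerically. All figures below (team size, loaded cost per engineer) are illustrative assumptions, not numbers from the article:

```python
# Illustrative effect of pipelining on per-SoC labor cost.
# Team size and cost-per-engineer are assumed figures, not from the article.

team_size = 150          # engineers, midpoint of the 100-200 range
cost_per_engineer = 0.2  # $M per engineer-year, assumed fully loaded
cycle_years = 2          # development cycle per SoC

# Serial case: the whole team works one SoC for the full two-year cycle.
serial_labor_per_soc = team_size * cost_per_engineer * cycle_years

# Pipelined case: the same team tapes out one SoC per year, so labor per
# chip is only the annual spend, not the full-cycle accrual.
annual_labor = team_size * cost_per_engineer
pipelined_labor_per_soc = annual_labor

print(serial_labor_per_soc, pipelined_labor_per_soc)  # 60.0 30.0
```

Under these assumptions, pipelining halves the labor charged to each SoC, which is why the commenter argues the true per-chip labor cost falls below the $20M to $30M figure.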
"Today, the number is a mere three, with a combined development cost of $150 million to $200 million."
How do you arrive at the estimated $150M to $200M development cost?
The cost of 100 to 200 engineers per year is about $20M to $30M. Beyond that, the major costs are IP, licenses, fab and tools. Are those costs really more than 80% of the SoC development cost? ($150M - $30M) / $150M = 80%.
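The back-of-the-envelope split in the question above can be written out explicitly. The team size, loaded cost per engineer, and the $150M program total are the commenter's rough assumptions, not audited figures:

```python
# Back-of-the-envelope SoC cost breakdown, using the rough figures from
# the comment above (all numbers are assumptions, not audited data).

team_size = 150                 # midpoint of the 100-200 engineer range
loaded_cost_per_engineer = 0.2  # $M per engineer-year, assumed fully loaded

annual_labor = team_size * loaded_cost_per_engineer  # ~$30M per year

total_program_cost = 150.0      # $M, low end of the article's $150M-$200M
non_labor = total_program_cost - annual_labor        # IP, licenses, fab, tools
non_labor_share = non_labor / total_program_cost

print(f"annual labor: ${annual_labor:.0f}M")       # annual labor: $30M
print(f"non-labor share: {non_labor_share:.0%}")   # non-labor share: 80%
```

If labor really is only ~$30M per year, the remaining 80% would have to come from IP, license, fab and tool costs, which is exactly what the commenter is questioning.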