Semiconductor technology is the key enabler that has allowed the Internet to evolve to its current sophistication. Ironically, the Internet is one of the main contributors to what I'll call the sparse matrix problem: a large, multi-dimensional matrix of ecosystem choices confronting system-on-chip (SoC) design teams, in which the ideal solution to deliver to customers is often simply not available.
As a result, the planning and execution of an SoC have lengthened rather than shortened, and become riskier, not safer. It's worth examining the genesis of this sparse matrix, analyzing its ramifications, and understanding how ecosystem participants can successfully advance the industry.
At the dawn of the new millennium, the industry struggled with 130nm process technology and a lack of market clarity. Processor choices for SoCs were rather clearly delineated by application. Customer requirements and competitors' datasheets weren't nearly as accessible, so information was a highly prized differentiator. Life wasn't easy, but it wasn't complex.
Given this environment, arbitraging market information was a common and preferred way to win in the marketplace. For example, Broadcom, through its acquisition of ServerWorks, correctly bet on DDR DRAM instead of RDRAM and ended up generating a third of its 2002 revenue from this product line alone.
Fast forward to 2013, and it's scary. With relatively few tweaks to process integration, TSMC has delivered four variants of highly advanced 28nm process technology, each reaching stable yields and each offering clear differentiation in cost, performance, or power. ARM's recent introduction of the Cortex-A12 underscores the abundance of application processors to choose from, depending on time horizon, performance, and power budget (A7, A9, A15, A53, A57, and now A12).
The design team just needs market information to make the right choices in this richly enabled ecosystem, and the Internet delivers. What was once highly guarded Intel Developer Forum (IDF) presentation material is not only widely available but also just one of many sources of market, customer, and competitive data.
Thanks to blogs and tweets, rumors and gossip about Silicon Valley are only hours away, even if one lives 12 hours (or 12 1/2 hours, to be precise) away in India. The Internet holds more information today than most development teams can consume, and it's available for free.
The theory is that the transparency brought by the Internet enables us to make the most efficient choice and deliver just the right performance, power, and cost solution to a fickle customer base demanding the best at 10% lower cost every year. Right?
Well, in theory the eigenvalue should be easy to find, as the market requirement has become razor sharp. In practice, because the matrix is sparsely populated, the design team quickly finds that the solution space doesn't converge to a nicely defined set, and any answer may be only locally optimal.
To illustrate the assertion about the sparse matrix, let's examine a couple of data points. The separation among 28nm process variants, from worst to best, is roughly 25% in cost, 10X in power, and 30% in performance. In theory, there is an ideal process for each application. In practice, given the huge ecosystem of IP required, both from within the company and from third parties, availability starts to constrain the choice and sometimes steers the decision away from the optimum.
For example, some IP requires specific I/O transistors, which at 28nm means 1.8 V, 2.5 V, or even 3.3 V in certain applications. Say a specific 28nm process has been selected because of its broad IP ecosystem support. That doesn't mean all the IP is available. Not anymore, unfortunately.
To get the most out of ARM processor cores, a specific performance-optimization package is needed for each processor targeting each process technology, even though processors ship as fully synthesizable Verilog RTL these days.
Another example is embedded nonvolatile memory, where qualification can take as long as 12 to 15 months, since the memory bit cells are not supplied by the foundry. These memories require several analog/mixed-signal components built with I/O transistors. Which I/O voltage is used probably doesn't matter, but making sure it matches the SoC's I/O voltage is paramount. Otherwise, you risk a 12-to-15-month delay, which could be devastating to the overall project schedule.
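To make the constraint concrete, here is a minimal sketch of how sparse IP availability can steer a process choice away from the unconstrained optimum. The variant scores and availability flags below are invented for illustration; real numbers would come from foundry PPA data and vendor IP qualification reports.

```python
# Illustrative sketch: sparse IP availability steering a 28nm process choice.
# All scores and availability flags below are hypothetical.

# Relative merit of each 28nm variant for one application, e.g. a weighted
# blend of cost, power, and performance. Higher is better.
variant_score = {"28HP": 0.90, "28HPM": 0.80, "28LP": 0.65, "28HPL": 0.70}

# Sparse availability matrix: which required IP is qualified on which variant.
ip_available = {
    ("28HP",  "ddr_phy"): True,  ("28HP",  "envm"): False, ("28HP",  "usb3"): True,
    ("28HPM", "ddr_phy"): True,  ("28HPM", "envm"): True,  ("28HPM", "usb3"): True,
    ("28LP",  "ddr_phy"): True,  ("28LP",  "envm"): True,  ("28LP",  "usb3"): False,
    ("28HPL", "ddr_phy"): False, ("28HPL", "envm"): True,  ("28HPL", "usb3"): True,
}
required_ip = ["ddr_phy", "envm", "usb3"]

def feasible(variant):
    """A variant is usable only if every required IP is qualified on it."""
    return all(ip_available.get((variant, ip), False) for ip in required_ip)

ideal = max(variant_score, key=variant_score.get)
workable = [v for v in variant_score if feasible(v)]
best_feasible = max(workable, key=variant_score.get)

print(f"Unconstrained optimum: {ideal}")         # 28HP, but its eNVM is missing
print(f"Best feasible choice:  {best_feasible}")  # 28HPM, the compromise
```

Note that the compromise is forced not by the merit of any variant but by the holes in the matrix, and adding more dimensions (software, packaging, schedule) only thins the feasible set further.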
At this point, one might be tempted to say these issues are not new, merely slightly more evolved, and have been the hallmarks of modern living with its abundant choices. The generalization is fair (we now have multiple iPhone models as yet another decision in life to ponder), but choosing from a large and largely sparse matrix of processes, IP, software, and so on is vastly different from ordering drinks at Starbucks.
In 2004, the two graphics giants of the time, ATI and NVIDIA, struggled over the decision between a less aggressive shrink to 110nm and a full jump to 90nm. The decision determined the starting point of the project and, therefore, the time to market, and each route posed its own challenges. Eventually, both survived their decisions. Because a single decision point concentrates accountability, I am guessing each company marshaled full internal commitment to deliver the best products at the earliest possible time.
Fast forward 10 years, and the problem is one of multiple points of vulnerability in a single product decision. With a longer supply chain and less forward visibility into the market, this is a different world.
If you are inclined to agree with me that things are more complicated, then perhaps you would also agree that the odds are stacked more heavily against startups than against large semiconductor companies. Empirical data seems to support this assertion: as Qualcomm, Broadcom, and MediaTek continue to grow, the sub-billion-dollar broad-line SoC companies have struggled mightily. The question now is whether there is a new paradigm, a new management practice, or a new market segmentation by which one could make the sparse matrix problem work in one's favor.