For decades, silicon technology development has been shaped by the growth of the PC industry and the need to continuously increase the performance of digital transistors. A maturing PC industry and a rapidly growing mobile market are changing the dynamics.
Will ARM trickle up to high performance or will Intel trickle down to low power?
A changing silicon landscape
For nearly four decades, silicon technology development has been shaped primarily by the growth of the personal computing industry and the need to continuously increase the performance of digital transistors. Over the years, transistors have continually become smaller, faster and cheaper, in line with Gordon Moore's observation, which accurately predicted the era of CMOS scaling.
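As a rough illustration (not from the article), the classic scaling cadence behind Moore's observation can be sketched in a few lines: each process node shrinks linear dimensions by roughly 0.7x, which halves transistor area and doubles density per generation. The 0.7 factor and the starting node below are assumptions chosen to match the well-known 90/65/45/32/22nm progression.

```python
# Illustrative sketch of classic CMOS node scaling: each generation
# shrinks linear feature size by ~0.7x, roughly doubling density.
SHRINK = 0.7  # assumed linear scale factor per node


def scaled_nodes(start_nm: float, generations: int) -> list[float]:
    """Return feature sizes for a starting node and its successors."""
    sizes = [start_nm]
    for _ in range(generations):
        sizes.append(sizes[-1] * SHRINK)
    return sizes


# Starting from the 90nm generation mentioned later in the article:
nodes = [round(n) for n in scaled_nodes(90, 4)]
# ~[90, 63, 44, 31, 22] -- close to the real 90/65/45/32/22nm cadence
```

This is only the geometric half of the story; the article's point is that the economic returns from each shrink (performance, power, cost) are what eventually diminish.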
A maturing PC industry and a rapidly growing mobile market are changing the dynamics within the silicon landscape. Moore's Law in the post-PC era will be defined more by the quest to integrate increasing levels of functionality on a chip. The success metrics in the new landscape will no longer be just higher transistor performance, but also higher system functionality, a smaller system footprint, lower system cost and lower power. These changes will provide opportunities for new players while posing challenges for leading incumbents.
Intel and ARM – CPU v SoC
Two competing camps are defining the new landscape. On the one hand is Intel, the leader in silicon process technology and computing architecture (x86).
Pitched against the vertically integrated Intel is a lateral ecosystem anchored around ARM, the leader in low-cost/low-power architecture. While Intel pioneered the era of the CPU, ARM is enabling a massive design and foundry ecosystem and ushering in the era of the mobile SoC (system-on-a-chip). In the CPU space, chip functionality is largely determined by the computing core (e.g. Pentium, Athlon) and transistor performance is the critical metric.
In the SoC space, the core is just one among a variety of IP blocks that are used to independently deliver functionality. Intel’s SoC technology has typically been implemented 1-2 years behind its mainstream technology, which historically has focused on transistor scaling and performance. The foundries within the ecosystem instead focused on integrating disparate functional IP blocks on a chip while also aggressively scaling interconnect density. As the market becomes increasingly driven by low-cost/low-power consumer electronics and SoC shipments dominate total silicon volume, one might expect that Intel will look to expand its position in the SoC space.
Meanwhile, the ARM ecosystem is steadily making inroads into the high-end space traditionally dominated by Intel. This trend is illustrated by Microsoft's tagline for its new operating system (OS): "Windows 8 – Designed for SoC". The dominance of the x86-based Intel-Microsoft partnership is coming to an end, as Microsoft's flagship OS will now run on a wide array of mobile SoC application processors (APs) from partners such as Qualcomm and NVIDIA.
The emergence of the SoC era is thus a strategic inflection point for both Intel and the ARM ecosystem alike.
Moore’s Law in the SoC era
The semiconductor manufacturing industry is being redefined as the CPU takes a backseat to the mobile SoC. Concurrently, the industry is also approaching the limits of technology scaling itself. Eventually, making transistors smaller will yield diminishing returns on key metrics like performance, power and cost. That will signify another inflection point when, in the words of Gordon Moore, "making things smaller won't help anymore".
In a prescient 2001 interview, Moore posited that this inflection point would occur somewhere between 2010 and 2020. As Moore's Law slows down and the SoC era dawns, the rising influence of the design ecosystem will shape the silicon roadmap more than the traditional metric of cost-per-gate. CMOS technology will need to adapt to this changing landscape as it continues to innovate in the new era. Several technology trends indicate a progression toward the inflection point alluded to by Moore.
Since the 90nm generation, Intel has led the way in defining new transistor architecture. Intel's choice of technology was at times questioned but was ultimately vindicated by the inability of the foundries to deploy an alternative option. At least three waves of innovation support this: SOI (IBM) v. bulk (Intel) in the 1990s, biaxial (IBM) v. uniaxial strain (Intel) in the early 2000s, and metal-gate-first (IBM alliance) v. metal-gate-last (Intel) in the late 2000s. The last of these waves in particular has already changed the contours of the industry, driven by the sheer complexity and cost needed to master it.
TSMC began shipping metal-gate technology-based wafers in 2011, nearly 4 years behind Intel! The fourth wave of transistor innovation (non-planar tri-gate, 2010s) will further change the contours of the industry. With tri-gate, Intel will significantly widen its lead in transistor performance and complexity.
Foundries that try to emulate Intel will bear an enormous cost burden in silicon development, tooling and design portability. The foundries also risk an uncertain development timeline since the compatibility of tri-gate with full SoC integration (digital/analog/passives/RF) remains unknown. Not only does Intel have the financial might to confront this uncertainty, it also has the advantage of setting its own design rules for its own architecture and products.
In contrast, the foundries have to support disparate customer needs and may not be able to afford highly restrictive design rules, which inflate chip area and degrade density.
Companies which opt to continue using planar technology will face a lesser cost/time burden but will be challenged to innovate as they try to extract superior performance-per-watt from planar transistors. Tri-gate architecture and the limits of the planar transistor will signify another major inflection point in the CMOS scaling trend.