For decades, silicon technology development has been shaped by the growth of the PC industry and the need to continuously increase the performance of digital transistors. A maturing PC industry and a rapidly growing mobile market are changing the dynamics.
Will ARM trickle up to high performance or will Intel trickle down to low power?
A changing silicon landscape

For nearly four decades, silicon technology development has been shaped primarily by the growth of the personal computing industry and the need to continuously increase the performance of digital transistors. Over the years, transistors have continually become smaller, faster and cheaper, in line with Gordon Moore’s observation, which accurately predicted the era of CMOS scaling.
A maturing PC industry and a rapidly growing mobile market are changing the dynamics within the silicon landscape. Moore’s Law in the post-PC era will be defined more by the quest to integrate increasing levels of functionality on a chip. The success metrics in the new landscape will be not just higher transistor performance but higher system functionality, smaller system footprint, lower system cost and lower power. These changes will provide opportunities for new players while posing challenges for leading incumbents.
Intel and ARM – CPU v SoC

Two competing camps are defining the new landscape. On one side is Intel, the leader in silicon process technology and computing architecture (x86).
Pitched against the vertically integrated Intel is a lateral ecosystem anchored around ARM, the leader in low-cost/low-power architecture. While Intel pioneered the era of the CPU, ARM is enabling a massive design and foundry ecosystem and ushering in the era of the mobile SoC (system-on-a-chip). In the CPU space, chip functionality is largely determined by the computing core (e.g. Pentium, Athlon) and transistor performance is the critical metric.
In the SoC space, the core is just one among a variety of IP blocks that are used to independently deliver functionality. Intel’s SoC technology has typically been implemented 1-2 years behind its mainstream technology, which historically has focused on transistor scaling and performance. The foundries within the ARM ecosystem have instead focused on integrating disparate functional IP blocks on a chip while also aggressively scaling interconnect density. As the market becomes increasingly driven by low-cost/low-power consumer electronics and SoC shipments dominate total silicon volume, one might expect that Intel will look to expand its position in the SoC space.
Meanwhile, the ARM ecosystem is steadily making inroads into the high-end space traditionally dominated by Intel. This trend is illustrated by Microsoft’s tagline for its new operating system (OS) “Windows 8 – Designed for SoC”. The predominance of the Intel-Microsoft partnership based on x86 architecture is coming to an end as Microsoft’s flagship OS will now run on a wide array of mobile SoC application processors (APs) from partners like Qualcomm and NVIDIA.
The emergence of the SoC era is thus a strategic inflection point for both Intel and the ARM ecosystem alike.
Moore’s Law in the SoC era

The semiconductor manufacturing industry is being redefined as the CPU takes a backseat to the mobile SoC. Concurrently, the industry is also approaching the limits of technology scaling itself. Eventually, making transistors smaller will result in diminishing returns on key metrics like performance, power and cost. That will signify another inflection point when, in the words of Gordon Moore, “making things smaller won’t help anymore”.
In a prescient 2001 interview, Moore posited that this inflection point would occur somewhere between 2010 and 2020. As Moore’s Law slows down and the SoC era dawns, the rising influence of the design ecosystem will shape the silicon roadmap more than the traditional metric of cost-per-gate. CMOS technology will need to adapt to this changing landscape as it continues to innovate in the new era. Several technology trends indicate a progression toward the inflection point alluded to by Moore.
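The economics of classic scaling can be made concrete with a toy model: transistor count per chip doubles roughly every two years while cost per transistor halves. The cadence, baseline year and baseline count below are illustrative assumptions (the 2,300-transistor figure corresponds to the 1971 Intel 4004), not a claim about any particular roadmap:

```python
# Toy model of ideal Moore's Law scaling (illustrative assumptions only).

def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Project transistor count per chip assuming a doubling every
    `doubling_years` years, starting from `base_count` in `base_year`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

def cost_per_transistor(year, base_year=1971, base_cost=1.0, doubling_years=2.0):
    """Relative cost per transistor, halving each doubling period."""
    return base_cost / 2 ** ((year - base_year) / doubling_years)

if __name__ == "__main__":
    for year in (1971, 1991, 2011):
        print(f"{year}: ~{transistors(year):.3g} transistors, "
              f"relative cost {cost_per_transistor(year):.2e}")
```

The point of the model is what happens when it stops holding: once cost per transistor no longer falls with each node, the incentive to shrink further disappears, which is exactly the inflection point Moore described.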
Transistor leadership

Since the 90nm generation, Intel has led the way in defining new transistor architecture. Intel’s choice of technology was at times questioned, but ultimately vindicated by the inability of the foundries to deploy an alternative option. At least three waves of innovation support this – SOI (IBM) v. bulk (Intel) (1990s), biaxial (IBM) v. uniaxial strain (Intel) (early 2000s) and metal-gate-first (IBM-alliance) v. metal-gate-last (Intel) (late 2000s). The third wave in particular has already changed the contours of the industry, driven by the sheer complexity and cost needed to master it.
TSMC began shipping metal-gate technology-based wafers in 2011, nearly 4 years behind Intel! The fourth wave of transistor innovation (non-planar tri-gate, 2010s) will further change the contours of the industry. With tri-gate, Intel will significantly widen its lead in transistor performance and complexity.
Foundries that try to emulate Intel will bear an enormous cost burden in silicon development, tooling and design portability. The foundries also risk an uncertain development timeline since the compatibility of tri-gate with full SoC integration (digital/analog/passives/RF) remains unknown. Not only does Intel have the financial might to confront this uncertainty, it also has the advantage of setting its own design rules for its own architecture and products.
In contrast, the foundries have to support disparate customer needs and may not be able to afford highly restrictive design rules, which bloat die area.
Companies which opt to continue using planar technology will face a lesser cost/time burden but will be challenged to innovate as they try to extract superior performance-per-watt from planar transistors. Tri-gate architecture and the limits of the planar transistor will signify another major inflection point in the CMOS scaling trend.
Very well written, look forward to reading part II. Would be interested in the author's thoughts on how die stacking by 3D TSV will upset the current deadlock (CPUs by smaller transistors vs. SoCs by ?? transistors).
I agree that scaling is hitting physical and economic limits and that innovation will likely be centered on extending the life of an existing geometry. Yet I would like to suggest that the more effective path to reusing existing manufacturing tools, process technology, transistor structure, etc. is monolithic 3D. NAND vendors are already going there and many others should follow. The advantages of monolithic 3D are quite significant in device integration, power and performance, and it will not need new transistor technology.
I agree with you. From the typical usage model of the end user, most people no longer know why they need such a high-performance CPU. The reason they buy a PC is that it helps them get information from the internet and play some casual games.
Until a new popular use case is found, such as widespread AI usage at home, CPU performance is enough, while other factors, such as disk speed, wireless, power, cost and total system size, are what matter to the end user.
As the internet develops, people find it much easier to obtain software for different architectures, so architecture restrictions become less of a limiting factor.
Since the future for human beings lies in collaboration rather than conflict, the SoC reflects that collaborative spirit, and can thus bring more innovation and reduce cost.
Well written! The major rule for wafer fabs is Take no Risks--protect and preserve that huge capital investment so you can pay it down. Change gets equated to risk, especially for wafer fab managers and product developers. So, what’s least risky among the choices of continued scaling, new devices/architectures, 3DIC with TSV, or monolithic 3DIC? I try to parse the variables in a blog post: http://www.monolithic3d.com/2/post/2012/01/is-monolithic-3d-ic-less-risky-than-scaling-or-tsv.html
Hi Zvi, don't you think there needs to be a fundamental shift in IC vendors' business model, namely that with more progress in SoCs and 3D ICs (monolithic and stacked), software becomes an important component of the offering? To that end, shouldn't the IC companies be working to provide application platforms for developers to build on? I know some of them do, but it is inadequate and certainly not accelerating the innovation we would all like to see in 3D.
@Pushkar Ranade: great writeup! I hope you continue to post on EE Times.
Thanks all for the comments.
Today, on-chip integration is the ideal solution for cost, power, form factor and reliability. Die stacking and TSVs are viable options to extend the life of CMOS once traditional scaling ends. A key prerequisite for die stacking is that the total package thermal design power (TDP) must be low. Hence, the power dissipation of each die/layer needs to be minimized before they can be stacked, which is an even bigger issue for stacked dies than for traditional monolithic solutions. Another consideration is the price point of stacking compared with further on-chip integration.
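The thermal constraint above can be framed as a back-of-the-envelope budget: if the package must stay within a fixed TDP, each added die in the stack shrinks the power available to every other die. A minimal sketch, where the TDP value and overhead fraction are illustrative assumptions rather than measured figures:

```python
# Back-of-the-envelope power budget for a stacked-die package.
# All numbers are illustrative assumptions, not measured values.

def per_die_power_budget(package_tdp_w, num_dies, stacking_overhead=0.10):
    """Split a fixed package TDP evenly across stacked dies.

    `stacking_overhead` reserves a fraction of the budget for the
    thermal penalty of stacking (inner dies are harder to cool).
    """
    usable = package_tdp_w * (1.0 - stacking_overhead)
    return usable / num_dies

if __name__ == "__main__":
    tdp = 4.0  # watts, an assumed mobile-package ceiling
    for n in (1, 2, 4):
        print(f"{n} die(s): {per_die_power_budget(tdp, n):.2f} W per die")
```

Even this crude even-split model shows why per-die power must fall before stacking becomes attractive: doubling the stack roughly halves the budget of each layer.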
I think this is all very interesting... clearly we have a growing problem in our ability to advance processor performance. However, I'm of the opinion that we can't overcome this with better transistors (at least not for long). We need a new architectural paradigm; the traditional approaches are fundamentally outdated. My bet is on highly efficient, small, massively parallel manycores...