SANTA ROSA, Calif.—Intel's future processors at 10-nanometer and beyond will continue to use CMOS for cores, but the cores will be surrounded by novel circuit architectures using new materials that may extend Moore's Law indefinitely.
"Moore's Law was never about scaling, but about the economic benefits of putting more die on wafers," explained keynote speaker, IEEE Fellow Kevin Zhang, vice president of Intel's Technology and Manufacturing Group, also Intel Director of Circuit Technology who led processor development from the 90-to-22 nanometer nodes. "Intel is adding new circuitry, such as adaptive voltage control that increases yields over using fixed voltages, by making its analog circuits digital or at lease digitally assisted, and by exploring new materials for specific functions around the scaled CMOS cores."
IEEE Fellow Kevin Zhang, vice president of Intel's Technology and Manufacturing Group and Intel Director of Circuit Technology, led processor development from the 90- to 22-nanometer nodes.
(Source: EE Times)
Zhang delivered his keynote, "Circuit Design in Nano-Scale CMOS Technologies," at the International Symposium on Physical Design 2016 (ISPD, April 3-6, Santa Rosa, Calif.). ISPD 2016 is an Association for Computing Machinery (ACM) conference on next-generation chips sponsored by Intel, IBM, Cadence, GlobalFoundries, IMEC, Synopsys, TSMC, Xilinx and other stellar chip makers worldwide.
Zhang used static random-access memory (SRAM) as his first example, because its architecture has remained the same for the last six generations, even though its use as on-chip caches for multi-core processors has become increasingly important (since DRAM speeds are not keeping pace with multi-core processors' speeds).
DRAM technology is not keeping up with processor performance (lower right) forcing more and more SRAM caches to be put on the same chip as the processor.
SOURCE: EE Times
"You need bigger and bigger SRAM caches on processor chips, despite they aging design, because DRAM has not been about to keep up with processor performance," said Zhang. "You can mitigate the problems with SRAM with 3D, but the best way is merely to improve the size and performance of planar on-chip SRAM memory caches."
For the last 20 years, the venerable SRAM has remained essentially unchanged, with only minor improvements. Going forward to the 14-nanometer node and beyond, however, Intel has been tinkering with the design of SRAM cells to allow them to continue scaling. The big problem with scaling SRAM further is the growing conflict between read and write conditions. Namely, according to Zhang, you can easily improve the read access time of SRAM by minimizing the disturbance to the circuit while reading, and you can improve the write performance by maximizing that disturbance, but "you can't do both at the same time."
"No longer is progress just about scaling, but for last few years it has ben about introducing new transistor architectures and new materials," said Zhang, including high-k dielectrics, metal gates and 3-D FinFET transistor architectures.
Intel is converting the formerly analog circuits on its processors (here a temperature sensor) into completely digital circuits, such as using bipolar junction transistors (BJTs) instead of analog transistors and op-amps, and converting the traditional voltage-controlled oscillator (VCO) into a digitally controlled oscillator (DCO).
(Source: EE Times)
Using SRAM as an example, the new architecture gets the best of both read/write worlds by "turning the supply voltage like a knob," according to Zhang. Specifically, on-chip circuitry now changes the supply voltage when reading and writing, using a lower voltage for the write column selection and a higher supply voltage for the read column selection, thus resolving the low-disturbance/high-disturbance conflict into a happy medium that improves the overall performance of the SRAM cell.
The power drawn by idle SRAM arrays is also being addressed, so that 99 percent of the array can be kept in sleep mode, with the low or high supply voltages applied only to the SRAM cells being addressed at any one time, a technique called dynamic sleep mode for SRAM.
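The voltage-knob scheme Zhang described can be illustrated with a short Python sketch. All names and voltage values here are invented placeholders for demonstration, not Intel's actual circuit parameters: the selected column gets a raised supply for reads or a lowered supply for writes, while every idle column drops to a retention-only level.

```python
# Illustrative model of dual-voltage read/write assist with dynamic sleep.
# All voltage values are made-up placeholders, not Intel specifications.

V_READ  = 0.90   # raised supply: maximizes read stability (low disturbance)
V_WRITE = 0.70   # lowered supply: weakens the cell so it flips easily
V_SLEEP = 0.55   # retention-only level for the ~99% of idle cells

def column_supply(column, active_column, operation):
    """Return the supply voltage applied to one SRAM column."""
    if column != active_column:
        return V_SLEEP                 # dynamic sleep: idle columns only retain
    return V_READ if operation == "read" else V_WRITE

# Example: during a write to column 3, only that column sees the write voltage.
supplies = [column_supply(c, 3, "write") for c in range(8)]
```

The point of the sketch is the asymmetry: a single fixed supply would have to compromise between read stability and write margin, while per-operation, per-column selection serves both.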
Intel has also been continuously digitizing the analog functions on its processors, because analog circuits do not benefit from scaling the way digital circuits do. For instance, the voltage-scaling and temperature-sensing circuits, used for power minimization and to prevent thermal runaway, respectively, have been converted to digital circuits. Analog voltage-controlled oscillators (VCOs) have become digitally controlled oscillators (DCOs), bipolar junction transistors (BJTs) have replaced analog transistors for thermal-runaway sensing, and the other analog functions inside the typical phase-locked loop (PLL) that locks in processor frequencies have been digitized as well.
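The VCO-to-DCO conversion can be sketched in a few lines of Python. This is a hedged, toy model, not Intel's circuit: the DCO frequency is assumed to be a linear function of an integer tuning code, and a bang-bang loop (a common digital-PLL technique) nudges the code until the output is within one step of the target.

```python
# Toy bang-bang lock loop for a digitally controlled oscillator (DCO).
# DCO_BASE and DCO_STEP are hypothetical values, not real silicon numbers.

DCO_BASE = 1.0e9    # Hz at code 0 (assumed)
DCO_STEP = 1.0e6    # Hz gained per code step (assumed)

def dco_freq(code):
    """Frequency produced by a given integer tuning code."""
    return DCO_BASE + code * DCO_STEP

def lock(target_hz, code=0, max_iters=10_000):
    """Nudge the DCO code up or down until within half a step of target."""
    for _ in range(max_iters):
        err = target_hz - dco_freq(code)
        if abs(err) <= DCO_STEP / 2:
            return code
        code += 1 if err > 0 else -1
    raise RuntimeError("failed to lock")

code = lock(1.25e9)   # settles at the code nearest 1.25 GHz
```

Because the loop state is just an integer code, it scales with digital logic, which is exactly the advantage Zhang cited over analog VCO-based loops.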
Intel aims to keep its processor cores in traditional CMOS, but surround them with new device architectures using new materials such as magnetic memories, qubits, GaN transistors and whatever else proves to give leading-edge advantages.
For circuits that Intel engineers have not yet figured out how to completely digitize, they are instead using hybrid mixed-signal "digital assistance" techniques to optimize duty cycles, such as those used in its latest 14-nanometer processors to boost input/output speeds to 40 gigabits per second (Gbits/s).
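One way such digital assistance can work, sketched below under purely assumed numbers (the measurement model and trim step are invented, not taken from Intel's I/O design), is a trim loop: a digital code adjusts an analog delay element until the measured duty cycle sits near 50 percent.

```python
# Hedged sketch of digitally assisted duty-cycle correction.
# measured_duty() is a stand-in model for an on-chip duty-cycle sensor.

def measured_duty(code, skew):
    """Assumed model: raw duty error 'skew'; each code step trims 0.5%."""
    return 0.5 + skew - code * 0.005

def correct_duty(skew, max_steps=200):
    """Step the trim code until the duty cycle is within half a step of 50%."""
    code = 0
    for _ in range(max_steps):
        err = measured_duty(code, skew) - 0.5
        if abs(err) <= 0.0025:
            return code
        code += 1 if err > 0 else -1
    raise RuntimeError("no convergence")
```

The analog delay line stays analog; only the control wrapped around it is digital, which is the essence of the "digitally assisted" approach.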
Future is adaptive design
The future of continued scaling depends on adaptive power management and voltage scaling, according to Zhang, which are needed to scale voltage downward, especially during sleep periods, while still supplying enough voltage to keep SRAM alive. Using a variable bias instead of a fixed one, next-generation transistors can be dynamically tuned with adaptive control biases that depend on the unique characteristics of each chip's transistors on each die produced.
Today, if passive control is used to bias on-chip transistors, many die must be thrown away, reducing yields. With adaptive biasing control, those previously "bad" die can be made to perform just as well, or even better, by dynamically controlling their bias levels.
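The yield argument can be made concrete with a small model. Everything numeric here is an assumption for illustration (the speed equation, the bias step, the target): each die has its own threshold-voltage offset from process variation, a fixed zero bias fails the slow die, and a per-die search finds the smallest bias code that brings it up to speed.

```python
# Illustrative per-die adaptive biasing. The linear speed model and all
# constants are invented for demonstration, not measured silicon behavior.

SPEED_TARGET = 1.0          # normalized required speed

def die_speed(vth_offset, bias_code):
    """Assumed: speed falls with threshold offset, rises with forward bias."""
    return 1.0 - vth_offset + 0.02 * bias_code

def passes_fixed(vth_offset):
    """Fixed (passive) biasing: every die runs at bias code 0."""
    return die_speed(vth_offset, bias_code=0) >= SPEED_TARGET

def adaptive_bias(vth_offset, max_code=15):
    """Smallest bias code meeting the speed target, or None if unreachable."""
    for code in range(max_code + 1):
        if die_speed(vth_offset, code) >= SPEED_TARGET:
            return code
    return None
```

A die with a 0.1 offset fails under the fixed scheme but is recovered by the adaptive search, which is the yield mechanism the article describes.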
Adaptive voltage control will also be used to maximize performance and yields at nodes beyond 14 nanometers by sensing exactly the right minimum supply voltage for SRAM sleep modes on a die-by-die basis. The optimal read and write voltages will also be adaptively changed for the lowest power and maximum performance during both read and write operations.
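Sensing the minimum sleep-mode supply per die amounts to a search problem, which the following sketch frames as a binary search. The `retains` predicate stands in for an on-die retention test, and the example threshold of 0.52 V is an arbitrary assumption, not a real SRAM figure.

```python
# Sketch of a per-die search for the minimum SRAM retention voltage (Vmin).
# Bounds, tolerance and the example threshold are illustrative assumptions.

def find_vmin(retains, lo=0.3, hi=0.9, tol=0.005):
    """Binary-search the lowest supply at which the array still holds data."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if retains(mid):
            hi = mid      # still retains: try lower
        else:
            lo = mid      # data lost: need more voltage
    return hi

# Example die whose true retention threshold is 0.52 V (hypothetical).
vmin = find_vmin(lambda v: v >= 0.52)
```

Run once per die, such a search lets sleep mode sit just above each individual die's retention limit instead of at a pessimistic worst-case voltage for the whole population.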
For the future, Zhang believes that CMOS cores, improved with the adaptive methods above, will remain the heart of processors beyond 10 nanometers. However, he also predicts that a potpourri of new materials and devices, such as gallium nitride, magnetic materials, III-V materials, qubits and more, will serve as peripheral support technologies to the adaptive CMOS cores.
"Innovations continue to be the driving force into our future processors with CMOS remaining at is core," said Zhang. "Future technology scaling will demand even more innovative circuit designs to achieve optimal benefits in process, circuit and design automation, which need to be co-optimized for future success in technology scaling."
— R. Colin Johnson, Advanced Technology Editor, EE Times