SAN FRANCISCO – Intel gave its first detailed public look at Ivy Bridge, the first processors to use its 22 nm tri-gate technology. Intel plans at least four major variants of the chip, which packs 1.4 billion transistors into 160 mm² in its largest version.
Ivy Bridge packs 20 channels of PCI Express Gen 3 interconnect and a DisplayPort controller, making it Intel's first chip to integrate PCIe. The move marks one small step in the long-term quest for what an Intel executive called terascale-class clients.
The first Ivy Bridge chip targets a range of desktop, notebook, embedded and single-socket server systems with up to 8 Mbytes of cache. Like previous Intel parts, it integrates a memory controller and graphics, now upgraded to support DDR3L DRAMs and Microsoft's DirectX 11 graphics API.
“We spent a lot of time on the modularity of this die to create different flavors of it very quickly,” said Scott Siers, an Intel engineer who presented a paper on the chip at the International Solid-State Circuits Conference here.
Specifically the largest die includes four x86 cores and a large graphics block. It can be chopped along its x- and/or y-axis using automated generation tools to create versions with two cores or a smaller graphics block.
Siers said Ivy Bridge is Intel’s first client chip to support low-power 1.35V DDR3L memory and DDR power gating in standby mode. It handles data rates up to 1,600 MTransfers/s as well as standard 1.5V DDR3. A new write-assist cache circuit provides an average 100 millivolt reduction in the cache's minimum operating voltage.
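The quoted 1,600 MTransfers/s figure translates directly into peak per-channel bandwidth. A back-of-envelope sketch, assuming a standard 64-bit (8-byte) DDR3 channel, which the article does not state explicitly:

```python
# Peak bandwidth of one DDR3 channel at the article's quoted rate.
# Assumes a standard 64-bit-wide channel (8 bytes per transfer).
transfers_per_sec = 1_600e6   # 1,600 MTransfers/s
bus_width_bytes = 8           # 64-bit DDR3 channel (assumed)

peak_bw = transfers_per_sec * bus_width_bytes  # bytes/s
print(f"Peak per-channel bandwidth: {peak_bw / 1e9:.1f} GB/s")  # 12.8 GB/s
```

With two such channels, the usual configuration for this class of part, the aggregate would be twice that.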
The DisplayPort block supports three simultaneous displays, including one 1.62 Gbit/s and two 2.7 Gbit/s links with four lanes each.
The PCIe receiver uses a continuous time linear equalizer with 32 gain control levels and a transmitter with a three-tap digital FIR filter. The PCIe block also supports on die testing for jitter as well as timing and voltage margin measurements.
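A three-tap transmit FIR of the kind described shapes each symbol using its neighbors to pre-compensate for channel loss. A minimal sketch of the idea follows; the tap weights are illustrative, not Intel's actual coefficients:

```python
# Minimal sketch of a 3-tap transmit FIR (de-emphasis) filter as used
# in high-speed serial transmitters. Weights are made-up examples.
def fir3(symbols, pre=-0.1, main=0.75, post=-0.15):
    """Apply pre-cursor, main, and post-cursor taps to NRZ symbols."""
    out = []
    for i in range(len(symbols)):
        s_pre  = symbols[i + 1] if i + 1 < len(symbols) else 0.0
        s_main = symbols[i]
        s_post = symbols[i - 1] if i > 0 else 0.0
        out.append(pre * s_pre + main * s_main + post * s_post)
    return out

bits = [1, 1, -1, 1, -1, -1, 1]  # NRZ symbols (+1/-1)
print(fir3(bits))
```

Transitions get a larger output swing than repeated symbols, which boosts the high-frequency content the channel attenuates most.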
The chip’s x86 and graphics cores can scale their clock frequencies in 100 and 50 MHz increments, respectively. Overall, the chip supports five power planes and 180 clock islands that can be gated separately.
In a separate ISSCC keynote, Dadi Perlmutter, chief product officer for Intel, scoped out a long-term vision of terascale-class clients. Terascale systems consume as much as three kilowatts today but could be reduced to 20 W by the end of the decade using a broad variety of techniques, he said.
The techniques include optimizing chips to work at near threshold voltage levels, a subject of several Intel papers at ISSCC. Lower power internal and external interconnects are also needed, he said.
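The appeal of near-threshold operation falls out of the dynamic power relationship P ≈ C·V²·f: cutting supply voltage pays off quadratically even though frequency must drop too. A rough illustration with made-up capacitance, voltage and frequency figures:

```python
# Why near-threshold voltage saves power: dynamic power ~ C * V^2 * f.
# All figures below are illustrative, not Ivy Bridge numbers.
def dynamic_power(c_farads, v_volts, f_hz):
    return c_farads * v_volts**2 * f_hz

nominal = dynamic_power(1e-9, 1.0, 2e9)    # 1 nF switched, 1.0 V, 2 GHz
ntv     = dynamic_power(1e-9, 0.5, 0.5e9)  # 0.5 V near threshold, 500 MHz

print(f"nominal: {nominal:.2f} W, near-threshold: {ntv:.3f} W")
print(f"power ratio: {nominal / ntv:.0f}x")
```

Here power drops 16x for a 4x frequency loss, which is why near-threshold designs trade single-thread speed for energy efficiency.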
3-D IC packaging will be needed to lower memory power, Perlmutter said. Toward that end, Intel is working with Micron and others on the Hybrid Memory Cube stacked memory design, he said. The design could boost memory bandwidth ten-fold while cutting power to eight picojoules per bit, down from 50-75 pJ/bit in today’s DDR3, he added.
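The energy-per-bit figures can be turned into interface power for a given bandwidth. A quick sketch using the article's numbers and an assumed 100 GB/s memory bandwidth (the bandwidth is my assumption, not from the article):

```python
# Memory interface power = energy per bit * bit rate, using the
# article's 8 pJ/bit stacked-memory target vs. ~60 pJ/bit (midpoint
# of the 50-75 pJ/bit DDR3 range), at an assumed 100 GB/s.
bandwidth_bits = 100e9 * 8           # 100 GB/s expressed in bits/s

p_stacked = 8e-12 * bandwidth_bits   # 8 pJ/bit
p_ddr3    = 60e-12 * bandwidth_bits  # ~60 pJ/bit

print(f"stacked: {p_stacked:.1f} W, DDR3-class: {p_ddr3:.1f} W")
```

At that bandwidth, DDR3-class signaling would burn tens of watts on the interface alone, which shows why the stacked approach matters for a 20 W terascale target.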
In addition, “voltage regulation has to go into the IC itself because inductance is too big off chip or even on package,” Perlmutter said. “When you have a lot of voltage regulators to turn on and off it becomes very complex to do on a package, so we are working on getting power regulators into the ICs,” he said.
Perlmutter said he sees another 30 years of engineering needed in computing. “For people who thought they could retire from this industry, I say there’s a lot more to do,” he said.
Dadi Perlmutter scoped out challenges on the path to terascale clients.
Intel is doing well with x86 today, but I still believe that as ARM eventually pushes into laptops and even servers, thanks to its superior power/performance architecture, it will put ever more pressure on x86. Intel must plan for the day when x86 becomes more of a legacy technology than a driver of future growth.
Hardly. Even without AMD, Intel would be driven to ride Moore's Law by continuously shrinking transistors and die sizes. Along with that come lower power, higher performance, more integration and lower costs. To take the '386 example: if the industry had stopped at the '386, there would never have been a motivation to buy new PCs, and the market would be a lot smaller. The bottom line is that Intel makes more money by driving costs down to lower ASPs so it can sell a lot more CPUs. There's a lot more money to be made selling tons of PCs at $500 each than selling $10,000 PCs to just a few people.
A creative approach: develop a die that can be cut in different ways to yield different products. It also means that if there is a spot manufacturing defect, subsections can be salvaged as working product. It reminds me of the early days of calculators, when, as I understand it, some low-end calculators were made from high-end chips whose defective special functions simply weren't supported on the low-end product.
There are several technical triumphs here, but the real story to me is that Intel management has the discipline and the balls to invest in R&D when they already are the technology leader and in an (at best) uncertain economy. Kudos to Intel management and engineering.