Intel focused on high-end, quad-core client processors for gaming, content creation and other demanding applications at the event, where the so-called ultrabooks were noticeably absent. The company plans to roll out lower-voltage, dual-core Ivy Bridge chips geared for ultrabooks in a few weeks.
“We have been moving capacity to the ultrabook [processor designs] to make sure the ramp is [as] unconstrained as possible,” said Kirk Skaugen, the newly named head of Intel’s PC client systems group.
Skaugen said OEMs have 21 designs in the works for the 18-mm-thin ultrabooks and as many as 100 more in earlier planning stages. The “ultra-dense” systems use “mechanical [designs] never seen in an x86 system before,” he said.
It’s typical for CPU makers to roll out a full design first and a “chopped” version of the SoC with fewer cores later, said Kevin Krewell, senior analyst with the Linley Group (Mountain View, Calif.). Nevertheless, he expressed some surprise that Intel did not tailor any of its quad-core chips for a perhaps slightly thicker ultrabook design.
Intel said a third of the 1.4 billion transistors in the new die are dedicated to its 3-D graphics cores, providing twice the performance of the graphics cores in Intel’s existing Sandy Bridge. The big question going forward will be whether AMD’s upcoming 32nm Trinity CPUs, expected within weeks, will maintain a lead over Intel in graphics performance, Krewell said.
Intel’s rivals more quickly churn out software drivers optimized for their graphics designs, a key factor in getting good GPU performance, he said.
“AMD and Nvidia deliver new graphics drivers every couple months, often with each major new game,” Krewell said. “Intel started increasing its focus on graphics drivers with Sandy Bridge, but it has not caught up with the rivals yet,” he added.
In terms of hardware, Intel increased the number of graphics execution engines in its design from 12 to 16. The move was part of a re-architecting of the graphics core to better compete with AMD, breaking with Intel’s tradition of not changing process technology and chip design at the same time.
Intel is poised for another leap in graphics performance with its next-generation family called Haswell, a new microarchitecture optimized for the 22nm process. Skaugen said Haswell is on target for 2013.
Beyond Haswell, Intel plans to return to a tick-tock model, updating process technology in one generation and design in the next. Skaugen called the move to a new graphics architecture in tandem with the new 22nm process “an educated risk” on Intel’s part.
Ivy Bridge and its associated I/O chips now also support PCI Express 3.0, USB 3.0 and, optionally, Thunderbolt, a high-speed interconnect based on PCI Express 2.0. Skaugen said 21 Thunderbolt devices are now in the market and predicted there will be 100 by the end of the year and hundreds by the end of 2013.
“Thunderbolt is moving from the Apple platform to Windows and multiple PC platforms,” he said.
The chips also embed a number of new security features, including Intel Insider, a hardware-based video copyright-protection technology that Skaugen said will be supported by movie services such as CinemaNow. It will be used for streaming movies over a new low-latency version of Intel’s WiDi, a Wi-Fi variant for short-range video links.
In total, Intel rolled out 13 quad-core Ivy Bridge processors, ten associated I/O chips and five Centrino wireless devices. The CPUs span single- and dual-threaded variants with 6 or 8 Mbytes of L3 cache. They range in power consumption from about 35 to 77 W and in price from about $174 to $1,096.
Ultrabook versions of Ivy Bridge chips are coming in two months, said Kirk Skaugen.
At the high end, Intel is as far behind in GPU as AMD is behind in CPU. At the low end, you get what you pay for.
Any improvement by Intel in graphics will narrow the gap, but the gap is widening faster than Intel can narrow it. I postulate that Intel has not followed Moore's law in GPU performance while others have.
If graphics ability is the light that shines, Intel is so far away that it is still in the dark.
WRT the person requesting benchmarks, there are plenty available. You usually have to make a purchase to get baseline data, however. I have observed an increase of ~2X at the system level about every 3 years. The generational increase seems to trump the within-generation variation. Even the integrated graphics (i.e., Intel HD 3000) can now run CAD packages that used to require a special (read: expensive) graphics card.
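The ~2X-every-3-years observation above implies a compound annual growth rate, which is easy to back out. A minimal sketch (the helper name and the 3-year doubling period are just the commenter's rule of thumb, not measured benchmark data):

```python
def annual_growth(doubling_period_years: float) -> float:
    """Compound annual growth rate implied by a given doubling period."""
    return 2 ** (1.0 / doubling_period_years) - 1.0

# Doubling every 3 years: 2**(1/3) - 1 ~= 0.26, i.e. about 26% per year.
rate = annual_growth(3.0)
print(f"Doubling every 3 years implies ~{rate:.1%} growth per year")
```

The same function also shows why a slower cadence compounds quickly: doubling every 4 years instead of 3 works out to roughly 19% per year, so the gap between the two trajectories widens every generation.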
My four-year old laptop just gave up the ghost. I will be looking at these very seriously. I suspect that the benchmark comparisons will be coming very quickly. Several of the PC enthusiast sites have been hinting that they have sample Ivy Bridge-based systems in hand and under test.
That article is based on one man's opinion! A quote: "likely that Ivy Bridge will match Trinity for 17W designs" — and this is based on what evidence?
AMD states that the 17W chip will deliver the same level of performance as a 35W Llano A8-3500, so my guess is that Trinity at 17W will still be a good 30% faster (GPU) than the equivalent Ivy Bridge chip. Roll on HSA!
I was actually impressed with what Intel has done with Ivy GPU from reading David Kanter's article, linked by vdara above. The decision of what technology to include is quite complicated. The economics of real estate, power and risk are daunting. Intel, and the industry have much at stake.
Check this out,
"Putting this all together, Intel will substantially narrow the gap with AMD for integrated graphics capabilities in 2012. Actual product level performance depends on pricing, binning and the market. For instance, Intel has an edge for very low power designs due to process technology. The 22nm FinFETs are exceptionally efficient at low voltage and it is likely that Ivy Bridge will match Trinity for 17W designs. At 25-35W for conventional notebooks, Intel should trail by around 20%, which is close enough to be competitive. Looking to desktops though, AMD will have a substantial advantage and the performance gap may be much higher."
Things do not look that rosy for AMD.