One of the cool things about Sandy Bridge is that, since the graphics are on the chip, Intel has applied the same Turbo Boost technology to the GPU that it uses on the CPU. There was a demo at IDF showing ten 1080p HD clips being played back simultaneously.
I don't know how that horsepower will be used but I bet someone will use it. The comments about CPU overkill make me chuckle, because they are always wrong. Archimedes said: "Give me a place to stand and I can move the world". Moore's law has given many brilliant people a place to stand.
Sandy Bridge is very much like the Tunnel Creek Atom-based SoC, with integrated graphics and memory controller. It looks like Intel has no choice but to pull features that were previously off-chip (in the chipset) onto the same die as the CPU, just to distinguish itself. It's the same trend toward SoC. Where will it end? Would they even try putting memory and/or analog on the same chip as the CPU?
Already the software industry is struggling to tame the multicore monster, and now this heterogeneous stuff is being splashed at them, mercy! I wonder how Intel plans to support software developers beyond giving them some bits to tweak in the hardware. Intel has been devoting quite a lot of energy to its software tools, which purportedly support multicore programming. Unless the companies churning out different hardware architectures by the day collaborate with the software industry on solutions, all of this will be destined for very niche markets.
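To illustrate the point about taming multicore: even the simplest data-parallel job forces the programmer to split work, schedule it across cores, and merge results by hand. A minimal sketch (function names are hypothetical, not from any Intel tool) using Python's standard library:

```python
# Sketch of the explicit decomposition multicore programming demands:
# split the input, farm chunks out to worker processes, merge the results.
from concurrent.futures import ProcessPoolExecutor


def partial_sum(chunk):
    # CPU-bound work handled by one worker (one core, ideally).
    return sum(x * x for x in chunk)


def parallel_sum_of_squares(data, workers=4):
    # Carve the data into roughly equal chunks, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # map() distributes chunks to the pool; sum() merges the partial results.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))


if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))
```

Every step here (chunking, scheduling, reduction) is the programmer's burden; tools that automate this for homogeneous cores barely exist, and heterogeneous CPU+GPU parts make the scheduling decision even harder.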
They failed in the GPU+CPU marriage, but it looks like they want to leverage that work somehow and keep their future interesting.
"There are no exclusive DX11 games out today, and DX11 is around the corner for Intel based products," said Tom Piazza, an Intel fellow who led the graphics core design.
Then why not wait for what's "around the corner"? AMD is already DX11 compliant, and all it takes is a service pack or a killer app to relegate this to the trash.
I've got to wonder how much it matters. I just ordered the parts for a new HTPC, and I went with a low-end Core i3 540 CPU; anything more than that would have been a waste. I got more bang by using an SSD as the boot drive. Even the i3 is enough to handle video decode, even for Blu-ray.
Granted, some applications will always need more CPU. They are getting to be few and far between, though.
The challenge with any complex computer architecture is to be able to optimize the software to take full advantage of its potential power. I will be interested in seeing what the realizable performance will be with a wide variety of software processes.
This is an important issue, since the existence of Apple Computer is in part based on the unfulfilled need for an optimizing compiler for the Motorola 68000.
The challenge today is not the hardware but the software to control it. Anyone in this industry understands that software is the limiting factor in performance. It is not just about going heterogeneous; what is the software roadmap? These systems could become so unmanageable that they fail to deliver performance.