My engineering career began in 1970, and I was using the 8086 soon after it appeared in 1978. It's still an odd feeling to see the x86 label being referenced today; I would never have imagined it back then. Kudos to Intel for sustaining the product line.
The ratio of brand advocacy to informed commentary seems extraordinarily high in many of the sentiments above.
Knights Corner gets most of those flops from a very wide SIMD microarchitecture. I happen to really LIKE SIMD microarchitectures, and have done quite a lot of programming for them, and from what I see of LRBni (that's what the instruction set was called when the device was "Larrabee") it appears to be a very well-thought-out SIMD ISA, far better than SSE.
But the claim that ordinary "scalar" procedural programs written in C, Fortran, etc. are automatically going to be accelerated to teraflops ... simply isn't so.
If you can't exploit the SIMD width efficiently ... it's a 2-issue x86 core that isn't all that different from Atom. It's the SIMD extensions that make this design "powerful." Auto-vectorizing compilers haven't lived up to the hype so far, on any microarchitecture ... GPGPUs included.
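A minimal C sketch of the point above (function names are mine, not from any vendor toolchain): whether the SIMD width actually gets used comes down to loop shape, and an auto-vectorizer can only help with the first kind.

```c
#include <stddef.h>

/* Easy case for an auto-vectorizer: independent iterations, unit
   stride, no aliasing (the 'restrict' qualifiers promise it).
   Each SIMD lane can compute one y[i]. */
void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Hard case: a loop-carried dependency. Iteration i needs the
   result of iteration i-1, so the lanes cannot run in parallel
   and the core falls back to scalar issue. */
void prefix_sum(float *s, size_t n)
{
    for (size_t i = 1; i < n; i++)
        s[i] += s[i - 1];
}
```

On a wide-SIMD part, the first loop can approach the peak flops figure; the second runs at scalar speed no matter how wide the vector unit is.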
AMD advocacy is misplaced here, because as far as I know, AMD isn't trying to compete in specialized HPC processors or adjunct accelerators. The competition is IBM, with its spectrum of Cell/POWER7/Blue Gene/Q processors, and to some extent the NVIDIA Kepler+ARM initiative.
It's going to be an interesting competition ... I wouldn't make any predictions of success. Folks should remember that Cell and then POWER7 each failed to "conquer the HPC world," and for those thinking that Intel has avoided such experiences ... remember Itanium? Or, for that matter, that Knights Corner is an updated Larrabee?
wow, good eye, TimeMerchant! Not his office, no, it was the wine cellar of a restaurant in Seattle where the briefing was being held.
He wouldn't give out any details about the wattage per chip, but it certainly wasn't 20MW! That figure is the theoretical power target for an entire exascale system built from these chips....
He also wasn't specific on the number of cores, sticking to the "more than 50" party line... but my bet is 64 cores, with some deactivated in order to improve yields.
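For what it's worth, the arithmetic separating the system target from the chip rating is easy to check; a back-of-the-envelope helper (my numbers and function name, not Intel's):

```c
/* Back-of-the-envelope check on the 20 MW exascale figure: if the
   entire system power budget went to compute chips alone, how many
   watts could each chip draw? */
static double watts_per_chip(double system_flops, double system_watts,
                             double chip_flops)
{
    double n_chips = system_flops / chip_flops; /* chips needed for target */
    return system_watts / n_chips;              /* power budget per chip   */
}

/* An exaflop (1e18 FLOPS) in 20 MW (20e6 W) from ~1 TFLOPS (1e12)
   chips works out to a million chips at 20 W apiece, i.e. 50 GFLOPS/W,
   and that's before memory, interconnect, and cooling take a share. */
```

So 20MW is a whole-machine goal that actually implies a very tight per-chip budget, nothing like 20MW per chip.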
Impressed with the wine rack in the background. Is this in a restaurant, or a nicely fitted-out exec's office? Seeing a chip held up with a silkscreened logo but no decent specs is like deciding which bottle to pull from the rack without reading a label. I suppose "actual silicon with specs to follow" is better than the "two years of specs with silicon to follow" we get from Xilinx, Altera, etc. At least the wine will improve by the time it hits the mainstream. Initial specs are impressive; even this sceptic must admit it. Well done Intel, particularly on that tired x86 architecture. Heat dissipation? It must be less than what can be pulled out through a heatsink without causing solder reflow, so as to the 20MW-per-chip quip above: unlikely. Even at 200 Watts, look at those city skylines at night, switch off two light bulbs, then get back to work.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of them need to be smart enough to act on data in real time, 24/7. Are the design challenges the same as with embedded systems, just with some developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.