so intel finally made it to 1 Tflops. you've been able to buy this level of performance for 2 or 3 years in a single-slot PCIe card. what are the flops/watt, flops/$ installed (floor space / cooling ...), memory bandwidth, max concurrent threads?
who cares anymore what instruction set is used in HPC. when programming in C / C++ / CUDA / OpenCL / Java / Perl / and a few hundred programming languages ... all that is abstracted away. code will have to be ported in either case (from intel single-thread to intel multi-thread, MIC, or OpenCL/CUDA). if you are going to go through the effort of porting, ISA is not that important a factor. install cost, operating cost, tools availability, feature support, and perf/$$ are more important. don't buy the intel hype. MIC is still just a research project @ intel. You can actually buy AMD and NVIDIA products with 3rd-party tools support.
For HPC, very little has been "abstracted away" since C was invented. Only the first four languages you list (plus Fortran) are actually used for HPC. Intel is promising the ability to use one (mature!) language/compiler/toolchain for both CPU and GPU/MIC code. This wouldn't be practical if the hardware wasn't x86(ish) ISA.
Intel might face some competition, but in my opinion they have just gone too far ahead to catch in terms of technology. Now the question is how they will capture the imagination of consumers to gain control of the tablet/ultrabook segments.
These are old in-order Pentium (P54C-class) cores: no out-of-order execution, minimal instruction-level parallelism, etc. The only I/O interface is PCIe. The only advantage this has is the tool chain, and that you can prototype your code on normal multicore x86 workstations and move to MIC later. Plus, the cores can work independently. GPU cores can't really work on separate processes; they are too interconnected.
My engineering career began in 1970 and I was using the 8086 in 1976. It's always somewhat of an odd feeling to still see the x86 label being referenced. I would never have imagined it back then. Kudos to Intel for sustaining the product line.
Impressed with the wine rack in the background. Is this in a restaurant or a nicely fitted-out exec's office? Seeing a chip held in the air with a silkscreened logo but no decent specs is like deciding which bottle to pull from the rack without reading a label. I suppose "actual silicon with specs to follow" is better than "two years of specs with silicon to follow" from Xilinx, Altera, etc. At least the wine will improve by the time it hits mainstream. Initial specs are impressive, even this sceptic must admit. Well done Intel, particularly on that tired x86 architecture. Heat dissipation? It must be less than what can be pulled out through a heatsink without causing solder reflow, so as to the 20 MW-per-chip quip above: unlikely. Even at 200 watts, look at those city skylines at night, switch off two light bulbs, then get back to work.
wow, good eye, TimeMerchant! Not his office, no, it was the wine cellar of a restaurant in Seattle where the briefing was being held.
He wouldn't give out any details about the wattage per chip, but it certainly wasn't 20MW! That is the theoretical aim for an exascale system running these chips....
He also wasn't specific on the number of cores, sticking to the "more than 50" party line... but my bet is 64 cores with some being deactivated in order to improve yields.
The ratio of brand advocacy to informed commentary seems extraordinarily high in many of the sentiments above.
Knights Corner gets most of those flops from a very wide SIMD micro-architecture. I happen to really LIKE SIMD micro-arches, and have done quite a lot of programming for them, and from what I can see of LRBni (that's what the instruction set was called when the device was "Larrabee"), it appears to be a very well-thought-out SIMD ISA, far better than SSE.
But the claim that ordinary "scalar" procedural programs written in C, Fortran, etc. are automatically going to be accelerated to Tflops ... simply isn't so.
If you can't exploit the SIMD width efficiently ... it's a 2-issue x86 core that isn't all that different from Atom. It's the SIMD extensions that make this design "powerful." Auto-vectorizing compilers haven't lived up to the hype so far (for any microarch ... GPGPUs included).
AMD advocacy is misplaced here, because so far as I know, AMD isn't trying to compete in specialized HPC processors and/or adjunct accelerators. The competition is IBM with its spectrum of Cell/Power7/BlueGeneQ processors, and to some extent the nVidia Kepler+ARM initiative.
It's going to be an interesting competition ... I wouldn't make any predictions of success. Folks should remember that Cell and Power7 have each, in succession, failed to "conquer the HPC world," and for those thinking that Intel has avoided such experiences ... remember Itanium? Or, for that matter, that Knights Corner is an updated Larrabee?