LAKE WALES, Fla. -- Cray Inc.'s new CS-Storm accelerated cluster supercomputers -- the Cray CS-Storm 500GT and the Cray CS-Storm 500NX -- boost their artificial intelligence (AI) capabilities with massive arrays of Nvidia Tesla graphics processing unit (GPU) accelerators for deep machine learning.
Deep learning scales nearly linearly on Nvidia GPUs, with up to 35,840 CUDA cores per node available to divide and conquer AI applications. The Nvidia accelerators are tightly integrated with the latest Intel Xeon "Skylake" processors on the 500GT and with Intel Xeon E5-2600 v4 "Broadwell" processors on the 500NX. Nvidia Tesla P40 or P100 PCIe GPU accelerators are available on the 500GT, while the 500NX sports Nvidia Tesla P100 SXM2 GPU accelerators.
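The 35,840 figure appears to count CUDA cores rather than whole GPUs. A quick sanity check, under the assumptions that each Tesla P100 carries its published 3,584 CUDA cores and that a CS-Storm node holds a maximum of 10 GPUs:

```python
# Sanity check on the per-node CUDA-core count.
# Assumptions: Tesla P100 with 3,584 CUDA cores; 10 GPUs per CS-Storm node.
cuda_cores_per_p100 = 3584   # published Tesla P100 CUDA core count
gpus_per_node = 10           # assumed maximum GPU count per node

total_cores = cuda_cores_per_p100 * gpus_per_node
print(total_cores)  # 35840, matching the figure quoted above
```

The product matches the quoted 35,840, which is consistent with a fully populated 10-GPU node.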
Together the Cray CS-Storm systems provide as much as 187 tera-operations per second (TOPS) per node, or 2,618 TOPS per standard rack, for deep machine-learning applications. Both supercomputers use the standard Cray programming environment, Cray Sonexion scale-out storage, and Cray cluster-management software.
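A back-of-envelope check ties the two performance figures together, under the assumption (not stated in the source) that a standard rack holds 14 CS-Storm nodes:

```python
# Rack-level arithmetic implied by the quoted numbers.
# Assumption: 14 nodes per standard rack (inferred, not from the source).
tops_per_node = 187          # quoted per-node deep-learning throughput
tops_per_rack = 2618         # quoted per-rack throughput

nodes_per_rack = tops_per_rack // tops_per_node
print(nodes_per_rack)                    # 14
print(nodes_per_rack * tops_per_node)    # 2618, consistent with the rack figure
```

The division comes out exact, so the per-rack number is simply the per-node number scaled by a presumed 14-node rack.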
Cray has committed major resources to bringing deep machine learning to its supercomputers, offering both Nvidia GPU accelerators and Intel Xeon Phi accelerators on different models. All models use Intel Xeon chips as their main processors, except for the aging Cray Urika-GX analytics platform, which handles CPU-based machine learning with Spark MLlib and the Cray Graph Engine.
— R. Colin Johnson, Advanced Technology Editor, EE Times