LAKE WALES, Fla.—Intel will announce its intention to acquire Nervana Systems at its Intel Developer Forum next week (IDF 2016, San Francisco, Calif., Aug. 16-18), a bid to render the graphics processing unit (GPU) obsolete for deep-learning artificial intelligence (AI) applications.
Intel dominates the high-performance computing (HPC) market, but Nvidia has made significant inroads into deep-learning verticals with its sophisticated GPUs. Nervana Systems (Palo Alto, Calif.), however, has already cut into Nvidia's Cuda ecosystem with its Cuda-compatible Neon cloud service. Intel is acquiring Nervana chiefly for its deep-learning accelerator chip, promised by 2017. If the chip plays out as advertised, Intel will sell deep-learning accelerator boards that beat Nvidia's GPU boards, while its newly acquired Neon cloud service outperforms Nvidia's Cuda software.
Intel’s Diane Bryant, executive vice president and general manager of the Data Center Group, with Nervana’s co-founder Naveen Rao.
"This is not a slam dunk by Intel over Nvidia," Karl Freund, senior analyst for deep learning and HPC at Moor Insights & Strategy (Austin, Texas), told EE Times. "But it makes a lot of sense in a market that is growing very, very fast. Graphics processing units (GPUs) are the prominent way to train deep-learning neural networks, and Nvidia is the leader. Intel has its multi-core Xeons and Xeon Phis and, with the acquisition of Altera, its FPGAs, but it doesn't have GPUs. Acquiring Nervana is a way of getting into the deep-learning market not by copying the general-purpose GPU strategy, but by offering a specialized coprocessor specifically designed for neural networks."
Nervana's 8-terabit-per-second Engine chip is a silicon-interposer-based multi-chip module, with stacks of 3-D memory surrounding a 3-D torus fabric of neuron-like compute elements, each built from low-precision floating-point units (FPUs). As such, it can pack many more deep-learning calculations per second into a smaller silicon die than competitors' general-purpose GPUs, according to Freund.
Training deep-learning networks involves moving a lot of data, and current memory technologies are simply not up to the task. The Nervana Engine uses a new memory technology called High Bandwidth Memory that is both high-capacity and high-speed, providing 32 GB of on-chip storage and a blazingly fast 8 terabits per second of memory bandwidth.
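Back-of-envelope arithmetic shows what that bandwidth figure buys. The sketch below (illustrative only; the GPU comparison figure is an assumption, not from the article) computes how long it takes to stream the entire 32 GB on-chip store once at a given bandwidth:

```python
# Illustrative sketch: time to read every byte of on-chip memory once.
# Figures for the comparison GPU are an assumption, not from the article.

CAPACITY_BYTES = 32 * 10**9        # 32 GB of on-chip High Bandwidth Memory
NERVANA_BW_BPS = 8 * 10**12        # 8 terabits/s, as claimed for the Engine
GPU_GDDR5_BW_BPS = 336 * 8 * 10**9 # ~336 GB/s, typical 2016 GPU (assumption)

def sweep_time_s(capacity_bytes: int, bandwidth_bits_per_s: int) -> float:
    """Seconds to stream the full capacity once at the given bandwidth."""
    return capacity_bytes * 8 / bandwidth_bits_per_s

print(sweep_time_s(CAPACITY_BYTES, NERVANA_BW_BPS))    # 0.032 s
print(sweep_time_s(CAPACITY_BYTES, GPU_GDDR5_BW_BPS))  # roughly 3x slower
```

At 8 Tb/s the whole working set can be swept in about 32 milliseconds, which is the kind of turnover rate that keeps thousands of low-precision FPUs fed during training.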
"Deep-learning neural networks can get away with much lower-precision calculations than a general-purpose GPU, which is overkill in theory. Nervana hasn't released any benchmarks yet, and they won't have silicon until next year, but a special-purpose chip for deep-learning neural networks will likely outperform the same algorithms run on a general-purpose GPU," Freund told EE Times.
Intel claims that 97 percent of the world's servers deployed to support machine learning run Xeon and Xeon Phi processors, but that less than 10 percent of servers worldwide are deployed for machine learning. However, Intel also claims that machine learning is the fastest-growing form of artificial intelligence (AI), and thus it wants the Nervana Engine's 3-D torus-fabric deep-learning accelerator ready to keep from losing market share to GPUs.
"Artificial intelligence is transforming the way businesses operate and how people engage with the world. Machine learning, and its subset deep learning, are key methods for the expanding field of AI," said Intel's Diane Bryant, executive vice president and general manager of the Data Center Group, in her blog post about the acquisition.
Intel will incorporate Nervana's algorithms into its Math Kernel Library and integrate them with industry-standard frameworks. In addition, the acquisition nets Intel Nervana's Neon service, adding Cuda-compatible deep learning to its portfolio of cloud services.
According to Freund, Nvidia may have to respond with its own low-precision specialized deep learning processor to stay competitive with the Intel+Nervana proposition.
Nervana's team of 48 engineers and executives will join Intel's Data Center Group run by Bryant.
— R. Colin Johnson, Advanced Technology Editor, EE Times