SAN FRANCISCO – Intel laid out its plans for a soup-to-nuts offering in artificial intelligence, but at least one of the key dishes is not yet cooked.
The PC giant will serve up the full range of products planned by Nervana Systems, the startup it acquired. They will take on mainly high-end jobs, especially training neural networks, an area now dominated by Nvidia’s graphics processors.
Intel’s acquisition of Movidius has not yet closed, leaving a wide opening in computer vision and edge networks. Separately, the company announced several AI software products, services and partnerships.
Movidius’ chief executive made a brief appearance in a break-out session at an Intel AI event here, but could not say when the acquisition will close or what hurdles lay ahead. “We look forward to joining the family,” he said, after sketching out his plans for low-power inference chips for cars, drones, security cameras and other products.
Until the deal is consummated, Intel can’t fully start the work of creating a unified portfolio spanning the range of AI jobs. But that is clearly the intention.
The 2.5-D Nervana Lake Crest is geared to accelerate neural net primitives using 32 Gbytes of High Bandwidth Memory 2.
“AI will transform most industries we know today, so we want to be the trusted leader and developer of it,” said Intel chief executive Brian Krzanich in a keynote launching the half-day event.
Naveen Rao, chief executive and co-founder of Nervana, was the star of the show. Intel has given the green light to the full suite of his startup’s plans, spanning processors, boards, systems, software and AI cloud services.
Nervana’s accelerator, named Lake Crest, will ship next year, promising significantly better performance on neural-net jobs than today’s top-end graphics processors at similar power consumption. The chip, made in a 28-nm TSMC process, has not yet reached first silicon.
Rao gave a first look at the architecture, which was developed from scratch to accelerate neural-network primitives such as those used in Google’s TensorFlow framework. It is made up of an array of processing clusters running simplified math operations Rao called flex point. The approach moves less data than floating-point operations, delivering a 10x boost on calculations compared to chips made in the same process, he said.
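Intel has not published flex point’s actual encoding. As Rao described it, trading full floating point for smaller, simpler operands, the idea resembles block floating point, in which an entire tensor shares a single exponent and the hardware moves only narrow integer mantissas. Below is a minimal Python sketch of that scheme; the function names and the 16-bit mantissa width are chosen here purely for illustration.

import numpy as np

def quantize_flex(tensor, mantissa_bits=16):
    """Encode a float tensor as integer mantissas plus one shared exponent.

    Illustrative block-floating-point scheme only; Intel has not
    detailed the real flex point format.
    """
    max_val = float(np.max(np.abs(tensor)))
    if max_val == 0.0:
        return np.zeros(tensor.shape, dtype=np.int32), 0
    # Pick the shared exponent so the largest magnitude in the tensor
    # fits in the signed mantissa range.
    exponent = int(np.floor(np.log2(max_val))) + 1 - (mantissa_bits - 1)
    mantissas = np.round(tensor / 2.0 ** exponent).astype(np.int32)
    return mantissas, exponent

def dequantize_flex(mantissas, exponent):
    """Recover approximate floats from the mantissas and shared exponent."""
    return mantissas.astype(np.float64) * 2.0 ** exponent

a = np.random.randn(64, 64).astype(np.float32)
b = np.random.randn(64, 64).astype(np.float32)
ma, ea = quantize_flex(a)
mb, eb = quantize_flex(b)

# The matrix multiply itself is pure integer work; only the two shared
# exponents need cheap scalar handling. Accumulate in 64 bits, standing
# in for the wide accumulators hardware would use.
approx = (ma.astype(np.int64) @ mb.astype(np.int64)) * 2.0 ** (ea + eb)

# Quantization error stays small relative to a float reference.
print(np.max(np.abs(approx - a.astype(np.float64) @ b.astype(np.float64))))

The payoff in the sketch mirrors Rao’s claim: the inner loop is integer multiply-accumulate over half-width operands, so the same silicon and memory bandwidth can move and process more values per cycle than with 32-bit floats.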