CUPERTINO, Calif. – At a Hot Chips event dominated by machine learning, it was clear that just about everyone is doing something in AI, even though no one agrees on exactly what to do.
Microsoft showed impressive results running neural networks on a software platform based on Intel FPGAs. One of its engineers dismissed the emerging crop of machine learning ASICs as inflexible.
Google showed similarly solid results using its TensorFlow ASICs. In a keynote, Google’s neural networking guru Jeff Dean (below) encouraged others to design accelerators for low-precision linear algebra.
Amazon called it Day One for FPGAs in the data center, encouraging engineers to use its FPGA-based cloud services to design their chips. A Baidu engineer said FPGAs can’t handle the data center’s diverse workloads and called for a hybrid architecture he called an XPU.
Image: EETimes, other images courtesy of Hot Chips.
Among chip designers, Intel pitched four of its chip families as different tools for skinning the neural networking beast. Two startups — ThinCI and Wave Computing — showed competing architectures for machine learning, and three research efforts showed AI-related designs.
Graphics rivals AMD and Nvidia described their latest GPUs. Vega and Volta represent very different design points, an expert noted, but both aim to stake out strategic positions in machine learning.
Rounding out the event, a RISC-V proponent called for a new approach to chip design. Among a handful of more traditional talks, Cisco described a networking processor, and IBM tried to convince an event focused on the future of machine learning that the mainframe running Cobol is still relevant.
See the following pages for observations and images on the top talks.
Next page: Microsoft goes soft on FPGAs