MADISON, Wis. – As the race for artificial intelligence heats up, hardware is back in vogue.
Look no further than Google’s Tensor Processing Unit (TPU), SoftBank’s acquisition of ARM (SoftBank hopes to be a big player in AI), and now a venture-backed startup rolling out a family of “Deep Learning” computers.
That startup is Wave Computing, based in Campbell, Calif. The six-year-old company came out of stealth mode Thursday (July 21), revealing its design of a massively parallel dataflow processing architecture called the Wave Dataflow Processing Unit (DPU) for deep learning.
Derek Meyer, Wave Computing CEO, told EE Times, “In order to accelerate deep learning, the world needs a new computing architecture.”
Traditional computer architectures are designed for control flow-oriented applications. In contrast, deep learning demands an architecture that can process a huge amount of unstructured data in a parallel fashion, he explained.
Rather than developing a “way too expensive” custom ASIC, Meyer said Wave Computing has leveraged the well-understood dataflow technology and designed a new processing unit that “can natively support Google TensorFlow and Microsoft CNTK.”
The company’s new chip, built using 16-nm process geometry, is in the last phase of tape out, according to Meyer. Wave Computing already claims a handful of customers. “We will support our lead customers under the early access program starting later this year,” he said. General availability of Wave’s Deep Learning Computers will be in 2017.
Wave Computing plans to unveil details of its architecture in September at the Linley Processor Conference in Santa Clara, Calif.
On a broader level, the Wave Dataflow Processing Unit promises to appeal to anyone frustrated with the inadequacies of current hardware architectures – CPUs and GPUs – for artificial intelligence.
Joanne Itow, an analyst at Semico Research Corp., noted the proliferating uses of AI. These range from IBM’s Watson (which populates a database with as many observations as possible to develop probability outcomes and most-likely scenarios) to speech recognition, fraud detection (used by banking and financial institutions) and language translation.
Those unhappy with the deep learning performance of CPUs and GPUs (from such incumbent players as Nvidia) are looking into new solutions, with an eye on a dataflow architecture.
The traditional von Neumann architecture of computing is based on “control flow,” which specifies the order in which individual instructions or function calls are executed.
In a dataflow architecture, by contrast, the order of instruction execution is unpredictable: an instruction fires as soon as its inputs become available. With no program counter, a dataflow machine has no single indicator of where it is in its program sequence, and its behavior is non-deterministic.
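The fire-when-ready idea behind dataflow can be illustrated with a toy interpreter in a few lines of Python. This is purely a sketch of the general concept – it is not Wave’s DPU or any real hardware – where each node in a graph executes as soon as all of its inputs are available, rather than in a program-counter-driven sequence:

```python
# Toy dataflow interpreter (illustrative only; not Wave Computing's
# actual architecture). A node "fires" as soon as all of its inputs
# are available, so execution order is driven by data readiness
# rather than by a program counter.

def run_dataflow(nodes, initial):
    """nodes: name -> (list of input names, function).
    initial: name -> value for externally supplied inputs."""
    values = dict(initial)
    pending = dict(nodes)
    order = []  # the order in which nodes actually fired
    while pending:
        # Find every node whose inputs are all available.
        ready = [name for name, (ins, _) in pending.items()
                 if all(i in values for i in ins)]
        if not ready:
            raise RuntimeError("deadlock: no node can fire")
        for name in ready:
            ins, fn = pending.pop(name)
            values[name] = fn(*(values[i] for i in ins))
            order.append(name)
    return values, order

# Example graph computing (a + b) * (a - b); "sum" and "diff" have no
# mutual dependency, so either may fire first.
graph = {
    "sum":  (["a", "b"], lambda x, y: x + y),
    "diff": (["a", "b"], lambda x, y: x - y),
    "prod": (["sum", "diff"], lambda x, y: x * y),
}
values, order = run_dataflow(graph, {"a": 5, "b": 3})
print(values["prod"])  # (5 + 3) * (5 - 2... i.e. 8 * 2 = 16
```

Note that “sum” and “diff” can fire in either order (or, on parallel hardware, simultaneously), which is exactly the non-determinism – and the parallelism opportunity – that dataflow architectures exploit.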
Changing computing landscape
Although the dataflow concept has been around for a long time, none of the attempts to use it in general-purpose computer hardware have been commercially successful, according to Kevin Krewell, principal analyst at Tirias Research. “But the computing landscape has changed, opening up a new opportunity in machine learning.”
This has worked out well for Wave Computing. The startup has been exploiting its expertise in dataflow technology and building a patent portfolio since 2010. It decided to steer its focus toward machine and deep learning about three years ago.