PARIS – Autonomous cars, field drones, surveillance cameras, medical imaging diagnostic tools or factory control/inspection robots… the list goes on. Designers of these vision-enabled embedded systems are beginning to look seriously at machine learning as a means to differentiate their products and dramatically improve system intelligence.
For many system designers, however, combining computer vision with machine learning remains largely a theoretical goal. Artificial intelligence is a moving target, and designing a machine-learning inference engine demands significant hardware expertise.
There is no one-size-fits-all inference engine for deep neural networks that system designers could deploy across a broad range of embedded systems.
At Embedded World in Nuremberg, Germany, this week, Xilinx is hoping to change this perception by rolling out what it calls a “reVision” stack.
Xilinx designed the stack to “enable a much broader set of software and systems engineers, with little or no hardware design expertise, to develop intelligent vision-guided systems easier and faster,” said Steve Glaser, senior vice president of corporate strategy at Xilinx.
In talks with customers who have already begun developing machine-learning technologies, Xilinx identified “8 bit and below fixed point precision” as the key to significantly improving efficiency in machine-learning inference systems. “To gain the best response time, we need to make sure our customers can create a streamlined data flow from the sensors through inference and control,” said Glaser.
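To see why 8-bit fixed-point precision matters for inference efficiency, consider symmetric int8 quantization, a common scheme for shrinking trained weights before deployment. The sketch below is illustrative only; the function names and per-tensor scaling scheme are assumptions for the example, not Xilinx's actual implementation.

```python
import numpy as np

def quantize_int8(x):
    """Map float32 values to int8 using a single per-tensor scale factor."""
    scale = np.max(np.abs(x)) / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 representation."""
    return q.astype(np.float32) * scale

np.random.seed(0)
weights = np.random.randn(4, 4).astype(np.float32)

q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)

# Rounding keeps each element within half a quantization step of the original,
# while storage drops from 32 bits to 8 bits per weight.
max_err = np.max(np.abs(weights - recovered))
```

The 4x memory saving, and the cheaper integer multiply-accumulate hardware it enables, is the efficiency gain Glaser alludes to; the cost is the bounded rounding error computed above.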
Other mandates laid out by Xilinx customers include reconfigurability to support the latest neural networks, algorithms and sensors, and support for “any-to-any” connectivity to legacy or new machines, networks and the cloud, Glaser added.
Karl Freund, senior analyst for HPC and Deep Learning at Moor Insights & Strategy, told EE Times, “Artificial Intelligence remains in its infancy, and rapid change is the only constant.” In this circumstance, Xilinx seeks “to ease the programming burden to enable designers to accelerate their applications as they experiment and deploy the best solutions as rapidly as possible in a highly competitive industry,” he explained.
Xilinx’s approach in designing machine learning inference systems deviates from the trail blazed by CPU/GPU vendors such as Nvidia or Intel.
Loring Wirbel, a senior analyst at The Linley Group, explained, “The traditional path taken by CPU/GPU vendors is to start with a large architecture for training-based learning - Nvidia's Tesla P100, Intel's Knights Landing - and prune down the system for unsupervised learning, using single-precision or half-precision architectures – such as Nvidia P40, Intel Knights Mill.”
Wirbel noted that the merchant semiconductor players, in summary, look to academia to decide the least-precise architecture they can use for inference. “This is still a moving target,” he added.
“What’s interesting in Xilinx's software offering,” said Wirbel, “is that it builds upon the original stack for cloud-based inference, the Reconfigurable Acceleration Stack, and expands inference capabilities to the network edge and embedded applications.”
Wirbel added, “One might say they took a backward approach vs. the rest of the industry. But I see machine-learning product developers going a variety of directions in trained and inference subsystems.” At this point, he believes there's “no right way or wrong way.”