MADISON, Wis. — Movidius, a leading embedded computer-vision processor company soon to be acquired by Intel Corp., has struck a deal with China’s Hikvision to push artificial intelligence (AI) deeper into surveillance cameras.
The deal puts Movidius in direct contact with Hikvision (pronounced “high-K vision”), a Chinese company known not only as a major global player in the surveillance market, but also for its expertise in advanced visual analytics.
In a phone interview with EE Times, Movidius CEO Remi El-Ouazzane said, “Deploying Artificial Intelligence at the edge [of the network] is becoming a massive trend.”
Movidius, which has played a key role in Google’s Project Tango, has been promoting its ultra-low-power vision processing SoC in a number of embedded systems. The company has set its sights on accelerating the adoption of deep learning in a host of applications, including security cameras, drones and augmented reality (AR)/virtual reality (VR), as El-Ouazzane explained.
Aside from an extended collaboration with Google in neural network technology, Movidius has been working with DJI, in Shenzhen, China, the world’s leading maker of drones and aerial cameras.
Based on Movidius’ Myriad 2 vision processing unit, DJI earlier this year launched its Phantom 4 aircraft, with “the ability to sense and avoid obstacles in real time and hover in a fixed position without the need for a GPS signal,” as described by DJI.
Ranked No. 1 in scene classification at ImageNet 2016
The deal with Hikvision is geared toward driving Movidius’ embedded vision processor into the world of security cameras.
Hikvision security cam
Although running Deep Neural Networks has historically required devices to depend on additional computing power in the cloud, Movidius’ ultra-low power Myriad 2 Vision Processing Unit can run advanced algorithms — like those developed by Hikvision — at the edge, inside cameras themselves, explained Movidius.
Hikvision isn’t just a volume manufacturer of surveillance cameras. Its core expertise lies in the development of Machine Learning-based advanced visual analysis.
In fact, Hikvision just last week won the “scene classification” category at the ImageNet Challenge 2016. Organized by Stanford University, Carnegie Mellon University, the University of Michigan and UNC Chapel Hill, the ImageNet Large Scale Visual Recognition Challenge evaluates algorithms for object detection and image classification at large scale.
Other categories include object detection, object localization, object detection from video and scene parsing.
According to Hikvision Research Institute, its researchers used “inception-style networks and not-so-deep residual networks that perform better in considerably less training time.”
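The “residual networks” Hikvision cites are built around an identity shortcut: each block adds its input back to the output of a small learned transform, which makes deep networks easier to train. The sketch below is a hypothetical, minimal illustration of that shortcut in plain Python (it is not Hikvision’s actual architecture; the single linear-plus-ReLU transform stands in for a real convolutional branch):

```python
def relu(v):
    # Element-wise rectified linear unit
    return [max(0.0, x) for x in v]

def linear(v, weight, bias):
    # Dense layer: output[i] = sum_j weight[i][j] * v[j] + bias[i]
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, v)) + b
            for row, b in zip(weight, bias)]

def residual_block(v, weight, bias):
    """One minimal residual block: output = v + F(v), where F is a
    small learned transform (here, linear + ReLU). Hypothetical
    sketch of the shortcut connection, not Hikvision's model."""
    fv = relu(linear(v, weight, bias))
    # Identity shortcut: the input is added back to F's output,
    # so the block only has to learn a *correction* to v.
    return [x + f for x, f in zip(v, fv)]

# Toy usage: with all-zero weights, F(v) = 0 and the block is a
# pure pass-through -- the shortcut preserves the input signal.
x = [1.0, 2.0, 3.0]
w = [[0.0] * 3 for _ in range(3)]
b = [0.0] * 3
out = residual_block(x, w, b)  # equals x
```

Because the shortcut path carries the signal even when the learned transform contributes nothing, gradients can flow through many stacked blocks, which is what lets such networks train quickly despite their depth.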
Sense, assess and decide
Embedded systems capable of sensing, assessing and deciding will only continue to grow, explained Movidius CEO El-Ouazzane.
Along with this growth, a rising number of tech companies are “trying to attack ‘Deep Learning’ from different layers and come up with new SoC platforms,” he observed.
A case in point is ThinCI (pronounced Think-Eye), a startup that has just come out of stealth.
Given the performance level required for neural network applications in a power-constrained environment, embedded vision processing needs a special, purpose-built architecture, El-Ouazzane explained. The trend is amplified by Moore’s Law, which has recently shown signs of slowing, he added.
Common to all three applications — security cameras, drones, AR/VR — is the demand for improvements in power consumption, explained El-Ouazzane.
But how exactly are neural networks applied to each of these applications?
Next page: Forensic analysis