Bill Banta, CEO of Centr, added that his company's experience with the US Army, for example, helped it develop a 360-degree machine vision technology that can identify certain objects -- such as people or tanks -- both in front of and behind the camera.
Centr completed its development project for the US Army earlier this year. Its panoramic camera was already in use by Fox Sports last summer, demonstrating that it can stream and broadcast 360-degree video live in real time. While working on these projects, Centr has been determined to bring its embedded vision technology down into a consumer product at a consumer price.
Asked about the building blocks that enable the small consumer panoramic video camera, Alioshin first mentioned the computationally intensive Movidius chip. "It can move a lot of pixels in a highly parallel manner," he said. Centr also had to develop a lot of custom hardware to miniaturize the camera. The CTO described a plastic lens, a unique power source (with a curved shape to fit the design), a uniquely shaped PCB, and an aggressive electronics layout on the board, "because there is no room to put a fan on the board."
Bier, asked about Movidius, explained that among vision processing companies, which tend to offer their technology as silicon IP, Movidius has taken a different path: it has become a chip company.
The Movidius chip's key advantage is its speed and low power, noted Bier.
Many vision processors are designed either to process images for machine vision or to enhance image quality. The Movidius chip is designed to do both, said Bier. By integrating image signal processing (ISP) and vision processing, Movidius ends up with a chip that sits closer to the image sensor, Bier explained. By sharing the same hardware and closing the distance to the image sensor, Movidius can run its chip at low power.
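Movidius has not disclosed its architecture, so the sketch below is purely illustrative of the general point Bier makes: fusing the ISP and vision stages means the intermediate frame can stay in shared on-chip buffers rather than making a power-hungry round trip through external memory between two discrete chips. The function names and toy stage bodies are assumptions for illustration, not anything Movidius has published.

```python
import numpy as np

def isp_stage(raw):
    """Toy ISP stage: normalize raw sensor values (stand-in for demosaic, etc.)."""
    return raw.astype(np.float32) / 255.0

def vision_stage(frame):
    """Toy vision stage: total edge energy via simple finite differences."""
    gx = np.abs(np.diff(frame, axis=1)).sum()
    gy = np.abs(np.diff(frame, axis=0)).sum()
    return float(gx + gy)

def separate_pipeline(raw):
    # Discrete chips: the ISP output frame crosses external memory
    # between the ISP chip and the vision chip.
    frame = isp_stage(raw)
    return vision_stage(frame)

def fused_pipeline(raw):
    # Integrated chip: both stages operate on the same buffer, so the
    # frame never leaves the chip between processing steps.
    return vision_stage(isp_stage(raw))
```

The results are identical; the difference in a real system is where the intermediate frame lives, which is where the power savings Bier describes would come from.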
At this point, however, Movidius has not disclosed details of the chip's architecture. A potential drawback of the Movidius chip is "programmability," said Bier. It's a problem with any chip built on a specialized, exotic architecture, he said. "That's a negative that's not limited to Movidius."
NeoLab's Neo 1, at first impression, looks a lot like the digital pens pioneered by Anoto and now available from Livescribe. Eddie Lee, NeoLab's CTO, however, is hoping that people will recognize his Neo 1, when it launches in the US market this summer at $150, as "a lot smarter" than conventional digital pens. What those digital pens do is see and track handwriting on special paper. The pen serves essentially "as a digitizer," Lee told us.
What Neo 1 hopes to do is to add "environmental recording" apps.
Beyond just digitizing handwriting and wirelessly sending it to a smartphone or tablet, the Neo 1 will be able to "track down everything around you" while you are writing or drawing with the digital pen, Lee explained.
That includes "the atmosphere of the place," he said. The pen and its apps can capture, for example, that it was a stormy night when you were writing that letter. The pen could also record samples of ambient voices and music.
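NeoLab has not described how these apps are structured, but the idea Lee outlines -- attaching a snapshot of the surroundings to a note at writing time -- can be pictured as follows. Everything here (function names, fields, the weather and audio stubs) is hypothetical illustration, not NeoLab's API.

```python
from datetime import datetime, timezone

def environment_snapshot():
    """Hypothetical 'environmental recording': gather context at writing time.
    The real sources (a weather service, the pen's microphone) are stubbed."""
    return {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "weather": "stormy",          # stub: would come from a weather service
        "ambient_audio": "clip_001",  # stub: reference to a recorded audio clip
    }

def annotate_note(note_text):
    """Attach the environment snapshot to a handwritten note's digital copy."""
    return {"text": note_text, "environment": environment_snapshot()}
```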
Neo 1 synchronizes analog and digital worlds.
As to the basic building blocks of the Neo 1, Lee listed an infrared camera incorporated at the tip of the pen that sees the pattern printed on the special paper, a CPU, memory, and Bluetooth connectivity.
Lee added that the control block is particularly critical. It synchronizes hand movement with the captured images; blurs can occur, and the system needs to eliminate them.
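The article does not say how the Neo 1 handles blur. One common, simple heuristic for flagging blurred frames -- widely used in computer vision generally, and not claimed here to be NeoLab's method -- is the variance of the image's Laplacian response: sharp frames have strong, varied edge responses, while blurred frames do not. A minimal sketch, with an illustrative threshold:

```python
import numpy as np

# Standard 3x3 Laplacian kernel.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float32)

def laplacian_variance(img):
    """Variance of the Laplacian response; low values suggest a blurred frame."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    # Valid-mode convolution written as shifted slices (kernel is symmetric).
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def is_blurred(img, threshold=50.0):
    """Flag a frame as blurred; the threshold is illustrative, not calibrated."""
    return laplacian_variance(img) < threshold
```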
The Neo 1 pen also needs to manage the information embedded in the paper. The pen can extract elements such as the owner of the paper, page numbers, x-y coordinates on the page, time stamps, and handwriting pressure.
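The elements listed above map naturally onto a per-sample record. The field names and types below are illustrative guesses for the sake of the sketch, not NeoLab's actual data format.

```python
from dataclasses import dataclass

@dataclass
class PenSample:
    """One decoded pen sample; fields mirror the elements Lee lists."""
    owner_id: int      # owner of the paper, from the printed pattern
    page: int          # page number
    x: float           # x coordinate on the page
    y: float           # y coordinate on the page
    timestamp_ms: int  # time stamp from the pen
    pressure: float    # handwriting pressure, e.g. normalized 0..1

# A stroke is then simply an ordered list of samples:
stroke = [
    PenSample(owner_id=7, page=3, x=10.2, y=44.0, timestamp_ms=0, pressure=0.6),
    PenSample(owner_id=7, page=3, x=10.9, y=44.3, timestamp_ms=8, pressure=0.7),
]
```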
Lee explained that the Neo 1 deploys a combination of three processors: two ARM9 processors and CogniVue's APEX Image Cognition Processor (ICP). First, one of the ARM9 processors reads images from the image sensor and sends them to CogniVue's ICP. The ICP segments each image into blocks, rearranges the data, and processes the segmented blocks in parallel. The ICP then hands off a binary file to the second ARM9, which examines the neighborhood where the handwriting took place and decodes the information embedded in the paper.
Lee noted that the three processors work simultaneously and need to process 100 to 160 images per second.
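The division of labor Lee describes is a classic processing pipeline: each processor works on a different frame at the same time, which is how the system can sustain 100 to 160 images per second (a budget of roughly 6 to 10 ms per frame per stage). The sketch below imitates that three-stage structure with threads and queues; the stage bodies are toy stand-ins, not the actual sensor readout, segmentation, or decoding logic.

```python
import queue
import threading

FPS = 160                      # upper end of the stated 100-160 images/sec
FRAME_BUDGET_MS = 1000 / FPS   # per-frame budget for each stage: 6.25 ms

def readout(frames, out_q):
    """Stage 1 (first ARM9): read frames from the sensor (simulated here)."""
    for f in frames:
        out_q.put(f)
    out_q.put(None)  # end-of-stream marker

def segment(in_q, out_q):
    """Stage 2 (ICP): segment each image into blocks for parallel processing."""
    while (f := in_q.get()) is not None:
        blocks = [f[i:i + 2] for i in range(0, len(f), 2)]  # toy 'segmentation'
        out_q.put(blocks)
    out_q.put(None)

def decode(in_q, results):
    """Stage 3 (second ARM9): decode the embedded-paper data from the blocks."""
    while (b := in_q.get()) is not None:
        results.append(sum(len(x) for x in b))  # toy 'decoding'

frames = [list(range(8)) for _ in range(5)]
q1, q2, results = queue.Queue(), queue.Queue(), []
stages = [threading.Thread(target=readout, args=(frames, q1)),
          threading.Thread(target=segment, args=(q1, q2)),
          threading.Thread(target=decode, args=(q2, results))]
for t in stages:
    t.start()
for t in stages:
    t.join()
```

Because all three stages run concurrently, throughput is limited by the slowest stage rather than the sum of all three, which is the point of running the processors simultaneously.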
For those interested in learning more about embedded vision, the Embedded Vision Summit is scheduled for May 29 in Santa Clara, Calif., where CogniVue, for example, will demonstrate the vision processor used in the Neo 1 pen.