NEW YORK – Twenty years ago, imaging and vision were subjects that only camera manufacturers cared about.
Today, imaging has become a decidedly hot topic among a diverse world of system engineers, including those who build automotive electronics, mobile handsets, tablets, PCs, digital TVs and medical devices.
Imaging now gives machines the ability to see and understand the world. Sure, it ain’t perfect, but it is improving.
What has changed is that system vendors no longer use imaging or vision technology just to capture images for their own sake. They see imaging as a fundamental tool to “extract meaning from pixels,” as Jeff Bier, president of Berkeley Design Technology Inc., likes to say.
Imaging and vision can help an embedded system detect, identify and track a person or object; they can help a system diagnose a person’s health or track a person’s emotions. Imaging can certainly help a driver drive more safely.
You know how things go in the electronics world. Once chip companies start integrating DSP IP cores dedicated to imaging into their SoCs, it’s a sure sign that system companies want imaging solutions, badly.
The mobile industry’s hunt for embedded vision technology reminds me of the PC industry in the 1990s.
Every time a new multimedia type--be it audio, 2-D/3-D graphics or video--popped up on the horizon for PCs, it drove the electronics industry to develop new chips, boards and eventually new PC architectures to accommodate the new multimedia type. That, in turn, generated fresh demand for new PCs.
Embedded vision technology is on the cusp of following a similar path. It’s positioned to drive a new generation of mobile handsets, tablets, digital cameras and automotive systems.
I understand how embedded vision algorithms--originally developed in the fields of computer vision and man-machine interfaces--are now coming downstream to handsets and other embedded devices.
In fact, Ceva earlier this year released a new computer vision software library for developing vision-enabled applications targeting mobile, home, PC and automotive markets. It is based on OpenCV, a standard library of programming functions for computer vision that leverages decades of hard work.
What I don’t quite get is exactly which embedded vision applications are prompting mobile handsets and other systems to crave something like Tensilica’s IVP or Ceva’s imaging core, rather than just making do with plain old multi-core CPUs.
I popped the question to Bier.
EE Times: Why would embedded systems guys go for a specialized imaging processor?
Bier: Multi-core CPUs are very powerful and programmable, but not very energy-efficient. So if you have a battery-powered device that is going to be doing a lot of vision processing, you may be motivated to run your vision algorithms on a more specialized processor.
EE Times: What are good app examples for a battery-powered device that does a lot of vision processing?
Bier: Well, there are many, some real today and others on the horizon:
Smartphones increasingly do vision processing for applications like driver safety (see iOnRoad), augmented reality (see Vuforia), visual search (see Google Goggles), and gesture user interfaces (see eyeSight Mobile Technologies).
Another is digital cameras and camcorders, which need more sophisticated features and performance to justify their existence now that smartphones are ubiquitous.
Everyone keeps talking about smartphones displacing cameras, but despite the fact that I’m wedded to my smartphone, I have four DSCs and a camcorder that I use regularly. You already know about features like face detection and smile detection. More sophisticated features are coming, like face recognition.
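To put rough numbers on the energy argument Bier makes, consider how much arithmetic even a trivial vision kernel performs per frame. Below is a toy, pure-Python sketch of a 3x3 Sobel edge filter, one of the simplest building blocks in detection pipelines; it is illustrative only, and a real system would run such a kernel on SIMD or DSP hardware rather than in Python.

```python
# Toy 3x3 Sobel edge filter, written in plain Python purely to show
# the per-pixel arithmetic cost of even a basic vision kernel.

def sobel_magnitude(img):
    """img: list of rows of grayscale ints; returns gradient magnitudes."""
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for dy in range(3):
                for dx in range(3):
                    p = img[y + dy - 1][x + dx - 1]
                    gx += gx_k[dy][dx] * p
                    gy += gy_k[dy][dx] * p
            out[y][x] = abs(gx) + abs(gy)  # L1 approximation of magnitude
    return out
```

Each output pixel costs roughly 18 multiply-accumulates, so at 1920x1080 and 30 frames/s this single filter alone works out to over a billion operations per second, before any actual detection or recognition logic runs. That is the workload that makes an energy-efficient specialized core attractive in a battery-powered device.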
While I was at Xilinx we developed the Zynq platform with embedded vision applications in mind and helped start the Alliance. With thousands of design wins and early successes in ADAS, look for some cool products to emerge.
Pleased to see Alliance membership up to 30 companies now, many of them semiconductor and IP companies with interesting product roadmaps targeting vision apps.
CogniVue's (a small semiconductor IP company in Canada) thesis in 2010 was that the world needs an Image Cognition Processor (ICP) for efficient vision processing, just as the world needed a GPU for 2D/3D graphics processing in the late 1990s. Bring on the ICPs.
Real-time face recognition and gesture recognition as part of next-gen UIs will bog down multi-core processors, since the UI is on all the time. Depth-map generation in particular is a big challenge. A specialized programmable vision core will be needed. Furthermore, there are no standards for vision the way there are for video codecs; different algorithms will be used.
Bring on the ICPs.
@freddsd3234343242: "Cheaper and better imaging IP cores mean another blow for the Japanese camera manufacturers, who already have tough competition from phones."
Not that tough. If you just want to capture a quick picture of something to upload to Facebook, your smartphone may be just the ticket. If you want to do real photography, a smartphone isn't what you use.
Digital cameras still rely on top-quality optics to capture the image. You can do a lot in software with what you capture, but you are ultimately constrained by what you got in the first place.
Smartphones may eat the market for low end cameras because they'll do about as good a job, but that's about as far as they'll go. If I'm Nikon or Canon or the like, I'm not quaking in my boots at this. I'm investigating what incorporating this into the higher end gear I sell might offer my customers.
The fact that imaging and vision algorithms are advancing by leaps and bounds every day is reason enough to use a programmable IP core, I think. By the time your new imaging ASIC, tailored to one specific algorithm, is ready, the market may already be seeing the birth of another vision algorithm you want.
Cheaper and better imaging IP cores mean another blow for the Japanese camera manufacturers, who already have tough competition from phones. Combined with their failing lithography division, this could be the last blow for them.
Remember all the crappy MP3 and MP4 players from Taiwan and China? That was due to generic IP cores.
I still don't get why "we need an imaging core."
Should Canon or Nikon or Apple abandon their ASIC designs and instead buy one of these IP cores from Tensilica or Ceva?
Why?? What is the benefit of using a generic IP for all brands and devices?