Bier: For example,
a camera can tag that this is a picture of Barbara and Fred, and then find
the other pictures you’ve taken recently that include those same people.
EE Times: Hmm. That may be useful for Facebook posting… What about embedded vision apps for systems other than handsets and cameras?
Bier: Somewhat counter-intuitively, vision-based automotive safety
applications are also very power-constrained. Of course, there’s a big
battery available in the car, but the issue is getting the heat out.
You might have Advanced Driver Assistance Systems (ADAS) completely
contained in a rear-view mirror assembly, for example, and that sucker
gets hot when your car’s been sitting all day in the parking lot in
Phoenix in August. But when you turn the key, it’s got to work 100 percent of the time.
This means you need a very low-power system that
dissipates very little heat. And ADAS is coming on big. For example, the 2013
Honda Accord is the first non-luxury car to have vision-based safety
features as a factory option.
EE Times: The power issue of ADAS is something I had not thought about before.
Bier: OK, one more and then I’ll stop. You know that LiveScribe “smart pen” you have? [Full disclosure: Bier gave me this LiveScribe pen a year ago.]
That thing runs all day on a tiny battery, and it’s doing computer
vision every time you write with it--that’s how it tracks pen strokes
and identifies which page you’re on.
EE Times: In
other words, when you have to keep capturing images and
processing/tracking at the same time, you do need a low-power solution.
I am shifting gears here. Do you have any market data that shows the
growth rate of mobile devices that are equipped with image/video processing?
Bier: Let’s start with smartphones
and tablets. As you know, these are now very high-volume products, and
they’re incorporating more and more embedded vision functionality. [Bier
pointed out that in the third quarter alone last year, 169.2 million
units of smartphones were sold worldwide, according to Gartner. During
the same period, 27.8 million units of tablets were shipped worldwide,
according to IDC.] Dual and quad-core CPUs (Cortex-A9-class and
above) are often designed into such tablets, making them beefy embedded computing platforms.
EE Times: What about the embedded vision growth rate of automotive?
Bier: Yes, ADAS. This is currently a small market, but it’s going to be
huge in my opinion, due to three factors: 1) Something like 100 million
cars and light trucks are manufactured each year; 2) Car accidents cause
an incredible amount of loss of life, injury and property damage
(they’re the No. 1 cause of accidental childhood death in the U.S.); and 3)
ADAS systems can increasingly be made affordable, thanks to improvements
in processors, sensors and other technologies.
Now, it’s still
early-adopter days, and manufacturers are charging a premium for
these systems, just like they did for airbags, anti-lock brakes, and
stability control in the early days of those technologies.
Today, virtually every car sold in the U.S. has those features. According to IMS Research, vision-based ADAS systems will become a billion-dollar-a-year business in the next two years or so.
While I was at Xilinx we developed the Zynq platform with embedded vision applications in mind and helped start the Alliance. With thousands of design wins and early successes in ADAS, look for some cool products to emerge.
Pleased to see Alliance membership up to 30 companies now, many of them semiconductor and IP companies with interesting product roadmaps targeting vision apps.
CogniVue's (a small semiconductor IP company in Canada) thesis in 2010 was that the world needs an Image Cognition Processor (ICP) for efficient vision processing, just as the world needed a GPU for 2D/3D graphics processing in the late 1990s. Bring on the ICPs.
Real-time face recognition and gesture recognition as part of next-gen UIs will bog down multi-core processors, because the UI is on all the time. Depth map generation in particular is a big challenge. A specialized programmable vision core will be needed. Furthermore, there are no standards for vision the way there are for video codecs, so different algorithms will be used.
Bring on the ICPs.
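[Editor's note: the "depth map generation" challenge mentioned in the comment above comes down to estimating, for each pixel, how far a feature shifts horizontally between two camera views. A minimal sketch of the classic block-matching approach in plain Python, with illustrative names and toy list-of-lists images rather than any particular vendor's API, shows why this is compute-heavy: it is a four-deep nested loop per pixel.]

```python
# Naive stereo block matching: for each pixel in the left image, find the
# horizontal shift (disparity) of the matching window in the right image
# that minimizes the sum of absolute differences (SAD). Larger disparity
# means the point is closer to the cameras.

def disparity_map(left, right, window=1, max_disp=4):
    """left/right: grayscale images as lists of lists of ints."""
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(window, h - window):
        for x in range(window, w - window):
            best_cost, best_d = None, 0
            # Only consider shifts that keep the window inside the image.
            for d in range(min(max_disp, x - window) + 1):
                cost = 0
                for dy in range(-window, window + 1):
                    for dx in range(-window, window + 1):
                        cost += abs(left[y + dy][x + dx]
                                    - right[y + dy][x + dx - d])
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y][x] = best_d
    return disp
```

Even this toy version does O(window² × max_disp) work per pixel; at real resolutions and frame rates that cost is what drives the comment's argument for a specialized vision core rather than a general-purpose CPU.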
@freddsd3234343242: "Cheaper and better imaging IP cores mean another blow for the Japanese camera manufacturers who already have tough competition from phones."
Not that tough. If you just want to capture a quick picture of something to upload to Facebook, your smartphone may be just the ticket. If you want to do real photography, a smartphone isn't what you use.
Digital cameras still rely on top-quality optics to capture the image. You can do a lot in software with what you capture, but you are ultimately constrained by what you got in the first place.
Smartphones may eat the market for low end cameras because they'll do about as good a job, but that's about as far as they'll go. If I'm Nikon or Canon or the like, I'm not quaking in my boots at this. I'm investigating what incorporating this into the higher end gear I sell might offer my customers.
The fact that imaging and vision algorithms are advancing by leaps and bounds every day is a good enough reason to use a programmable IP core, I think. By the time you are ready with your new imaging ASIC specifically tailored to one algorithm, the market may already be seeing the birth of another vision algorithm you want.
Cheaper and better imaging IP cores mean another blow for the Japanese camera manufacturers who already have tough competition from phones. Combined with their failing lithography division, this could be the last blow for them.
Remember all the crappy mp3 and mp4 players from Taiwan and China? That was due to generic IP cores.
I still don't get why 'we need an imaging core'.
Should Canon or Nikon or Apple abandon their ASIC designs and instead buy one of these IP cores from Tensilica or Ceva?
Why?? What is the benefit of using a generic IP for all brands and devices?