I still don't get why we need an 'imaging core'.
Should Canon or Nikon or Apple abandon their ASIC designs and instead buy one of these IP cores from Tensilica or CEVA?
Why? What is the benefit of using a generic IP core across all brands and devices?
The fact that imaging and vision algorithms are advancing by leaps and bounds every day is, I think, reason enough to use a programmable IP core. By the time your new imaging ASIC, tailored specifically to one algorithm, is ready, the market may already be seeing the birth of another vision algorithm you want.
Cheaper and better imaging IP cores mean another blow for the Japanese camera manufacturers, who already face tough competition from phones. Combined with their failing lithography division, this could be the last blow for them.
Remember all the crappy MP3 and MP4 players from Taiwan and China? Those were made possible by generic IP cores.
@freddsd3234343242: "Cheaper and better imaging IP cores mean another blow for the Japanese camera manufacturers, who already face tough competition from phones."
Not that tough. If you just want to capture a quick picture of something to upload to Facebook, your smartphone may be just the ticket. If you want to do real photography, a smartphone isn't what you use.
Digital cameras still rely on top-quality optics to capture the image. You can do a lot in software with what you capture, but you are ultimately constrained by what you got in the first place.
Smartphones may eat the market for low end cameras because they'll do about as good a job, but that's about as far as they'll go. If I'm Nikon or Canon or the like, I'm not quaking in my boots at this. I'm investigating what incorporating this into the higher end gear I sell might offer my customers.
CogniVue (a small semiconductor IP company in Canada) had the thesis in 2010 that the world needs an Image Cognition Processor (ICP) for efficient vision processing, just as the world needed a GPU for 2D/3D graphics processing in the late 1990s.
Real-time face recognition and gesture recognition as part of next-gen UIs will bog down multi-core processors, because the UI is on all the time. Depth-map generation in particular is a big challenge. A specialized programmable vision core will be needed. Furthermore, unlike video codecs, there are no standards for vision: different algorithms will be used.
Bring on the ICPs.
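To give a feel for why depth-map generation bogs down a general-purpose core, here is a minimal sketch (my own illustration, not CogniVue's actual pipeline) of naive block-matching stereo in Python. The work per frame grows as height × width × disparity range × block area, which is why fixed-function or specialized programmable hardware is attractive at video rates:

```python
import numpy as np

def depth_map_sad(left, right, max_disp=16, block=5):
    """Naive stereo block matching: for each pixel in the left image,
    compare a block against candidate blocks in the right image and pick
    the disparity with the lowest sum of absolute differences (SAD).
    Cost is O(H * W * max_disp * block^2) per frame -- heavy at 30 fps."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            best_sad, best_d = None, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                sad = np.abs(patch - cand).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Tiny synthetic stereo pair: the right image is the left image shifted
# 4 pixels to the left, so interior pixels should recover disparity ~4.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(32, 48), dtype=np.uint8)
right = np.roll(left, -4, axis=1)
d = depth_map_sad(left, right, max_disp=8, block=5)
```

Even at this toy 32x48 resolution the inner loop runs thousands of SAD comparisons; scale it to 1080p at 30 fps and the appeal of a dedicated vision core is obvious.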
While I was at Xilinx we developed the Zynq platform with embedded vision applications in mind and helped start the Alliance. With thousands of design wins and early successes in ADAS, look for some cool products to emerge.
Pleased to see Alliance membership up to 30 companies now, many of them semiconductor and IP companies with interesting product roadmaps targeting vision apps.