
Imaging Revolution: Forget Frames

2/15/2016 10:05 AM EST
tb100 (Author)
compression
2/16/2016 5:47:43 PM
Image compression is used by all video streaming services and devices, including Netflix, Roku, Comcast, and cell phones. Many of these devices have DSP units to handle the compression algorithms, which include detecting changes from one frame to the next and encoding only the changes.

They talked about needing a DSP function to interface their device to present-day electronics. It seems we already have this; they just need a specialized one that can convert their non-frame output directly into a compressed image-frame stream. Since they are already doing some of this compression implicitly, it should be a lower-power DSP task than what we are doing now.
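
To make that concrete, here is a minimal sketch of such a conversion stage, assuming a hypothetical (x, y, timestamp, polarity) event format; the function name and frame parameters are illustrative, not Chronocam's actual interface:

```python
import numpy as np

def events_to_frame_deltas(events, width, height, frame_period_us):
    """Fold a time-ordered event stream into per-frame delta images.

    events: iterable of (x, y, t_us, polarity) tuples, polarity in {-1, +1},
    sorted by timestamp. Yields (frame_index, delta) where delta is a signed
    change image, the natural input for an inter-frame (P-frame) encoder.
    """
    delta = np.zeros((height, width), dtype=np.int16)
    frame_idx = 0
    for x, y, t_us, pol in events:
        # Flush completed frame periods before placing this event.
        while t_us >= (frame_idx + 1) * frame_period_us:
            yield frame_idx, delta
            delta = np.zeros((height, width), dtype=np.int16)
            frame_idx += 1
        delta[y, x] += pol  # each event is already a per-pixel change
    yield frame_idx, delta

# Example: three events folded into ~30 fps frames (33,333 us per frame).
events = [(10, 20, 100, +1), (10, 20, 200, +1), (50, 60, 40000, -1)]
for idx, d in events_to_frame_deltas(events, width=64, height=64,
                                     frame_period_us=33333):
    print(idx, int(d.sum()))   # frame 0 sums to +2, frame 1 to -1
```

Each delta image is exactly the "changes only" payload an inter-frame encoder expects, which is why this could be a lighter DSP job than full motion estimation on raw frames.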

Chronocam1 (Rookie)
Re: It's true they did not invent it...
2/16/2016 6:51:37 AM
Depends what you mean by "it".

Otherwise, what you state is correct. And we never want to hide our (common) roots, which would be impossible anyway since it is all public. One just needs to look at our publication history, with many papers co-authored with Tobi Delbruck and his group. In particular, there is a recent review paper out that contains a more or less comprehensive history of bio-inspired, event-based vision sensors, going all the way back to their origin in Carver Mead's Caltech lab in the late '80s.
It's all there: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6887319

Also, Zurich is mentioned in the EE Times article as one place of origin of the technology.


Finally, there is a slight contradiction in this sentence: "Chronocam's pixel is *completely* based on it, but Posch *added a second part* to the pixel ..." (if a second part was added, the pixel is not *completely* based on the earlier design). Based on these (and other) additions, also in the way data from such a sensor are processed, Chronocam aims to bring bio-inspired, event-based vision technology to the computer vision market.

Contact: contact@chronocam.com

junko.yoshida (Author)
Re: It's true they did not invent it...
2/16/2016 5:06:37 AM
Thanks for listing the additional links, raphael. Considering the sizable neuromorphic engineering community that has been around for a long time, it is certainly natural that many others have also pursued similar technology.

My apologies for not mentioning the predecessors' work, but this piece wasn't intended to be a comprehensive history. Once again, thanks for chiming in.

junko.yoshida (Author)
Re: I just knew...
2/16/2016 3:55:06 AM
Ha ha. The thought had not occurred to me until you mentioned it, but it's true. The two co-founders are certainly a dynamic duo who apparently know each other very well. It was a fun interview for me. They are smart, passionate and creative when discussing their technology and mission.

raphael@insightness.com (Rookie)
It's true they did not invent it...
2/16/2016 2:21:34 AM
... because their work is based on the work by the group of Tobi Delbruck at the Institute of Neuroinformatics in Zurich (http://sensors.ini.uzh.ch/). Pretty bad that this is not mentioned at all in the article...

An early article from 2006 about the sensor by Lichtsteiner and Delbruck is even called "Freeing vision from frames":

http://siliconretina.ini.uzh.ch/wiki/lib/exe/fetch.php?media=delbrucknmefreeingvisionfromframes2006.pdf

The JSSC article describing the sensor in more detail is from 2008, and Chronocam's pixel is completely based on it, but Posch added a second part to the pixel to measure the intensity.
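
For readers unfamiliar with how such a pixel behaves, here is a rough, simplified model; thresholds and names are illustrative, not taken from the papers. The class models the change detector from the 2008 sensor, and the comment marks where the intensity-measuring second part Posch added would attach:

```python
import math

class ChangeDetectPixel:
    """Simplified event-based pixel: fires when log intensity moves
    past a contrast threshold, instead of sampling on a frame clock."""

    def __init__(self, threshold=0.15):
        self.threshold = threshold      # log-intensity contrast step
        self.ref = None                 # last log intensity that fired

    def update(self, intensity):
        """Feed a new photocurrent sample; return an event or None."""
        log_i = math.log(intensity)
        if self.ref is None:
            self.ref = log_i
            return None
        if abs(log_i - self.ref) >= self.threshold:
            polarity = 1 if log_i > self.ref else -1
            self.ref = log_i
            # An ATIS-style second stage would measure the absolute
            # intensity here and attach it to the change event.
            return (polarity, intensity)
        return None                     # no change: the pixel stays silent

pixel = ChangeDetectPixel()
for sample in [1.0, 1.05, 1.3, 1.3, 0.9]:
    print(pixel.update(sample))   # events only on the 1.3 and 0.9 samples
```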

There are also spin-offs in Zurich that either sell sensors (http://inilabs.com/) or commercialize the technology with concrete applications (www.insightness.com and www.inivation.com).

realjjj (CEO)
..
2/15/2016 2:52:18 PM
So let's call it a pixel-level "event-driven shutter", as opposed to rolling or global. Anyway, if the device is always in motion and every pixel needs to sample continuously, what's the upside, power-wise, versus a traditional sensor with no color filters? Sure, static pixels would sleep, but the highest-volume segments are devices that move. Maybe it would help if the pixels could communicate with each other and be aware of a few layers of pixels around them.

Exposure time, if it can be called that, being variable seems great, but they would need to store that too, and I'm not sure how they deal with it; I guess they just send the data in real time. Is no color a permanent choice or just for now? A pixel-level event-driven shutter with variable exposure would be great for any camera, if they can shrink the pixels enough.

Maybe night vision in glasses is a market; why not have that feature? But I guess their play is very high sampling rates, not low power, and then addressing 3D mapping, gesture recognition and detecting the device's own movement would fit better in glasses. Maybe it's notable that InVisage has a global shutter; that matters since, in the end, very high sampling rates are not needed that often.

I guess, in a way, the resolution is variable too, as they would need to put pixels to sleep when they hit the bandwidth/power/thermal wall, if they could do that. In glasses, pixels aware of the distance to a moving object would help a lot too. This keeps reminding me of GPUs with variable refresh rates, variable quality and variable resolution, not that GPUs are there yet. They sure seem to have a lot of work on their hands.

Interesting, but they need to be realistic about the advantages and disadvantages of their technology and try to find practical solutions; it feels like it would be easy to get carried away and always aim for purity instead of practicality. Fluid sampling rates, exposure, image quality and resolution sound awesome if you can do it and manage it right.
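
As a back-of-the-envelope illustration of that power/bandwidth question, with every number a made-up placeholder (event rates, bits per event, frame-camera figures):

```python
def event_bandwidth_bps(active_fraction, pixels, events_per_active_pixel_s,
                        bits_per_event=32):
    """Estimated data rate for an event camera: only active pixels emit."""
    return active_fraction * pixels * events_per_active_pixel_s * bits_per_event

def frame_bandwidth_bps(pixels, fps, bits_per_pixel=10):
    """Estimated raw data rate for a frame camera: every pixel, every frame."""
    return pixels * fps * bits_per_pixel

PIXELS = 640 * 480

# Static scene: few pixels change, so the event camera wins easily.
print(event_bandwidth_bps(0.01, PIXELS, 100))   # ~9.8 Mbit/s
# Device in motion: nearly every pixel fires, and the advantage flips.
print(event_bandwidth_bps(0.90, PIXELS, 100))   # ~885 Mbit/s
print(frame_bandwidth_bps(PIXELS, 30))          # ~92 Mbit/s raw frames
```

The crossover depends entirely on how much of the scene is changing, which is exactly why the "devices that move" segments are the hard case for this technology.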

CC VanDorne (Author)
I just knew...
2/15/2016 1:37:52 PM
...Simon and Garfunkel would get back together.

Sorry, Junko, I just couldn't help myself. This is a great report and some really forward-looking stuff, but I had the dangdest time holding back my chuckles after seeing their photo. The resemblance is just uncanny, no?

