I find what machine vision has accomplished over the years astounding. Much of it, of course, is due to advances in analytic software, but what image sensors are capable of capturing today is equally amazing.
Now that machine vision is moving from the factory floor to the street, I'd say the image sensor battle has moved well past the megapixel race.
I think your point about uses being only limited by one's imagination is right on target, Junko.
One application I've wondered about is the supermarket self checkout. Most items you buy have a bar code, so fair enough. But some items, like produce, either have a little sticker-cum-bar-code glued on them, or nothing at all. If there is no sticker, the buyer has to search through oodles of strangely organized* pictures on a display to select the item before it can be weighed or the quantity entered manually.
Why? We can see it and recognize it, can't we? Why can't the machine do the same?
I think it's revealing to see that license plate and road sign reading software is becoming available. It's an interesting trend. Instead of having to change these legacy systems to make them machine readable, we change the machines to make them capable of reading signs designed for human consumption.
On the megapixel hype, also a good point. The simplest way to describe why is this: the lens focuses the image on the sensor. A cheap or too-small lens often lacks the resolution to make use of a very tight arrangement of pixels in the sensor. Lens blur won't allow much differentiation in the light hitting adjacent pixels, so the benefit of more tightly spaced pixels can't be exploited. A better lens, and/or a larger image sensor that spreads the pixels apart, is often a more meaningful improvement than raw pixel count.
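That trade-off can be sanity-checked with the classic diffraction-limit rule of thumb: the Airy disk diameter is roughly 2.44 times the wavelength times the lens f-number. A quick sketch (the 550 nm wavelength, f/2.2 aperture, and 1.1 µm pixel pitch below are illustrative assumptions, not figures from the article):

```python
# Rough check: does the lens out-resolve the pixel grid?
# Airy disk diameter (diffraction limit): d ≈ 2.44 * wavelength * f-number.
def airy_disk_um(wavelength_nm: float, f_number: float) -> float:
    """Diffraction-limited blur spot diameter in micrometers."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

# Illustrative numbers: green light at 550 nm, a smartphone-class f/2.2 lens.
spot = airy_disk_um(550, 2.2)
pixel_pitch_um = 1.1  # a typical small-sensor pixel pitch

# If the blur spot covers several pixels, extra megapixels add
# little real resolution.
print(f"blur spot {spot:.2f} um spans {spot / pixel_pitch_um:.1f} pixels")
```

With these assumed numbers the blur spot spans well over two pixels, which is exactly the regime where finer pixel pitch stops paying off.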
* Are green beans listed under G, or under B? Oddly, sometimes under B (beans, green). Are zucchini listed under z or under s? Oddly, even if they are labeled "squash" on the shelf, they might be listed under z in the checkout machine.
Not only that, but the smaller pixels needed to achieve those higher pixel counts mean less signal and a lower signal-to-noise ratio, which means lower dynamic range. That said, I am really impressed with the pictures I can take with my Samsung Galaxy S4 smartphone! My wife too, and she's always borrowing it because it takes pictures at least as good as her Fujifilm digital SLR's, and much better than her low-end Samsung smartphone's. I'm not so impressed with the low-light capability; no doubt those smaller pixels and the millimeter-sized lens don't help.
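The signal-to-noise point follows directly from photon shot noise: noise grows as the square root of the signal, so SNR also grows as the square root of collected electrons, and a pixel with a quarter the area collects roughly a quarter the electrons. A toy calculation (the full-well figures are assumed, ballpark values):

```python
import math

# Toy shot-noise model: signal scales with pixel area,
# and shot noise = sqrt(signal), so SNR = sqrt(signal).
def shot_noise_snr_db(full_well_electrons: float) -> float:
    """Peak SNR in dB, considering photon shot noise only."""
    return 20 * math.log10(math.sqrt(full_well_electrons))

# Illustrative full-well capacities, roughly proportional to pixel area:
big_pixel_snr = shot_noise_snr_db(20000)   # e.g. a ~2 um pixel
small_pixel_snr = shot_noise_snr_db(5000)  # e.g. a ~1 um pixel (1/4 the area)

print(f"large pixel: {big_pixel_snr:.1f} dB, small pixel: {small_pixel_snr:.1f} dB")
```

Quartering the pixel area costs about 6 dB of peak SNR in this simple model, before read noise and dark current make the small pixel look even worse.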
Speaking of optical dynamic range, when is somebody going to come up with liquid crystal sunglasses? Forget the photochromic glasses; they are too slow, they don't get dark enough, and they don't work in the car because they're only sensitive to ultraviolet light, and the window glass blocks UV. I want glasses with a knob I can turn to darken them when I go outside and lighten them back up when I come inside again. Don't forget the human in the loop! Duh!
Photochromic glasses that darken in bright light and lighten indoors have been around for decades; no knob required. But many years ago there were glasses that darkened electrically, just like welding glasses. A massive failure, if I remember correctly.
If there is no sticker, the buyer has to search through oodles of strangely organized* pictures on a display, to select the item, before it can be weighed or the quantity entered manually.
When I lived in France, they, too, made me go through oodles of strangely organized pictures so that I could put my produce on a scale, weigh it, and print a price sticker, and THEN I could finally bring it to a cashier. The worst shopping experience ever, in a French supermarket. (I can see pictures all right, but if I didn't know the name of the produce in French, it took me forever to spot the right picture!)
But at any rate, I had not thought about this the way you did. (You are smart.) Why wouldn't we make a scanner (with an image sensor) that can actually see and recognize the shapes and colors of produce? Certainly possible, isn't it?
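At its crudest, recognizing produce could start with something as simple as matching the average color of the item against a small reference table. A minimal sketch, purely illustrative (the reference colors and item names are made up; a real system would use trained classifiers on shape and texture as well):

```python
# Toy produce recognizer: classify by nearest average color.
# Reference colors are hypothetical, for illustration only.
REFERENCE = {
    "banana":   (220, 200, 60),
    "tomato":   (200, 40, 30),
    "zucchini": (60, 120, 40),
}

def mean_rgb(pixels):
    """Average (R, G, B) over a list of pixel tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def classify(pixels):
    """Pick the reference item whose color is closest (squared Euclidean)."""
    avg = mean_rgb(pixels)
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(avg, REFERENCE[name]))
    return min(REFERENCE, key=dist)

# A fake "image" of mostly-red pixels should match the red reference item.
print(classify([(195, 45, 25), (205, 35, 30), (198, 42, 33)]))
```

Color alone obviously fails on, say, green apples versus limes, which is why the imagined scanner would also need shape and texture cues, but it shows the basic idea is not exotic.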
Seven pages of links, but pretty light on content for an article, and frankly this article could have been written 15 years ago with much of the same content.
Image sensor cost has come way down (mostly), but performance is not that much better now than it was 10 years ago in absolute terms of what is readily available. Read rates have gone up, but that has little to do with the image sensors themselves and more to do with on-chip A/D integration.
License plate reading, etc. was being done 20 years ago for IVHS.
FYI, the camera-on-a-stick for endoscopy was first done about 20 years ago as well. The power requirements of NMOS sensors were not great for the application; CMOS brought power levels down enough to make it effective. Not sure what is meant by a "digital" image sensor ... integration of the A/D?
@Jack.L, thank you for your comment, and your points are well taken.
Indeed, much of machine vision technology has been around for a long time. That said, over the years there has been a good deal of steady advancement in high dynamic range pixels, global-shutter CMOS image sensors, and fast, accurate column A/D converters.
At a time when some of the machine vision technologies already well exploited on factory floors are getting out onto the streets with the progress of ADAS, I figured it was time for a quick reflection on where CMOS image sensors have been.
@Junko, it has already been proven on many TV cop shows that image resolution is irrelevant. It has been replaced by the 'enhance image' button that can extract a perfect image from the grainiest source - usually just in time to catch the perpetrator...
Seriously, there are limits to the technology. I got several traffic tickets from San Francisco toll bridges (I live and spend most of my time in San Diego) because someone up there had a partially obstructed license plate that got resolved to the number on my wife's car. The tickets were forgiven once a real human took a look at the images. I suspect that a good number of people have expectations driven more by TV shows than by reality.
@Larry, I couldn't agree with you more. It always cracks me up when I watch one of those shows -- where a fuzzy picture becomes suddenly crystal clear once treated by some sort of a machine...what happened to the principle of "garbage in, garbage out"?
It will be interesting to see a CMOS image sensor with image processing capability built into it. This could have a direct USB-type interface with H.265-and-beyond output. With this, the user needs to do very little selection and can employ the device to solve their applications. This would be much more helpful to industrial users.
Currently, there are several manufacturers of chip-on-a-stick technology for endoscopy. Sony is not one of them. However, of those few, only one has an actual digital imaging system; the others are analog. Recently a company in the US introduced a high-power illumination system and processing board for intense medical and industrial endoscopy using the digital chip-on-a-stick camera.
The focus on the sensor and analytic software is of course important, but the lens systems could use some attention too. Along those lines, I recently read about some researchers who created a method for making low-cost camera lenses with high magnification from the same polymer used to make contact lenses. Attaching such a lens to a smartphone turned it into a high quality microscope.