@Bert, I further checked on this issue with embedded vision expert Jeff Bier.
Here's what he came back with:
- Compression is typically important if you're storing a lot of video, or transmitting it over distances. In ADAS and similar applications, neither of these are typically done.
- Compression might still be useful (e.g., to reduce the bandwidth of data transmitted from a small camera assembly on the windshield). In that case, system designers will have to carefully consider the impact of compression artifacts on their recognition and tracking algorithms. Since recognition and tracking algorithms are complex, they might just decide to avoid compression so they don't have to deal with this issue.
I'm generally with you, boblespam, that you have to do compression in a way that avoids macroblocking artefacts. But I'm saying that's not hard to do for the purposes of a parking vision algorithm. You would start by prefiltering the image before M-JPEG compression (you probably would not want to use MPEG, to avoid the time lag), down to something of good SD quality. Then you set up the encoder parameters to give you good SD-quality compression, having taken away the risk of overwhelming the encoder (which is what causes macroblocking).
Then, when the image has been decoded, the algorithm should not notice macroblocking, any more than human vision does, as long as the encoder is not being over-stressed.
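To make the prefilter-then-encode argument concrete, here is a minimal numerical sketch (Python with NumPy). The 8x8 DCT plus a uniform quantizer is only a stand-in for a real M-JPEG encoder, and the box blur stands in for the prefilter; all names and parameter values here are illustrative assumptions, not anyone's actual pipeline:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis, as used in JPEG's 8x8 block transform."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def roundtrip(img, q=50.0):
    """Encode/decode each 8x8 block with a crude uniform quantizer of step q."""
    d = dct_matrix()
    out = np.empty_like(img)
    for y in range(0, img.shape[0], 8):
        for x in range(0, img.shape[1], 8):
            blk = img[y:y+8, x:x+8]
            coef = d @ blk @ d.T
            out[y:y+8, x:x+8] = d.T @ (np.round(coef / q) * q) @ d
    return out

def box_prefilter(img, k=3):
    """Simple k x k box blur, standing in for the prefilter step."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# A noisy test scene: smooth ramp plus sensor-like noise
rng = np.random.default_rng(1)
h, w = 64, 64
scene = np.tile(np.linspace(0.0, 100.0, w), (h, 1)) + rng.normal(0, 20, (h, w))

err_raw = np.abs(roundtrip(scene) - scene).mean()
pref = box_prefilter(scene)
err_pref = np.abs(roundtrip(pref) - pref).mean()
```

In this toy setup the prefiltered image comes back from the quantizer with a much smaller round-trip error, which is the sense in which prefiltering "takes away the risk of overwhelming the encoder": the high-frequency content that would otherwise be mangled by quantization is removed before the encoder ever sees it.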
My point being, these are matters of fine-tuning, not something that is absolutely essential. This reminds me a little of the early days of Internet broadband to homes, when some telcos made a big deal of the fact that they were assigning static IP addresses to broadband users. Well, that's fine, but there's nothing intrinsic in broadband that requires the use of static IP vs DHCP assignments. It's just a decision someone might have made, which turned out, in probably all cases, to have been a temporary decision anyway, and not too much should have been made of it.
I understand seat belts and think that they are useful... ABS should be optional; I don't think it is mandated... Are you saying that park assist will be compulsory for all cars? That makes very little sense to me... Kris
When you uncompress an MPEG video, you don't get the same pictures you had before compression; you get squares of variable size. Those squares have edges, and most of the vision algorithms we use are based on edge detection. The edge detection algorithms will produce false positives on those artefact edges. That is the problem.
We can filter out those false positives, but it costs additional processing time. We can also use a lower compression rate, but then we need more bandwidth and may not be able to use automotive Ethernet anymore. And why pay an MPEG license fee for a poor compression rate?
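The false-positive mechanism described above is easy to demonstrate with a toy sketch (Python with NumPy; the detector and images are purely illustrative assumptions). A smooth ramp image contains no edges above threshold, but after simulating coarse per-block DC quantization, which is the essence of macroblocking, a simple gradient-based detector fires all along the 8x8 block boundaries:

```python
import numpy as np

def edge_map(img, thresh=4.0):
    """Crude gradient-magnitude edge detector (stand-in for Sobel/Canny)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = np.diff(img, axis=1)
    gy[:-1, :] = np.diff(img, axis=0)
    return np.hypot(gx, gy) > thresh

# Smooth horizontal ramp: no true edges anywhere
h = w = 64
img = np.tile(np.linspace(0.0, 100.0, w), (h, 1))

# Simulate macroblocking: snap each 8x8 block's mean (DC) to a coarse grid
blocky = img.copy()
step = 16.0
for y in range(0, h, 8):
    for x in range(0, w, 8):
        blk = blocky[y:y+8, x:x+8]
        dc = blk.mean()
        blk += np.round(dc / step) * step - dc  # in-place shift of the block

# Every edge found in the blocky image is a false positive
false_pos = edge_map(blocky).sum() - edge_map(img).sum()
```

The detector reports zero edges on the original ramp but fires along the block boundaries after quantization, which is exactly the false-positive load a real pipeline would then have to filter out at extra processing cost.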
Lossless compression is also a solution, when used with scene optimization (working only on a smaller part of the captured scene).
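As a toy illustration of that scene-optimization idea (Python, using the stdlib `zlib` merely as a stand-in for a real lossless video codec; the frame contents and region boundaries are made-up assumptions): losslessly compressing only the region of interest can beat compressing the whole frame, because the bandwidth budget is spent only where the detail is.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# Simulated 8-bit grayscale frame (720 x 1280): mostly flat, with a
# detail-rich horizontal band where the interesting scene content sits
frame = np.full((720, 1280), 90, dtype=np.uint8)
frame[300:420, :] = rng.integers(0, 256, (120, 1280), dtype=np.uint8)

# Losslessly compress the entire frame
full_bytes = zlib.compress(frame.tobytes(), level=6)

# Scene optimization: losslessly compress only the region of interest
roi = frame[300:420, :]
roi_bytes = zlib.compress(roi.tobytes(), level=6)
```

The ROI stream is smaller than the full-frame stream, and both are lossless, so the vision algorithm sees pixels identical to the sensor output; only the transported area shrinks.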
Moreover, however, I wonder if that has anything to do with machine vision -- the way certain computer vision algorithms work, they may prefer video data to remain uncompressed.
Maybe so, Junko, but I don't understand how that can be the case. Ultimately, if you don't change anything about the vision algorithm at all, then once decompressed, the video can be fed to the vision algorithm as a signal identical to what it would have been had it never been compressed.
Think of a standard TV monitor. No matter whether the signal from your cable company, or over the air, is analog or compressed digital, it can be sent to the TV monitor as uncompressed Y/Pr/Pb, or RGB, or even as uncompressed digital HDMI. So the only hurdle ought to be, as far as I can tell, to make sure the compression process doesn't remove essential detail.
Level of compression is configurable, so that potential problem should not have to exist. And anyway, for parking? Gimme a break.
My gut says that using motion JPEG, which has no interpolated or predictive frames, and therefore none of the time-lag-inducing "group of pictures" structure that MPEG has, ought to work. Except for the small processing lag, I don't see the major problem. I suppose in a further elaboration, one could develop vision algorithms that use the JPEG data itself, bypassing a JPEG decoding stage. But that should not be assumed as the only way to do this, no?
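On the idea of running vision on the JPEG data itself: one well-known compressed-domain trick is to read off each block's DC coefficient. For an orthonormal 8x8 DCT, the DC coefficient is 8x the block mean, so a 1/8-scale image falls out with no inverse transform at all. A hedged sketch (Python with NumPy; this fakes the entropy-decode stage and is illustrative only, not a claim about any shipping codec):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def block_dct(img):
    """Per-8x8-block DCT coefficients, shaped (rows/8, cols/8, 8, 8)."""
    d = dct_matrix()
    h, w = img.shape
    blocks = img.reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
    return d @ blocks @ d.T

rng = np.random.default_rng(2)
img = rng.uniform(0, 255, (64, 64))
coefs = block_dct(img)  # what a JPEG decoder holds before the inverse DCT

# DC-only "decode": coef[0,0] of an orthonormal 8x8 DCT equals 8 * block mean,
# so a 1/8-scale thumbnail is available without inverse-transforming anything
thumb = coefs[:, :, 0, 0] / 8.0
```

A coarse detection or tracking stage could run on such a DC thumbnail directly, deferring the full inverse DCT to the regions that actually need pixel-level scrutiny.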
@Bert, I, too, was surprised to hear carmakers' preference -- as to compressed vs uncompressed. Before this interview, I'd heard two different versions of the story. In fact, last fall, I distinctly remember one ST engineer telling me that no carmakers want compressed video.
As you pointed out, it may have something to do with the lag -- the time it takes for compression. Moreover, however, I wonder if that has anything to do with machine vision -- the way certain computer vision algorithms work, they may prefer video data to remain uncompressed.
@DrQuine. I would agree with you. I, too, would have thought that any of these extra electronics could be easily absorbed by carmakers' markup. And yet, the more chip vendors I talk to, the more emphatically they tell me how car OEMs really like to nickel-and-dime them. Every penny counts, and therefore, something like BroadR-Reach would really appeal to them... so I was told.