IMO USA car companies see electronics as a way to boost profits by adding high-margin features to what is otherwise a low-margin product. What they fail to realize is that as new cars get more expensive, fewer people can afford them. Indeed, young people paying back student loans won't be in the market for a car for years or decades after graduation, and many have discovered that living and working in an urban area with reasonable public transportation beats life in the car-oriented suburbs. On the few occasions when they need to bring home something heavy, there's car-share.
It's ironic that the Microchip thread is talking about how technology has made ICs so cheap that nobody can make money from them, while cars -- which have been around for well over 100 years -- keep getting more and more expensive. Maybe the IC makers need to convince consumers (or government safety agencies) that all ICs need air bags or something :-)
Hi Kris. The US government requires seat belts, air bags, and antilock brakes. It just added a requirement for backup cameras starting in 2018. The government is very involved in requiring new safety features in cars.
I understand seat belts and think they are useful... ABS should be optional; I don't think it is mandated... Are you saying that park assist will be compulsory for all cars? That makes very little sense to me. Kris
I appreciate that car vendors seek to upsell their customers to the ultimate package of bells and whistles. Audio equipment components in a car may cost 10x what they cost as consumer goods at home. That said, I'm wondering what the electronic component costs are for park assist and to what degree hardware cost reductions make any meaningful difference. Aren't most of the costs of these features the car company's markup and mandatory bundling of other features to sell the total premium package?
@DrQuine. I would agree with you. I, too, would have thought that any of these extra electronics could be easily absorbed by carmakers' markup. And yet, the more chip vendors I talk to, the more emphatically they tell me how car OEMs really like to nickel-and-dime them. Every penny counts, and therefore something like BroadR-Reach would really appeal to them... or so I was told.
I would need to be convinced that even a self-parking system would require uncompressed video. The objects that need to be detected for this self-parking feature, even down to the size of a pebble, can easily be resolved with motion JPEG or MPEG, at data rates far lower than what uncompressed HD video would require. I think the biggest concern in a self-parking system would be to keep the lag created by compression as low as required. That can be tuned in the algorithm.
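To put rough numbers on those data rates (illustrative figures only: I'm assuming a 720p30 camera with YUV 4:2:2 sampling and a ~10:1 motion-JPEG ratio, not quoting any spec sheet):

```python
# Back-of-envelope bandwidth comparison for automotive camera links.

def video_bitrate_mbps(width, height, fps, bits_per_pixel):
    """Raw (uncompressed) video bitrate in Mbit/s."""
    return width * height * fps * bits_per_pixel / 1e6

# Uncompressed 720p at 30 fps, YUV 4:2:2 (16 bits/pixel) -- assumed format
raw = video_bitrate_mbps(1280, 720, 30, 16)

# Assume a modest ~10:1 ratio for good-quality motion JPEG
mjpeg = raw / 10

broadr_reach = 100  # BroadR-Reach link rate, Mbit/s

print(f"uncompressed 720p30: {raw:.0f} Mbit/s")    # 442 Mbit/s
print(f"M-JPEG at ~10:1:     {mjpeg:.0f} Mbit/s")  # 44 Mbit/s
print(f"fits on BroadR-Reach: {mjpeg < broadr_reach}")
```

Even with these conservative assumptions, raw HD overruns a 100 Mbit/s BroadR-Reach link several times over, while modest compression fits comfortably.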
As to cabling, I'm actually surprised that automotive Ethernet wouldn't use shielded cat-5e, as opposed to unshielded, although clearly, unshielded is preferable because it's cheaper. All depends on what the EMI requirements are.
For self-parking you need computer vision algorithms, which get confused by compressed data. Also, time lag is an issue (partially corrected with Ethernet timestamping).
For park assistance, no computer vision algorithms are involved -- only video enhancement, fish-eye correction, and different kinds of video stitching (2D and/or 3D mapping). Those routines work well with compressed video.
We don't want to use shielded cables because they are expensive and less robust (temperature, vibration, wear) than twisted pair. That's why BroadR-Reach was created: only one pair for bidirectional Ethernet, controlled signal rise/fall times for lower radiated emissions, and increased robustness to ESD and induced perturbations. Same wire pair as for the CAN bus.
BroadR-Reach is new; in the meantime we use LVDS over shielded cables for uncompressed video. APIX is an enhanced LVDS link for high data rates in an automotive environment (with modulated side channels for I2C-like communication).
@Bert, I, too, was surprised to hear the carmakers' preference for compressed over uncompressed. Until this interview, I'd heard two different versions of the story. In fact, last fall I distinctly remember one ST engineer telling me that no carmakers want compressed video.
As you pointed out, it may have something to do with the lag -- the time compression takes. Moreover, I wonder if it has anything to do with machine vision -- certain computer vision algorithms prefer the video data to remain uncompressed.
Maybe so, Junko, but I don't understand how that can be the case. Ultimately, if you don't change anything about the vision algorithm at all, then once decompressed, the video can be provided to the vision algorithm via a signal identical to what it would have been had it never been compressed.
Think of a standard TV monitor. No matter whether the signal from your cable company, or over the air, is analog or compressed digital, it can be sent to the TV monitor as uncompressed Y/Pr/Pb, or RGB, or even as uncompressed digital HDMI. So the only hurdle ought to be, as far as I can tell, to make sure the compression process doesn't remove essential detail.
Level of compression is configurable, so that potential problem should not have to exist. And anyway, for parking? Gimme a break.
My gut says that using motion JPEG, which has no interpolated or predictive frames, and therefore no time-lag-inducing "group of pictures" as MPEG would experience, ought to work. Except for the small processing lag, I don't see the major problem. I suppose in a further elaboration, one could develop vision algorithms that use the JPEG data itself, bypassing a JPEG decoding stage. But that should not be assumed as the only way to do this, no?
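To sketch the lag argument with numbers (a simplified back-of-envelope model at an assumed 30 fps; real encoder pipelines add their own processing delays on top of this):

```python
# Simplified latency model: M-JPEG vs MPEG with B-frames.

fps = 30
frame_ms = 1000 / fps  # one frame period: ~33.3 ms

# M-JPEG: every frame is encoded independently, so the structural
# delay is roughly one frame of buffering (plus encode time).
mjpeg_latency = 1 * frame_ms

# MPEG with an IBBP group of pictures: each B-frame can't be encoded
# until the *next* reference frame arrives, so two consecutive
# B-frames add about two extra frame periods of reordering delay.
b_frames = 2
mpeg_latency = (1 + b_frames) * frame_ms

print(f"M-JPEG:      ~{mjpeg_latency:.0f} ms")  # ~33 ms
print(f"MPEG (IBBP): ~{mpeg_latency:.0f} ms")   # ~100 ms
```

So the GOP-free codec keeps structural latency to a single frame period, which is the point being made above.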
When you decompress an MPEG video, you don't get the same pictures as before compression; you get squares of variable size. Those squares have edges, and most of the vision algorithms we use are based on edge detection. The edge-detection algorithms will produce false positives on those artefact edges. That is the problem.
We can filter out those false positives, but that costs more processing time. We could also use a lower compression rate, but then we need more bandwidth and may not be able to use automotive Ethernet anymore. And why pay an MPEG license for a poor compression rate?
Lossless compression is also a solution, when used with scene optimization (working only on a smaller part of the captured scene).
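The false-positive effect described above can be illustrated with a toy sketch (an assumed setup, crudely mimicking coarse block quantization by averaging 8x8 blocks -- not a real codec):

```python
import numpy as np

h, w, block = 64, 64, 8

# A smooth horizontal ramp: no real edges anywhere in the scene.
img = np.tile(np.linspace(0, 255, w), (h, 1))

# Crude stand-in for heavy DCT quantization: replace each 8x8 block
# by its mean, producing flat blocks with steps at the boundaries.
quant = img.copy()
for y in range(0, h, block):
    for x in range(0, w, block):
        quant[y:y+block, x:x+block] = img[y:y+block, x:x+block].mean()

def edge_pixels(frame, thresh=10):
    """Count pixels whose horizontal gradient exceeds a threshold --
    a naive stand-in for an edge detector."""
    grad = np.abs(np.diff(frame, axis=1))
    return int((grad > thresh).sum())

print("edges in original: ", edge_pixels(img))    # 0
print("edges in quantized:", edge_pixels(quant))  # 448 false edges
```

The ramp's gradient never exceeds the threshold, but the block-quantized version shows a step at every 8x8 boundary -- exactly the artefact edges an edge detector would flag as false positives.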
I'm generally with you, boblespam, that you have to do compression in such a way as to avoid macroblocking artefacts. But I'm saying that's not hard to do for the purposes of parking vision algorithms. You would start by prefiltering the image before M-JPEG compression (you probably would not want to use MPEG, to avoid the time lag), down to something of good SD quality. Then you set up the encoder parameters to give you good SD-quality image compression, having taken away the risk of overwhelming the encoder (which is what causes macroblocking).
Then, when the image has been decoded, the algorithm should not notice macroblocking, any more than human vision does, as long as the encoder is not being over-stressed.
My point being, these are matters of fine-tuning, not something that is absolutely essential. This reminds me a little of the early days of Internet broadband to homes, when some telcos made a big deal of the fact that they were assigning static IP addresses to broadband users. Well, that's fine, but there's nothing intrinsic in broadband that requires the use of static IP vs DHCP assignments. It's just a decision someone might have made, which turned out, in probably all cases, to have been a temporary decision anyway, and not too much should have been made of it.
@Bert, I further checked on this issue with embedded vision expert Jeff Bier.
Here's what he came back with:
- Compression is typically important if you're storing a lot of video or transmitting it over long distances. In ADAS and similar applications, neither is typically done.
- Compression might still be useful (e.g., to reduce the bandwidth of data transmitted from a small camera assembly on the windshield). In that case, system designers will have to carefully consider the impact of compression artifacts on their recognition and tracking algorithms. Since recognition and tracking algorithms are complex, they might just decide to avoid compression so they don't have to deal with this issue.