Excellent article, Junko, which I think explains exactly how Ethernet is being and will be introduced into cars, just as it has been in many other control-system environments, in spite of early stage fright among system designers.
I don't think the number of Ethernet switches in the car is a good metric for anything, though. A more important number would be the number of Ethernet hosts, i.e. end systems, connecting into the Ethernet network(s). An Ethernet "switch," which refers to a layer-two "multiport bridge," is nothing more than a fan-out device in Ethernet networks. That's all. You do gain in survivability and reliability if you create a relatively dense mesh of switches, interconnected by multiple links, providing redundant paths that can take over almost instantly if one breaks.
My main point, however, is that there is no good correlation between the number of switches and the number of Ethernet end systems. I prefer many switches with not too many ports each; others tend to go for fewer, bigger switches.
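The survivability argument above can be illustrated with a toy reachability check. This is a minimal sketch with a hypothetical four-switch ring (the switch names and topology are made up for illustration): cut one link and traffic still gets through on the redundant path.

```python
from collections import deque

def reachable(adj, src, dst):
    """Breadth-first search: is dst reachable from src in the adjacency map?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical four-switch ring: each switch links to two neighbours.
mesh = {
    "sw1": {"sw2", "sw4"},
    "sw2": {"sw1", "sw3"},
    "sw3": {"sw2", "sw4"},
    "sw4": {"sw3", "sw1"},
}

# Break the sw1-sw2 link; traffic still reaches sw2 via sw4 and sw3.
mesh["sw1"].discard("sw2")
mesh["sw2"].discard("sw1")
print(reachable(mesh, "sw1", "sw2"))  # True: the redundant path survives
```

In a real car network the failover would of course be handled by a redundancy protocol rather than a rerun of BFS, but the topology argument is the same: a ring or mesh of small switches tolerates a single link failure, while a star around one big switch does not.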
Thanks, Bert. You raise some good points about "Ethernet switches" not being necessarily a good metric.
I think these two gentlemen who were asked questions happen to be in the business of developing automotive Ethernet switch chips -- thus they related their answers more to the potential market for their product.
Excellent point. Multiple small switches might provide better fault tolerance than one big interconnected switch. A daisy chain is probably not a good idea, for sure. Will the in-vehicle network open up new opportunities for routing technology?
"safety demands are paramount, a long product development cycle is a given" captures the main roadblock for in-vehicle networks.
Looking at it from that angle: if a safety feature that demands high-bandwidth communication is getting a lot of attention from consumers, automakers will certainly add it together with the infotainment. The question is what that feature is; an HD front camera to capture the incident in case of an accident? ;)
@chanj, make no mistake: HD front, rear, and side cameras featured in a car will be driving this, for safety applications. In fact, the high-bandwidth demand in automotive is real and even more pressing, because automotive companies don't even want to "compress" the video streams captured by HD cameras!
Junko, when I hear things like "don't want to compress video streams" or "must have isochronous network," I flat don't believe it. Sounds close to knee-jerk position statements, bound to evolve as people design these things.
Uncompressed HD video would require about 1.5 Gb/s for just one black-and-white stream. A Gigabit Ethernet network wouldn't even be able to handle a single B&W camera.
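Bert's figure can be sanity-checked with quick arithmetic. This sketch counts active pixels only; the exact wire rate depends on frame rate, bit depth, and whether blanking intervals are carried (serial links such as HD-SDI carry blanking and 10-bit samples, which pushes the rate higher):

```python
def raw_bitrate_mbps(width, height, fps, bits_per_pixel):
    """Raw (uncompressed) video bit rate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

# 1080p monochrome at 8 bits per pixel:
print(raw_bitrate_mbps(1920, 1080, 30, 8))   # ~498 Mb/s at 30 fps
print(raw_bitrate_mbps(1920, 1080, 60, 8))   # ~995 Mb/s at 60 fps

# 1080p60 colour at 4:2:2, 10-bit samples (20 bits/pixel) is far
# beyond what Gigabit Ethernet can carry:
print(raw_bitrate_mbps(1920, 1080, 60, 20))  # ~2488 Mb/s
```

Either way the conclusion holds: one uncompressed HD colour camera alone saturates a Gigabit Ethernet link.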
Junko & Bert: good points. I do agree it will require much more than 1.5 Gbps to broadcast color video. Bert is also right to bring up a metric that matters: the number/redundancy of hosts. At a bare minimum, there need to be two or more, depending on the location in the vehicle and the corresponding failure rates (FITs).
Regarding uncompressed video: most industrial Ethernet cables & connectors (M6, M12) are rated for Gigabit Ethernet, which will not be sufficient for color video. Then again, video used for navigation can be of lower resolution to fit the available bandwidth.
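The "lower resolution fits" point can also be put in numbers. A rough sketch (the 5% framing-overhead allowance is an illustrative assumption, not a measurement):

```python
def raw_mbps(width, height, fps, bits_per_pixel):
    """Uncompressed bit rate in Mb/s for a given frame geometry."""
    return width * height * fps * bits_per_pixel / 1e6

# Rough usable Gigabit Ethernet capacity after framing overhead
# (assumed ~5%; illustrative only):
GIGE_PAYLOAD_MBPS = 1000 * 0.95

resolutions = {"VGA": (640, 480), "720p": (1280, 720), "1080p": (1920, 1080)}
for name, (w, h) in resolutions.items():
    rate = raw_mbps(w, h, 30, 24)  # 24-bit RGB at 30 fps
    verdict = "fits" if rate < GIGE_PAYLOAD_MBPS else "exceeds"
    print(f"{name}: {rate:.0f} Mb/s {verdict} Gigabit Ethernet")
```

So even uncompressed 720p colour at 30 fps fits a Gigabit link, while 1080p colour does not; a navigation camera doesn't need full HD.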
Thanks, Docdivakar. Another thing we need to keep in mind is that those HD video streams used in automotive safety apps are, I assume, not for humans to watch pretty pictures but for machine vision -- to judge whether there is any "danger."
We have had to deal with this issue several times at CogniVue when developing products with our customers. While you are right that the machine-vision aspects do not need the high resolution, some applications, such as an enhanced backup camera, do still need the "pretty pictures" sent to the infotainment console for display. The result of the vision processing (object detection, distance estimation, etc.) needs to be overlaid on top of the displayed image.
H.264 is too lossy and suffers from frame latency, as @pmundhenk pointed out, so MJPEG is still preferred in automotive applications. But any amount of compression (i.e., loss) is problematic if you need to make safety decisions on the camera sensor data. That is why our existing CV220X processor family focuses on processing close to the sensor, not requiring high-bandwidth data transmission across the car chassis. We had to design with the size and power limitations this approach requires.
"H.264 is too lossy and suffers from frame latency as @pmundhenk pointed out"
Latency is not an issue for H.264: our H.264 encoder core has a latency of only 16 video lines (roughly 0.5 ms for 1080p30), and only because it takes at least 16 video lines to form a 16x16 macroblock; otherwise the real latency of our encoder would be ~3000 clock cycles.
This is with P-frames and bitrate control. Our core is used in automotive.
I'm not sure how it is currently used. Initially the customer used it to encode the input of a camera, transport it to a decoder, and display it on a screen. The total encoder+buffer+decoder latency was much less than a frame, because they also transported the clock.
This was a few years ago; the design has since been qualified for automotive and is in production as an ASIC.
In general, if you have high bandwidth, the latency can be reduced, and in any case, at least for our encoder, it would make no difference compared with a JPEG core (from the latency point of view).
Whether it's good enough after decoding for vision algorithms, I'm not sure. Generally H.264 will compress better than JPEG, but our core is Baseline, so it's only YUV 4:2:0 at 8 bits/sample. We have been asked by automotive customers whether the sample precision could be increased to 10 or 12 bits (probably for vision algorithms) but never got enough commitment to actually do it.
If you are happy with luminance only at 8 bits/sample, then it should be better than equivalent JPEG. We also have an I-frame-only version that uses no external DRAM.
Anyway, basic datasheets of the cores are in the download section of our web page:
Thank you, Junko, for a nice article shedding a bit of light on the usually top-secret automotive Ethernet domain. Also, thanks everyone for the interesting discussion. In fact, I registered just now to contribute here.
I am currently building an infotainment system for an academic research vehicle (TUM CREATE EVA, www.tumcreate.edu.sg). We are also including HD cameras (front & back). We stumbled across a few problems when using compressed streams, depending on the stream format. With H.264 streams, there is always a certain buffer needed to decode the stream, and that buffer comes with a delay. Especially for real-time parking cameras this is deadly.
I could imagine that this is where car manufacturers are coming from when asking for uncompressed video. Their requirement might be a short delay.
We instead use MJPEG coding, which allows us to reduce the stream delay to a minimum.
All in all: once understood, it is certainly possible to combine the requirements of the automotive industry with technologies already on the market, transferring real-time video while using little bandwidth.
One more comment regarding Ethernet switches, as this also touches on my research: the number of switches is indeed not a good measure; however, comparing Ethernet to CAN, one might consider the switch overhead of the communication system. Those devices of course add power consumption and weight, both unwanted by the automotive industry.
However, I am fully with Bert in using many smaller switches rather than one big switch, especially for in-vehicle networks. One big switch would require more and longer heavy cables, while multiple smaller switches introduce latency. One needs to weigh these factors and decide on a good structure.
@Chan: Daisy-chains are not necessarily bad, they can actually bring advantages, when networked correctly with the rest of the system. Pure daisy-chains I would avoid though. And I fully agree, there will be new topologies and routing strategies necessary for vehicles! The need is definitely there.
One last comment in my own interest: we will be unveiling our purpose-built electric taxi EVA at the Tokyo Motor Show 2013 in November. Feel free to come by the TUM CREATE booth if any of you are there. I would love to discuss these topics in more detail!
The comment turned out to be a bit longer than intended...
The delays are of course related to the buffer size of the decoder, which depends heavily on the reliability of the network, the speed of the decoding machine, and many other factors. In our tests these delays varied around 1-2 seconds on strong desktop computers and over 3 seconds on mobile devices. We did not do any more accurate measurements, as this already defeated the purpose of a park-assist camera for us.
As I run a university program for a semiconductor company, email me privately at firstname.lastname@example.org and I may be able to get you some devices, etc., if needed for your research.
Having worked in the automotive arm of a major FPGA supplier, I can vouch for how long it takes to get a new technology accepted: 5-10 years or so. I started calling on automotive customers in 1995, and FPGAs were too expensive and too "new" for automotive. Now they are almost (but not quite) mainstream.
Thank you very much for your offer! I forwarded this to my colleagues, and we are evaluating your portfolio. I will let you know if we need something.
Yes, I agree. Though the speed of the automotive industry is picking up with regard to electronics and software, it is overall still a business with a rather traditional mindset. I see this as a chance for new players in the market, though. Tesla has shown what can be done if you tackle the current problems from another perspective. I haven't sat in one of their cars myself, but from images, videos, etc., as well as the data sheets, it looks very impressive. I think the automotive industry needs this push to bring its electronics up to and beyond the level of current consumer electronics.
Good point about the delay introduced by MPEG, pmundhenk. But even without going to Motion JPEG, the MPEG delays are adjustable.
The problem is that in order to get the most compression, MPEG creates as few full intra frames (I-frames) as possible. It inserts predicted and interpolated frames between the I-frames, which of course means you need a long buffer spanning the gap between two I-frames. The longer that interval, the better the coding efficiency. However, this can be tuned down, reducing the delay at the cost of compression efficiency.
I think your approach of going to MJPEG is essentially the same thing as doing away with the predicted and interpolated frames of MPEG altogether, and it should still result in at least 10:1 compression, or thereabouts.
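The buffering trade-off described above can be sketched with a toy GOP model. This is illustrative only (a simplified GOP with only I, P, and B frame types, assuming 30 fps; real codecs allow more elaborate reference structures):

```python
FPS = 30
FRAME_MS = 1000.0 / FPS  # ~33.3 ms per frame at 30 fps

def gop_display_order(b_frames, ref_groups):
    """Display order of a toy GOP: one I-frame, then groups of
    B-frames, each group followed by the P-frame it depends on."""
    order = ["I"]
    for _ in range(ref_groups):
        order += ["B"] * b_frames + ["P"]
    return order

print(gop_display_order(2, 2))  # ['I', 'B', 'B', 'P', 'B', 'B', 'P']

# A B-frame cannot be decoded until the following reference frame
# arrives, so the decoder buffers roughly (B + 1) frame periods:
print((2 + 1) * FRAME_MS)  # ~100 ms of reorder delay with 2 B-frames
```

Dropping the B-frames entirely (an IP-only or I-only stream, which is what MJPEG amounts to) collapses that reorder delay to a single frame period.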
Exactly! A carefully adjusted MPEG compression should reach even better results, as higher image quality can be combined with lower bandwidth. This optimization also needs to take into account the end devices that decode the stream: if these have limited processing power and no additional decoding hardware, information might actually be lost because the decoding is not fast enough. This is in most cases a minor concern, especially with smaller buffers (i.e., less calculation), but one should keep it in mind as well. I think when the overall architecture is well known, one should be able to parametrize MPEG so that the delay introduced by the buffer stays below 120 ms and is thus not noticeable by humans. That should be a reasonable value for in-vehicle cameras that are only shown to the driver; if the camera images are used for computations, one probably needs an even smaller delay.
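As a back-of-the-envelope check of that 120 ms budget: if the reorder delay is roughly (B + 1) frame periods (a simplification that ignores network and decoder processing time), the largest allowed B-frame run follows directly:

```python
import math

def max_b_frames(budget_ms, fps):
    """Largest run of consecutive B-frames whose reorder delay,
    roughly (B + 1) frame periods, still fits the latency budget."""
    frame_ms = 1000.0 / fps
    return max(0, math.floor(budget_ms / frame_ms) - 1)

print(max_b_frames(120, 30))  # 2 B-frames fit a 120 ms budget at 30 fps
print(max_b_frames(120, 60))  # 6 B-frames fit at 60 fps
```

So at 30 fps the encoder can afford at most two B-frames between references before the reorder delay alone approaches the budget, which is why short-GOP or IP-only configurations are attractive for camera displays.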
And you are right, MJPEG is essentially MPEG restricted to I-frames only, with no P or B frames. It encodes every frame independently and can use JPEG decoders, which have been heavily optimized for processors. This gives a big advantage, as it does not make sense to replay a delayed frame anyway. Similar to VoIP transmissions.