Automotive and industrial companies have a diverse range of applications they expect to address over time. For example, within a year BMW is expected to roll out some cars that use single-pair, 100 Mbit/s Ethernet using the AVB protocols to connect driver-assistance cameras, replacing existing LVDS links.
However, car makers generally need support for 250 microsecond latencies to use Ethernet in engine control and safety-critical systems. Those apps will require the new protocols now in the works.
Today’s 100M chips are generally sufficient for linking end points in a car. But over the next few years car makers will need Gbit/s links for their backbone nets and future high-res, long-distance cameras.
“In the short term they could use existing Gbit Ethernet components with some difficulty because they are not intended for that nasty automotive EMI environment, so we have to shield the hell out of the cables,” said Teener. “That could be ok if it’s just the backbone with one or two cables per car, but they’d like to move to one single pair of cheap off-the-shelf cables,” he said.
Cabling represents the third heaviest and second most expensive class of components in cars today, he noted.
By contrast, “industrial guys wanted lower latency and Gbit yesterday,” pushing toward single-digit microsecond latencies for some apps, said Teener. “They are very aggressive about saying they want this now,” he added.
By the way, in case we lose perspective about what these tolerances actually mean, let's not forget that at the speed of light through wire or fiber, an infinitely short timing pulse takes about 5 nanoseconds to travel just 1 meter.
So if you think you have timing tolerances that severe, you're not just going to be hardwiring, you're also going to be compensating for the length of wire.
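To put rough numbers on that, here is a minimal sketch. It assumes signals propagate at about two-thirds the speed of light, a typical velocity factor for twisted pair and fiber; the exact factor varies by cable type.

```python
# Rough propagation-delay figures, assuming a velocity factor of ~0.66
# (typical for twisted pair and fiber; varies by cable type).
C = 3.0e8               # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66  # assumption, not a spec value

def propagation_delay_ns(length_m: float) -> float:
    """Nanoseconds for a signal edge to traverse length_m of cable."""
    return length_m / (C * VELOCITY_FACTOR) * 1e9

print(round(propagation_delay_ns(1.0), 1))   # ~5 ns per meter
print(round(propagation_delay_ns(10.0), 1))  # a 10 m harness run adds ~50 ns of skew
```

At nanosecond tolerances, even a few meters of cable-length mismatch is a measurable error term, which is why length compensation comes into play.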
Two points, assuming existing Ethernet tools are available.
One is that if you have a message with very tight latency constraints, you architect your network so it never has to go through a huge number of intervening switches. For instance, in a car situation, you'd never engineer the network to send time-sensitive messages through eight hops.
Another point is that messages with tight timing constraints get to the head of the queue in any switch, by virtue of their high priority. Even if they get to the queue immediately after a full-length 1500-byte message, you're only talking 12 usec of latency. Or in a 10G Ethernet, 1.2 usec.
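The worst-case head-of-line blocking above is just serialization delay: the time to clock one maximum-length frame onto the wire. A quick sketch of the arithmetic (payload bytes only, ignoring preamble and inter-frame gap):

```python
# Worst case for a high-priority frame that arrives just after a
# full-length frame has started transmitting (no pre-emption):
# it waits one serialization delay of that frame.
def serialization_delay_us(frame_bytes: int, link_bps: float) -> float:
    """Microseconds to clock one frame onto the wire."""
    return frame_bytes * 8 / link_bps * 1e6

print(serialization_delay_us(1500, 1e9))   # 12 us at 1 Gbit/s
print(serialization_delay_us(1500, 10e9))  # 1.2 us at 10 Gbit/s
```

So each factor-of-ten speed increase shaves the blocking penalty by the same factor, which is the "cheap speed" argument in a nutshell.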
Finally, if you do have incredibly tight tolerance requirements for a particular message, down in the nanosecond range, then I would argue that the actual cases of this are going to be few. So you hard-wire the timing pulse, in these few cases, between clock and destinations. Then you send whatever message contains the data separately, over the Ethernet network, to be synced up at the client system with that timing pulse. And you use a Kalman or other filter to determine exactly where you are now.
Although support for pre-empting packets may seem like a good idea, it will cost latency for many current network protocols. Many switches today support cut-through operation, but once pre-emption is added, cut-through switches will need to fall back to store-and-forward operation, even in speed-reduction cases, because a large packet could be pre-empted by a time-sensitive flow near the end of its transmission. For example, in an eight-hop 1 Gbit/s network, the cut-through latency of a 1518-byte packet (12.288 us) becomes a store-and-forward latency of 101.248 us, roughly an 8x latency increase.
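The shape of that penalty can be sketched to first order. This is a simplified model that ignores per-hop processing time, preamble and inter-frame gap, so its figures differ somewhat from those quoted above; the 64-byte header lookahead for cut-through is an assumption, not a spec value.

```python
# First-order comparison of cut-through vs. store-and-forward latency
# across multiple hops (simplified: no per-hop processing, preamble or IFG).
def store_and_forward_us(frame_bytes, link_bps, hops):
    # Each switch must receive the whole frame before forwarding it,
    # so the full serialization delay is paid at every hop and on the
    # final link.
    one_frame = frame_bytes * 8 / link_bps * 1e6
    return one_frame * (hops + 1)

def cut_through_us(frame_bytes, link_bps, hops, header_bytes=64):
    # Each switch forwards after reading only the header; the full
    # serialization delay is paid just once, on the last link.
    # header_bytes=64 is an assumed lookahead, not a standard figure.
    header = header_bytes * 8 / link_bps * 1e6
    one_frame = frame_bytes * 8 / link_bps * 1e6
    return header * hops + one_frame

print(round(cut_through_us(1518, 1e9, 8), 2))        # ~16 us
print(round(store_and_forward_us(1518, 1e9, 8), 2))  # ~109 us
```

Whatever the exact per-hop overheads, the multi-hop store-and-forward case multiplies the serialization delay by the hop count, which is where the order-of-magnitude gap comes from.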
Real time control requires "speed" and "predictability". If responses are reliably received in a defined time frame, a process can be programmed accordingly. The unexpectedly early and especially the late responses are the ones that will create the challenges. If this effort is successful, perhaps the proliferation of microprocessors will evolve into a central processor with a number of peripheral sensors (and less complex programming).
That's also true, IMO, especially if you have the really tight timing constraints only in short messages, which make up a small fraction of the total network load. VLANs with high priority should solve the problem, and it's up to the implementer to figure out how best to organize the different priority queues in the switches.
Another point is that the Ethernet 1500-byte payload limit works in your favor as is, if you're concerned with wedging high-priority traffic in among large, lower-priority messages. Small ATM cells did much the same thing, only now we're talking about far faster network speeds. It amounts to the same thing.
Just going faster doesn't always work ... it certainly helps, but both the automotive and industrial networks are pushing data fast enough that over-provisioning is not enough. Over-provisioning *and* careful scheduling *and* engineering the network to avoid the interference caused by long packets can work (and is used today), but both the industrial and automotive industries are moving toward "converged" networks where such assumptions can no longer be made ... hence, the effort to standardize the methods for low-level QoS.
Déjà vu all over again. It must be decades ago now that I saw the first proposals for introducing synchronism into Ethernet. The historical trend has been, forget that, just throw more speed at the problem. If you want to add synchronism to 100M Ethernet, instead of struggling with that, just use 1G Ethernet.
Automobile networks have been very low bitrate in the past. Going to Ethernet, as factories have been doing, provides the opportunity to increase speed by easily a couple of orders of magnitude, without having to invent anything new, and without incurring prohibitively high costs. That's usually enough extra speed that the latencies and jitter will become acceptable. (Yes, I will accept that under-hood temperatures in cars will require MIL-SPEC-like components.)
Plus, the other aspect of this is, clever protocol and application design can also eliminate the perceived need for super tight timing tolerances in the network. For example, if your asynchronous network is way faster than it "needs" to be for just carrying the data load, then you can implement some small amount of message buffering and still meet the tolerances of your control system. Those message buffers will reduce jitter to the point of being acceptable. And a very fast network can fill the buffers quickly enough to meet the latency requirements.
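The buffering idea can be illustrated with a toy playout-buffer model. The delay values and buffer size below are made-up numbers for illustration only; the point is that a fixed playout deadline larger than the worst-case network delay converts jittery arrivals into perfectly regular deliveries.

```python
import random

# Toy illustration (synthetic delays, not measurements): a playout buffer
# holds each sample until a fixed deadline, trading a little latency for
# zero jitter at the control system.
random.seed(1)
PLAYOUT_US = 50  # fixed deadline, chosen to exceed worst-case delay (assumption)

network_delays = [random.uniform(5, 40) for _ in range(10)]  # jittery, in us
# Each sample is held in the buffer until the playout deadline:
delivery_times = [max(d, PLAYOUT_US) for d in network_delays]

# Every sample is now delivered at exactly PLAYOUT_US after it was sent.
assert all(t == PLAYOUT_US for t in delivery_times)
```

The cost is the fixed 50 us of added latency, which is why this trick only works when the network is much faster than the raw data load requires.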
Back in the days of ATM, it was fun to try to eke out QoS from relatively slow networks. But pragmatism, even back in the 1990s, won out. Cheap speed trumps super clever solutions. My bet is, the same will apply to automotive Ethernet.