By the way, in case we lose perspective about what these tolerances actually mean, let's not forget that a signal propagating through wire or fiber (at roughly two-thirds the speed of light in vacuum) takes about 5 nanoseconds to travel just 1 meter, no matter how short the timing pulse is.
So if you think you have timing tolerances that severe, you're not just going to be hardwiring, you're also going to be compensating for the length of wire.
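To make that concrete, here's a minimal sketch of what compensating for wire length looks like, assuming the ~5 ns/m propagation figure above; the cable lengths and function name are illustrative:

```python
# Sketch: propagation-delay compensation for a hardwired timing pulse.
# Assumes ~5 ns/m signal propagation in copper or fiber (roughly
# two-thirds the speed of light in vacuum). Wire lengths are made up.

NS_PER_METER = 5.0  # approximate propagation delay per meter

def compensation_ns(wire_length_m: float) -> float:
    """Delay to account for at the receiver so nodes on different
    cable runs see the pulse at (nearly) the same instant."""
    return wire_length_m * NS_PER_METER

# Two sensors at different cable runs from the clock source:
print(compensation_ns(3.0))   # 15.0 ns
print(compensation_ns(12.5))  # 62.5 ns
```

So a 9.5 m difference in cable runs already amounts to ~47.5 ns of skew, which is why nanosecond-class tolerances force you to think about the wire itself.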
Two points, assuming the Ethernet tools that exist today.
One is that if you have a message with very tight latency constraints, you architect your network so it never has to go through a huge number of intervening switches. For instance, in a car situation, you'd never engineer the network to send time-sensitive messages through eight hops.
Another point is that messages with tight timing constraints get to the head of the queue in any switch, by virtue of their high priority. Even if they arrive at the queue immediately after a full-length 1500-byte frame has started transmitting, you're only talking 12 usec of added latency on 1G Ethernet. Or on 10G Ethernet, 1.2 usec.
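Those numbers fall straight out of the serialization time of one maximum-length frame; a quick sketch (the function name is mine):

```python
# Sketch: worst-case queueing delay for a high-priority frame that
# arrives just after a full-length frame has started transmitting.
# It must wait at most one serialization time of that frame.

def serialization_delay_us(payload_bytes: int, link_bps: float) -> float:
    """Time to clock payload_bytes onto a link of link_bps bits/s."""
    return payload_bytes * 8 * 1e6 / link_bps

print(serialization_delay_us(1500, 1e9))   # 12.0 usec on 1G Ethernet
print(serialization_delay_us(1500, 10e9))  # 1.2 usec on 10G Ethernet
```

(This ignores preamble and inter-frame gap, which add a fraction of a microsecond, but the order of magnitude is the point.)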
Finally, if you do have incredibly tight tolerance requirements for a particular message, down in the nsec range, then I would argue that the actual cases of this are going to be few. So you hard-wire the timing pulse, in these few cases, between clock and destinations. Then you send whatever message contains the data separately, over the Ethernet network, to be synced up at the client system with that timing pulse. And you use Kalman or other filtering to determine exactly where you are now.
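As one possible shape for that filtering step, here's a minimal one-dimensional Kalman-style filter tracking the offset between the local clock and the hardwired pulse. The noise parameters and class name are illustrative assumptions, not a spec:

```python
# Sketch: scalar Kalman-style filter for the offset between the local
# clock and a hardwired timing pulse. q and r are made-up tuning values.

class OffsetFilter:
    def __init__(self, q: float = 1e-3, r: float = 1.0):
        self.x = 0.0   # estimated offset (ns)
        self.p = 1e6   # estimate variance (start uncertain)
        self.q = q     # process noise: clock drift between pulses
        self.r = r     # measurement noise: timestamping jitter

    def update(self, measured_offset_ns: float) -> float:
        self.p += self.q                        # predict step
        k = self.p / (self.p + self.r)          # Kalman gain
        self.x += k * (measured_offset_ns - self.x)
        self.p *= (1.0 - k)
        return self.x

f = OffsetFilter()
for m in (5.2, 4.8, 5.1, 5.0):
    est = f.update(m)
# est converges toward ~5 ns despite the jittery measurements
```

The point is that the cheap asynchronous network carries the data, while the hardwired pulse plus a small filter recovers the precise timing.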
Although "support for pre-empting packets" may seem like a good idea, it will cost latency with many current network protocols. Many switches today support cut-through operation, but once pre-emption is added, cut-through switches will need to fall back to store-and-forward operation, even in speed-reduction cases. This is because a large packet could be pre-empted near the end of its transmission by a time-sensitive flow. For example, on an eight-hop 1G network, a 1518-byte packet's cut-through latency of 12.288 usec becomes a store-and-forward latency of 101.248 usec, nearly a 10x increase.
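A simplified model shows where that roughly 10x gap comes from. This sketch ignores per-hop processing, preamble, and propagation delay, so it approximates rather than reproduces the figures quoted above; the 64-byte header assumption is mine:

```python
# Sketch: simplified multi-hop latency model contrasting cut-through
# with store-and-forward switching. Ignores per-hop processing time,
# so numbers only approximate the figures in the comment above.

def serialization_us(frame_bytes: int, link_bps: float) -> float:
    return frame_bytes * 8 * 1e6 / link_bps

def cut_through_us(frame_bytes: int, hops: int, link_bps: float,
                   header_bytes: int = 64) -> float:
    # Each switch forwards after reading the header, so the frame's
    # full serialization time is paid only once, on the final link.
    return (serialization_us(frame_bytes, link_bps)
            + hops * serialization_us(header_bytes, link_bps))

def store_and_forward_us(frame_bytes: int, hops: int,
                         link_bps: float) -> float:
    # Each switch must receive the entire frame before forwarding it,
    # so the full serialization time is paid on every link.
    return (hops + 1) * serialization_us(frame_bytes, link_bps)

print(cut_through_us(1518, 8, 1e9))        # ~16 usec
print(store_and_forward_us(1518, 8, 1e9))  # ~109 usec
```

The full-frame serialization time is paid once with cut-through but nine times (eight hops plus the final link) with store-and-forward, hence the near-10x blowup.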
Real time control requires "speed" and "predictability". If responses are reliably received in a defined time frame, a process can be programmed accordingly. The unexpectedly early and especially the late responses are the ones that will create the challenges. If this effort is successful, perhaps the proliferation of microprocessors will evolve into a central processor with a number of peripheral sensors (and less complex programming).
That's also true, IMO. Especially if you have the really tight timing constraints only in short messages, which make up a small fraction of the total network load. VLANs with high priority should solve the problem. And it's up to the implementer to figure out how best to organize the different priority queues in the switches.
Another point is that the Ethernet 1500-byte payload limit works in your favor as-is, if you're concerned with wedging high-priority traffic in among large messages at lower priority. The small ATM cells accomplished much the same thing, only now we're talking about far faster network speeds. It amounts to the same thing.
Just going faster doesn't always work ... it certainly helps, but both the automotive and industrial networks are pushing data fast enough that over-provisioning is not enough. Over-provisioning *and* careful scheduling *and* engineering the network to avoid the interference caused by long packets can work (and is used today), but both the industrial and automotive industries are moving toward "converged" networks where such assumptions can no longer be made ... hence, the effort to standardize the methods for low-level QoS.
Déjà vu all over again. It must be decades ago now that I saw the first proposals for introducing synchronism into Ethernet. The historical trend has been, forget that, just throw more speed at the problem. If you want to add synchronism to 100M Ethernet, instead of struggling with that, just use 1G Ethernet.
Automobile networks have been very low bitrate in the past. Going to Ethernet, as factories have been doing, provides the opportunity to increase speed by easily a couple of orders of magnitude, without having to invent anything new, and without incurring prohibitively high costs. That's usually enough extra speed that the latencies and jitter will become acceptable. (Yes, I will accept that under-hood temperatures in cars will require MIL-SPEC-like components.)
Plus, the other aspect of this is, clever protocol and application design can also eliminate the perceived need for super tight timing tolerances in the network. For example, if your asynchronous network is way faster than it "needs" to be, for just carrying the data load, then you can implement some small amount of message buffering and still meet the tolerances of your control system. Those message buffers will reduce jitter to the point of being acceptable. And a very fast network can fill the buffers quickly enough to meet the latency requirements.
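That buffering idea can be sketched as a simple playout (de-jitter) buffer: messages arrive with variable network delay, and the consumer releases them at a fixed offset behind the sender's timestamps, trading a small, known latency for low jitter. The numbers and function name here are illustrative:

```python
# Sketch: a simple playout (de-jitter) buffer. Arrivals carry variable
# network delay; releasing each message at send_time + buffer_ms turns
# variable delay into a fixed, predictable one. Numbers are made up.

def playout_times(arrivals, buffer_ms):
    """arrivals: list of (send_time_ms, arrival_time_ms) pairs.
    Returns the time each message is released to the application."""
    out = []
    for send, arrive in arrivals:
        target = send + buffer_ms          # fixed playout deadline
        out.append(max(arrive, target))    # early packets wait; late ones slip
    return out

# Jittery arrivals (0.2-1.5 ms network delay) smoothed to a fixed 2 ms offset:
arrivals = [(0, 0.3), (10, 11.5), (20, 20.2), (30, 31.0)]
print(playout_times(arrivals, buffer_ms=2))  # [2, 12, 22, 32]
```

Note the trade: the faster the network, the smaller `buffer_ms` can be while still absorbing the jitter, which is exactly the "cheap speed" argument.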
Back in the days of ATM, it was fun to try to eke out QoS from relatively slow networks. But pragmatism, even back in the 1990s, won out. Cheap speed trumps super clever solutions. My bet is, the same will apply to automotive Ethernet.