Rick, I don't think any application involving humans "in the loop" is a good example of the need for very low latency. The mere fact that the human is in the loop typically creates seconds of slop right there, or at least hundreds of milliseconds. So whether the network adds a few hundred microseconds, or even a few milliseconds, hardly matters.
Here's another example. When you have to parallel two AC generators, as you do on board ships or at other power-generating stations, you have to close the breakers of the second (or third, or whatever) generator at precisely the right time. The generators have to be in phase when they are paralleled; otherwise, you run the risk of tearing them right off their foundations.
To automate the process, the "close breakers" signal has to be sent within a very tight time window, while the 60 Hz sine waves of the two generators are in phase. Not something that can be allowed to vary by hundreds of milliseconds.
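To put a rough number on that window (the 15-degree phase tolerance below is just an illustrative figure I picked, not a spec from anywhere):

FREQ_HZ = 60.0                # line frequency
PERIOD_S = 1.0 / FREQ_HZ      # one full cycle is about 16.7 ms
PHASE_TOLERANCE_DEG = 15.0    # assumed acceptable phase error at breaker close

# Convert the phase tolerance into a timing window (plus/minus)
window_s = (PHASE_TOLERANCE_DEG / 360.0) * PERIOD_S

print(f"One 60 Hz cycle: {PERIOD_S * 1e3:.2f} ms")
print(f"+/-{PHASE_TOLERANCE_DEG:.0f} deg closing window: +/-{window_s * 1e3:.3f} ms")

That works out to roughly +/-0.7 ms, so hundreds of milliseconds of network jitter would blow right past it.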
Right now most of the apps in these industries start out on FPGAs and then move to ASICs as volume ramps -- some move to ASICs sooner because of soft-error issues or other performance factors. The protocols are often custom; only apps like HFT can get away with running their packets over a standard like UDP. Low-latency routers and switches also matter, with each industry rolling its own.
It's not difficult to imagine that any type of control system, which one would expect to incorporate negative feedback loops, would require low latency if those feedback signals and environmental parameters are carried over digital networks. Such systems can be turned into oscillators, for example, if latency becomes excessive.
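Here's a toy sketch of that oscillator effect (my own made-up numbers, purely to illustrate): a proportional controller driving a simple first-order plant, where the measurement only reaches the controller after some network delay.

def simulate(delay_samples, steps=120, gain=2.0, setpoint=1.0):
    plant = 0.0
    measurements = [0.0] * (delay_samples + 1)       # readings still "in flight"
    trace = []
    for _ in range(steps):
        measured = measurements[-(delay_samples + 1)]  # stale value the controller sees
        control = gain * (setpoint - measured)
        plant += 0.2 * (control - plant)               # crude first-order plant response
        measurements.append(plant)
        trace.append(plant)
    return trace

for delay in (0, 2, 8):
    tail = simulate(delay)[-30:]
    print(f"delay = {delay} samples -> late-time swing = {max(tail) - min(tail):.3f}")

With no delay the loop settles quietly; with the same gain and eight samples of delay the output swings harder and harder -- the network latency has turned the control loop into an oscillator.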
Imagine the problem of aiming a gun barrel on a pitching and rolling ship, or a tank moving over rough terrain. Once out of the barrel, the round is purely ballistic, so the precise attitude of the platform at the instant of firing must be known to the fire control system. Latency introduces errors, and the more random the motion, the less effective clever math filtering is at reducing them.
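To put rough numbers on it (these are made-up figures, just to show the scale of the problem):

import math

roll_rate_deg_s = 10.0    # assumed platform roll rate
latency_s = 0.020         # assumed staleness of the attitude data
range_m = 10_000.0        # assumed range to target

pointing_error_deg = roll_rate_deg_s * latency_s
miss_m = math.radians(pointing_error_deg) * range_m

print(f"pointing error: {pointing_error_deg:.2f} deg")
print(f"miss distance at {range_m / 1000:.0f} km: ~{miss_m:.0f} m")

Twenty milliseconds of latency against a 10 deg/s roll is already 0.2 degrees of pointing error, or about 35 meters of miss at 10 km, before any other error source is counted.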
Just one example that I think is intuitively obvious.