The number and types of sensors are growing at an exponential rate, but that proliferation will also lead to many false alarms and unforeseen system-oversight implications.
The "check engine" light in my car recently came on, which could indicate anything from a minor issue to the engine being ready to burn up. A quick check with my on-board diagnostics code scanner showed that the light was related in some way to the engine coolant. The driver-console (dashboard) temperature indicator still showed "normal" engine temperature, though, and it turned out that the problem was not with the powertrain itself, but instead, it was a faulty sensor in the coolant system.
To me, the modern super-sensored car is a forerunner of the much-heralded Internet of Things (IoT). Indeed, for many car models, it is an IoT scenario as the car is connected back to the manufacturer to report various readings, status, and events, like it or not. Whether the car is directly connected as an IoT node or not, today's cars are sensor-laden vehicles with self-awareness of many temperature, pressure, flow, and switch-on/-off readings. We're told that this is all good.
But I'm not so sure. As any experienced engineer knows, sensors are the most vulnerable part of a system. Due to their inherent role, sensors are exposed to the nasty real world of moisture, vibration, temperature, and other physical stresses to a greater or lesser extent. Sometimes the exposure is directly due to what is being monitored, but often it is a side effect of monitoring some other parameter. Regardless of the cause, sensors live a much harder life than the electronics on the typical PC board, even if that board is in an automotive environment.
The problem is that as we add IoT sensors to everything, we'll be seeing more false positives and negatives that will be harder and harder to test and assess. Soon, planning for how to test the veracity and credibility of the many readings and alarm indicators will become a larger part of the project.
That's a problem, but only part of it. Assessing sensors and testing their readings is difficult. The options are limited or simply unattractive. You can add redundant sensors and interface circuits, but that adds to cost, weight, power, and space burdens, and you still need a way to decide which of the two sensors is correct. Or perhaps you need to add two extra sensors, then employ the classic "two-out-of-three" voting scheme? Of course, all that additional redundancy also adds, in a perverse way, to the potential for reliability issues along the sensor-related signal chain.
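For analog readings, the classic two-out-of-three vote reduces to picking the median of the three sensors, provided at least two of them agree within some tolerance. Here's a minimal sketch of that idea; the function name and tolerance parameter are my own illustration, not any standard API:

```python
def two_out_of_three(a: float, b: float, c: float, tolerance: float) -> float:
    """Two-out-of-three (2oo3) voting on redundant analog readings.

    For analog values, taking the median implements the vote: the
    median always lies within the pair of readings that agree.  If no
    two readings agree within `tolerance`, there is no majority and
    the vote fails.
    """
    low, mid, high = sorted([a, b, c])
    # A majority exists only if at least two readings agree.
    if (mid - low) > tolerance and (high - mid) > tolerance:
        raise ValueError("no two sensors agree; cannot form a majority")
    return mid
```

For example, with readings of 99.8, 100.1, and 250.0 (one sensor clearly stuck high) and a tolerance of 1.0, the vote returns 100.1 and the faulty reading is outvoted. Note that the voter itself is now one more thing that can fail, which is exactly the "perverse" reliability point above.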
The other option is to implement a true, independent, closed-loop test of the sensor performance. For some situations, this is a very practical choice. For example, you can direct a motor to a specified position or speed and then see if the sensor reading agrees with the directed action. If they do, it's fairly likely that both actuator and sensor are good; if they disagree, something is not right and needs to be checked further. However, this sort of stimulus/response scenario is totally impractical for many sensor variables or settings (such as a temperature or pressure reading), for obvious reasons.
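The motor example above can be sketched as a simple command-then-verify routine. The callables `move_to` and `read_position` here are hypothetical stand-ins for whatever actuator and sensor interfaces a real system provides:

```python
def closed_loop_check(command_position: float,
                      move_to,
                      read_position,
                      tolerance: float) -> bool:
    """Stimulus/response sanity check of an actuator/sensor pair.

    Command the actuator to a known position, then confirm that the
    sensor reports the commanded value within `tolerance`.  A pass
    suggests both actuator and sensor are working; a fail says only
    that *something* in the loop is wrong and needs further checking.
    """
    move_to(command_position)          # stimulus: drive the actuator
    measured = read_position()         # response: read it back
    return abs(measured - command_position) <= tolerance
```

The weakness, as noted, is that this only works where you can apply a stimulus at all; you can't command a coolant temperature to a setpoint just to exercise its sensor.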
I suspect that the proliferation of IoT-based sensor nodes, or their non-IoT equivalent, will have several unintended consequences. First, there will be a tendency to ignore many of the alarms as alarm overload becomes an issue. While perhaps not a good practice, it is a normal human reaction to a constant stimulus of this type, especially when so many of them turn out to be false.
Second, test engineers will have to spend even more time devising algorithms that correlate multiple sensor readings to see if they can "tease out" a better conclusion as to which readings are correct and which are in error. In applications with multiple sensors, it’s not unusual for one out-of-range reading to be somewhat related to others. For example, overheating (a temperature reading) could be related to insufficient coolant level or flow-rate indications. But these correlations and linkages are not easy to establish and require significant simulation, modeling, and especially system-level understanding, which is increasingly difficult to achieve as systems increase in complexity with subtle interactions and relationships.
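The overheating example hints at what such a correlation rule might look like in its simplest form. This sketch cross-checks a temperature alarm against an independent coolant-flow reading; the threshold values and classifications are purely illustrative assumptions, and a real system would need far more nuance:

```python
def classify_overheat_alarm(temperature_c: float,
                            coolant_flow_lpm: float,
                            temp_alarm_c: float = 110.0,
                            min_flow_lpm: float = 5.0) -> str:
    """Cross-correlate two readings to judge an overheat alarm.

    The idea: an overheat reading is more believable when an
    independent sensor (coolant flow) shows a plausible cause.  An
    overheat alarm with normal flow is more likely a faulty
    temperature sensor.  Thresholds here are illustrative only.
    """
    overheating = temperature_c > temp_alarm_c
    low_flow = coolant_flow_lpm < min_flow_lpm
    if overheating and low_flow:
        return "alarm: overheating consistent with low coolant flow"
    if overheating:
        return "suspect: temperature sensor may be faulty"
    if low_flow:
        return "warning: low coolant flow, watch temperature"
    return "normal"
```

Even this toy version shows the difficulty: the rule embeds a system-level assumption (low flow causes overheating) that someone has to establish, validate, and maintain as the design evolves.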
Have you had any experience with too much IoT-like sensor information causing excessive false alarms, unnecessary system shutdowns, or the wholesale ignoring of alarms? Are you concerned that too many sensors can be overwhelming? What about the burden that all of these sensor readings will put on test development?
— Bill Schweber is an electronics engineer who has written three textbooks on electronic communications systems, as well as hundreds of technical articles, opinion columns, and product features.