IoT designers need to learn how to integrate entire databases of “perceptual information” from data-rich sensors into future products.
Smart sensors, widely distributed throughout our network of devices, are becoming the eyes and ears of the Internet of Things: connected devices that measure, monitor, transmit, and control vast amounts of vital data, wirelessly and in an always-on mode.
With so many eyes and ears, and with so much constantly updated system data to work with, tomorrow’s designers will need to learn how to integrate entire databases of “perceptual information” from data-rich sensors into future products. This vast profusion of sensor data presents formidable technical and engineering challenges all by itself.
So designers will be happy to know that an A-team of researchers, Grenoble-based authors James L. Crowley and Yves Demazeau, recently published an authoritative, forward-looking paper that addresses these issues: Principles and Techniques for Sensor Data Fusion.
The paper presents perception "as a process of dynamically maintaining a model of the local external environment," noting that the "fusion of perceptual information is at the heart of this process."
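To make that idea concrete, here is a minimal sketch, in C, of a perception loop that dynamically maintains a model of a single measured quantity: a one-dimensional Kalman-style predict/update cycle. The noise values and the sample readings are invented for illustration; they are not from Crowley and Demazeau, whose paper develops the principles far more generally.

#include <stdio.h>

/* Minimal 1-D predict/update loop illustrating perception as the
 * dynamic maintenance of a model of the environment. All constants
 * and readings below are fabricated for illustration only. */

typedef struct {
    double x; /* current estimate of the measured quantity */
    double p; /* uncertainty (variance) of that estimate   */
} model_t;

/* Prediction step: with no motion model, the estimate carries over,
 * but its uncertainty grows by the process noise q. */
static void predict(model_t *m, double q)
{
    m->p += q;
}

/* Update step: blend a new sensor reading z (with variance r) into
 * the model, weighted by the gain k. */
static void update(model_t *m, double z, double r)
{
    double k = m->p / (m->p + r);  /* how much to trust the reading */
    m->x += k * (z - m->x);        /* correct the estimate          */
    m->p *= (1.0 - k);             /* uncertainty shrinks           */
}

int main(void)
{
    model_t m = { .x = 0.0, .p = 1.0 };           /* initial guess      */
    double readings[] = { 9.8, 10.2, 9.9, 10.1 }; /* fabricated samples */

    for (int i = 0; i < 4; i++) {
        predict(&m, 0.01);              /* process noise q   */
        update(&m, readings[i], 0.25);  /* sensor variance r */
        printf("estimate = %.3f (variance %.4f)\n", m.x, m.p);
    }
    return 0;
}

Each pass through the loop is one cycle of perception in the paper's sense: the model is carried forward, then corrected against fresh sensor evidence, so the estimate stays current as the environment changes.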
A word about this column: If you're like most engineers, you are forever on the lookout for fresh ideas and smart solutions. You have built, or are building, a library of technical papers, documents, data sheets, products, technologies, standards, and applications for handy future reference.
My goal here, and in future columns, is to explore some of the day’s greatest IoT design challenges, such as low-power microcomputing, signal conditioning, wireless communications, sensors, actuator control, more efficient power sources, and the man-machine interface, and to pass along some useful resources. I hope you will find them valuable enough to archive or share.
Most important: It would be great if we could collaborate. Send me relevant knowledge resources you have found so that we can compile them for easy access.
So what, exactly, is sensor fusion?
According to the comprehensive overview article Smarter Sensors Save Space and Power, sensor fusion combines the outputs of multiple sensors in a system to monitor complex or rapid movements accurately, for applications such as gesture control or body-motion capture in gaming and research.
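One common minimal form of this is a complementary filter, which blends a gyroscope's short-term rate data with an accelerometer's absolute but noisy tilt estimate. The sketch below shows the idea in C; the 100 Hz sample rate, the 0.98 blend weight, and the synthetic readings are assumptions chosen for illustration, not figures from the cited article.

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Complementary-filter sketch: fuse a gyroscope rate with an
 * accelerometer tilt angle to track orientation. All constants and
 * sample data are illustrative assumptions. */

#define DT    0.01  /* sample period, seconds (assumed 100 Hz)      */
#define ALPHA 0.98  /* trust in the integrated gyro vs. accel angle */

/* One filter step: integrate the gyro for short-term accuracy, then
 * pull the result toward the accelerometer's absolute tilt estimate
 * to cancel the gyro's slow drift. */
static double fuse(double angle_deg, double gyro_rate_dps,
                   double accel_x_g, double accel_z_g)
{
    double accel_angle = atan2(accel_x_g, accel_z_g) * 180.0 / M_PI;
    return ALPHA * (angle_deg + gyro_rate_dps * DT)
         + (1.0 - ALPHA) * accel_angle;
}

int main(void)
{
    double angle = 0.0;

    /* Fabricated readings: a device tilting at roughly 5 deg/s. */
    for (int i = 0; i < 100; i++) {
        double gyro = 5.0;                      /* deg/s           */
        double ax = sin(angle * M_PI / 180.0);  /* synthetic accel */
        double az = cos(angle * M_PI / 180.0);
        angle = fuse(angle, gyro, ax, az);
    }
    printf("fused tilt after 1 s: %.2f deg\n", angle);
    return 0;
}

The design choice is the classic fusion tradeoff: the gyro is smooth but drifts, the accelerometer is drift-free but jittery, and the weighted blend keeps the strengths of each.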
Depending on the application, sensor fusion may best be performed in the main processor, in an external sensor hub, or in the sensor itself. Factors such as power consumption, size constraints, battery lifetime, and available processing resources weigh most heavily on that decision.
The article describes new approaches to connecting sensors, along with new techniques embodied in the latest semiconductor products.
--Richard Wallace is a former editor in chief of EE Times. He has followed and reported on electronics, technology, and design for 40 years, most recently as an independent online journalist.