Today’s smartphones and tablets are collections of sensors as well as communications devices. But any developer who regularly uses sensors soon notices that the Android platform is not optimized for real-time sensor data acquisition. The challenge in developing an Android application is to effectively combine and use data from a variety of sensors to infer higher-level information about device users and their environment.
It is surprising how many physiological senses a mobile device can mimic. The camera, the most widely used sensor on a phone, allows a device to “see” the outside world, while the microphone lets the device “hear.” Many devices have multiple cameras and microphones for better spatial resolution.
Other enablers of device vision include light and proximity sensors; the cell, Wi-Fi, Bluetooth and GPS radios, meanwhile, give the device a sense of where it is.
Advanced mobile devices have even more sensors:
• The magnetometer allows a sense of direction, nominally pointing toward magnetic north.
• The accelerometer and gyroscope provide a sense of balance.
• The pressure and temperature sensors, as well as the touchscreen, build on the sense of touch.
The increasing diversity of sensors on smartphones encourages developers to go beyond games and augmented-reality apps. For example, pressure sensors enable a quicker GPS fix [1], while magnetometers assist in cell-tower handoff determination [2].
Though Android provides a framework for accessing raw data from such sensors at the application level, the existing fragmentation in sensor subsystems and the lack of reliable high-level information make it difficult to create sophisticated sensor applications. What’s more, the differences in sensor availability, capabilities and specifications can make for a support nightmare.
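To see that fragmentation firsthand, a developer can enumerate what a given handset actually provides. The sketch below is a minimal Java example, assuming it runs inside an Activity with the usual android.content, android.hardware and android.util imports; the log tag is arbitrary.

    // Minimal sketch: log every sensor this device exposes. Names, vendors
    // and specifications vary widely from handset to handset.
    void logSensorInventory() {
        SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
        for (Sensor s : sm.getSensorList(Sensor.TYPE_ALL)) {
            Log.i("SensorInventory", s.getName()
                    + " vendor=" + s.getVendor()
                    + " maxRange=" + s.getMaximumRange()
                    + " resolution=" + s.getResolution());
        }
    }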
To use Android’s framework to best advantage, hardware designers and application developers need to focus on virtual sensors, instead of raw sensor data; pay attention to the sampling architecture; and improve the platform with optimized hardware and algorithms.
Platforms that interpret sensor data to yield consistent results will improve developer adoption of sensors. Virtual sensors based on meaningful use cases will enable more-relevant apps.
Virtual sensors in Android
Virtual sensors bridge the gap between what can be physically measured and what conceptually matters to developers. They also provide a framework to mitigate differences across hardware platforms and thereby provide a more consistent experience.
For example, an accelerometer measures three-dimensional acceleration of the device in meters per second squared. Such acceleration includes user-generated motion and Earth’s gravitational field in the body frame. Some apps might only need to utilize gravity (for example, to guide a ball through a tilting labyrinth); others might need to utilize user motion (such as for analyzing a golf swing).
Algorithms and at least one additional, independent sensor are needed to disentangle those two components of the accelerometer measurement. Android then exposes the results to developers as two distinct virtual sensors: TYPE_GRAVITY and TYPE_LINEAR_ACCELERATION.
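In code, the split is transparent: an application simply subscribes to whichever virtual sensor matches its need. A minimal sketch, assuming the SensorManager sm from the earlier example and a SensorEventListener named listener:

    // Subscribe to the virtual sensors rather than decomposing raw
    // accelerometer data by hand.
    sm.registerListener(listener,
            sm.getDefaultSensor(Sensor.TYPE_GRAVITY),
            SensorManager.SENSOR_DELAY_GAME);
    sm.registerListener(listener,
            sm.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION),
            SensorManager.SENSOR_DELAY_GAME);
    // In onSensorChanged(), TYPE_GRAVITY events carry only the gravity
    // component, while TYPE_LINEAR_ACCELERATION events carry user motion
    // with gravity removed; summed, they reconstruct the raw signal.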
Virtual sensors are preferable when accessing sensor events because they offer the most transparent and useful result the system can provide for a particular feature. For example, though Android systems can use SensorManager.getOrientation() to discern the device orientation, that method requires the developer to start and sample the accelerometer and magnetometer.
In addition, SensorManager.getOrientation() uses the simple but slow Triad method to track orientation. It is better for an application to listen for the virtual sensor TYPE_ROTATION_VECTOR, which supplies the device orientation in quaternion form.
The quaternion can be used directly in OpenGL APIs such as glRotatef() or converted to yaw-pitch-roll using Android-provided methods such as getRotationMatrixFromVector() followed by getOrientation().
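A minimal listener along those lines might look like the following sketch (registration and error handling omitted):

    // Sketch: convert TYPE_ROTATION_VECTOR output (quaternion form) into
    // yaw-pitch-roll angles using the framework's helper methods.
    public class OrientationListener implements SensorEventListener {
        private final float[] rotationMatrix = new float[9];
        private final float[] angles = new float[3]; // azimuth, pitch, roll in radians

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
                SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
                SensorManager.getOrientation(rotationMatrix, angles);
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }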
Sampling architecture matters
Sensors are by nature high-interrupt, low-latency devices: every time a new measurement is available, the sensor interrupts the core with the information. Any delay in servicing and using the data can disrupt motion tracking or game response.
Since Android is not a real-time operating system (RTOS), some samples can be delayed, leading to incorrect timestamps, or even dropped when the core is busy. Java sensor sampling is the easiest to implement but introduces large timestamp uncertainty. For example, when the Dalvik garbage collector kicks in, sensor data can be lost for as long as 200 ms.
In addition, sampling nine axes of sensors at 100 Hz can consume more than 50 percent of CPU resources on a 1-GHz processor. That overhead makes the approach prohibitive for heavy sensor usage.
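The timestamp jitter is easy to observe directly. The sketch below, placed inside a SensorEventListener registered for the accelerometer, logs the interval between consecutive events; at a nominal 100 Hz the interval should hold steady near 10 ms, but under Java sampling it does not (the log tag is arbitrary).

    private long lastTimestampNs = 0;

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (lastTimestampNs != 0) {
            // Nominal 100-Hz sampling means ~10,000,000 ns between events;
            // garbage-collection pauses show up as much larger gaps.
            long deltaNs = event.timestamp - lastTimestampNs;
            Log.d("SensorJitter", "delta = " + deltaNs + " ns");
        }
        lastTimestampNs = event.timestamp;
    }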
Moving to Android native sensor sampling improves performance: It takes less than 10 percent of CPU resources to sample nine axes of sensors at 100 Hz, and the sampling is far more regular [3].
Furthermore, sampling sensors at the driver level (below the Android Sensor Manager) consumes less than 3 percent of CPU resources, because it can take advantage of true interrupt-based techniques. Applications that do not require instantaneous response can take advantage of first-in/first-outs (FIFOs), available on some sensors, to reduce interrupt overhead.
This is useful for platforms implementing background always-on processing in a low-power mode.
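Later Android releases expose the FIFO capability through a batching variant of registerListener(); this postdates the framework described here, so treat the sketch as illustrative. The extra maxReportLatencyUs argument lets events queue in the sensor's hardware FIFO before the core is interrupted:

    // Sketch: request batched delivery so accelerometer events may sit in
    // the sensor's hardware FIFO for up to one second before waking the
    // core. The latency hint is honored only if the sensor has a FIFO.
    sm.registerListener(listener,
            sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
            SensorManager.SENSOR_DELAY_GAME,
            1000000 /* maxReportLatencyUs */);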
Figure: Comparison of 100-Hz sensor sampling in Android using Java (left) and native access (right). Java sampling results in a large variation in the timestamp difference, sometimes yielding a negative time difference; native access is much more controlled, simplifying signal processing.
Use-optimized hardware and software
As applications incorporate more-sophisticated uses of sensor data, the fragmentation and sensor acquisition overhead become more apparent. Android systems need two improvements for the next generation of sensor applications:
• Simple hardware to assist real-time data acquisition. To support the broadest application requirements, smartphone system architecture can have supplemental hardware to assist in real-time sensor sampling. Hardware assistance can be a sensor hub, with dedicated computational resources for the sensors, or just a smart direct-memory-access (DMA) engine to perform the real-time data collection.
This is analogous to what hardware accelerators and graphics processing units (GPUs) do for gaming and video applications. Another option is to distribute critical portions of the high-interrupt computing into the microprocessors of the sensors themselves.
• Improved algorithms. These architecture changes require efficient algorithms. By combining sensor information, algorithms can also ensure seamless, repeatable performance across different sensor manufacturers and can overcome variations in sensor performance.
Even with perfectly designed sensors and core logic hardware, the performance of a sensor platform ultimately rests with intelligent algorithms and heuristics. As an analogy, consider how mature hardware acceleration has become for camera display and video processing; app developers still need algorithms to interpret the images for features such as facial recognition or landmark tagging. In the case of the motion sensors in Android, such interpretation takes the form of virtual sensors.
The role of mobile devices is quickly transforming from that of mobile computers to personal companions. Users want their mobile devices to be aware of their situation, anticipate their needs, and understand the contexts of their commands or requests. Reliable information from sensors is key in meeting these heightened consumer expectations.
Intelligent algorithms and optimal hardware designs will guarantee the reliability of virtual-sensor information so that developers have a uniform platform environment. New advances will enable new classes of utility.
Android, as defined today, is merely a start at standardizing a virtual-sensor framework. More-sophisticated algorithms and software will expand this framework so that mobile applications can:
• monitor user activities, such as running vs. walking;
• notice the device’s environment (indoors in an elevator, for example); and
• automatically react to users’ actions, such as turning off the backlight of the display when users are not looking at the screen.
Giving developers and platform providers access to interpreted sensor data will enable the transition of mobile devices into trusted personal companions.
References
1. “NemeriX partners with Bosch Sensortec to add vertical accuracy to GPS,” GPS Business News, June 2007.
2. L.S. Ravindranath, C. Newport, H. Balakrishnan and S. Madden, “Improving Wireless Network Performance Using Sensor Hints,” USENIX Symposium on Networked Systems Design and Implementation, Boston, March 2011.
3. Sensor Platforms Inc., “Getting More Reliable Sampling in Android Using Native Methods.”
About the author
Jim Steele is vice president of engineering at Sensor Platforms Inc. He has held senior management positions at Spansion, Polaris Wireless and ArrayComm. Steele co-authored The Android Developer’s Cookbook: Building Applications with the Android SDK. He holds a PhD in theoretical physics from the State University of New York at Stony Brook.