In our high-tech world we are surrounded by portable applications, including mobile phones, PDAs, laptops, medical instruments and measurement equipment. As portable applications become more diverse, segmented and personalized, one constant remains -- they are all powered by batteries.
Batteries are one of the most misunderstood power sources when it comes to predicting remaining system runtime. With an increasing number of portable applications come more critical operations to fulfill: mobile phones are used for account management, portable data loggers must remain functional for a full work shift, and medical equipment must monitor critical data without losing integrity.
This article discusses the importance of calculating the remaining capacity of a battery as accurately as possible. Unfortunately, this cannot be done by measuring just a few data points or even the battery voltage. Many factors, such as temperature, discharge rate, and cell aging, influence the state of charge. The article will focus on a new patented technique that enables designers to predict the state of charge and remaining capacity of Li-Ion cells.
Existing battery capacity monitoring methods
Currently, two methods are used -- one based on current integration and the other on voltage measurement. The first method relies on the robust idea that if we integrate all battery charge and discharge currents, we will always know how much energy is left. Integrating the current works especially well when the battery is freshly charged and the battery capacity at full charge is known. This seemingly bullet-proof approach is successfully used in most of today's battery gas gauges.
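The current-integration idea can be sketched in a few lines. This is a minimal illustrative model, not a production gas-gauge algorithm; the class name, capacity value, and sign convention (positive current = charging) are assumptions for the example.

```python
class CoulombCounter:
    """Toy coulomb-counting gas gauge: remaining charge is tracked
    by integrating measured current over time."""

    def __init__(self, full_capacity_mah: float):
        self.full_capacity_mah = full_capacity_mah
        self.remaining_mah = full_capacity_mah  # assume freshly charged

    def update(self, current_ma: float, dt_s: float) -> None:
        """Integrate current over an interval dt_s (seconds).
        Positive current charges the battery, negative discharges it."""
        self.remaining_mah += current_ma * dt_s / 3600.0
        # Clamp to physical bounds
        self.remaining_mah = max(0.0, min(self.remaining_mah,
                                          self.full_capacity_mah))

    def state_of_charge(self) -> float:
        return self.remaining_mah / self.full_capacity_mah


gauge = CoulombCounter(full_capacity_mah=2000.0)
gauge.update(current_ma=-500.0, dt_s=3600.0)  # one hour at 500 mA discharge
print(gauge.state_of_charge())  # → 0.75
```

Note that the accuracy of this scheme rests entirely on the assumption that `full_capacity_mah` is correct and that every current, including self-discharge, is captured by the measurement -- the weaknesses discussed next.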
However, it has its problems, particularly for usage patterns with long periods of inactivity. If the battery is charged and left unused for several days, or is never fully charged over several charge and discharge cycles, the self-discharge caused by internal chemical reactions becomes noticeable. This self-discharge cannot be measured directly, so it has to be corrected for with a predefined equation. Because different battery models have different self-discharge rates, which in turn depend on state of charge, temperature and the battery's cycling history, exact modeling of self-discharge requires a time-consuming data-collection effort and still remains quite imprecise.
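Such a predefined correction might look like the sketch below. The rate model and all of its coefficients are illustrative placeholders, not measured data for any real cell; they merely show the form such a correction takes (rate rising with state of charge and temperature).

```python
def self_discharge_rate_pct_per_day(soc: float, temp_c: float) -> float:
    """Hypothetical self-discharge model: rate grows with state of
    charge and roughly doubles per 10 degC rise (assumed coefficients)."""
    base = 0.1 + 0.2 * soc  # % of full capacity per day at 20 degC
    return base * 2.0 ** ((temp_c - 20.0) / 10.0)


def apply_idle_correction(remaining_mah: float, full_mah: float,
                          idle_days: float, temp_c: float) -> float:
    """Subtract estimated self-discharge losses for an idle period."""
    soc = remaining_mah / full_mah
    rate = self_discharge_rate_pct_per_day(soc, temp_c)
    loss_mah = full_mah * (rate / 100.0) * idle_days
    return max(0.0, remaining_mah - loss_mah)


# A fully charged 2000 mAh cell left idle for a week at 30 degC:
print(apply_idle_correction(2000.0, 2000.0, idle_days=7, temp_c=30.0))
# → 1916.0
```

Even this simple form makes the article's point clear: every coefficient must be characterized per battery model, and any mismatch between the model and the actual cell accumulates as gauging error during idle periods.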
Another problem with this method is that the value of total capacity is updated only if a full discharge happens immediately after a full charge. If full-discharge events are rare over the battery's life, the actually available capacity can decrease considerably before its value is updated by the gas gauge, resulting in overestimation of available capacity during these periods. Even if the capacity has recently been updated at a given temperature and discharge rate, the available capacity varies with changes in discharge rate and temperature.
The voltage-based method was one of the earliest to be applied because it requires only a voltage measurement across the battery terminals. This method is based on the known correlation between battery voltage and remaining capacity. It seems straightforward, but the catch is that the battery voltage correlates in a simple way with capacity only if no load is applied during the measurement. When a load is applied -- as it is in most cases when the user is interested in the capacity -- the battery voltage is distorted by the voltage drop across the internal impedance of the battery. Moreover, even when the load is removed, relaxation processes inside the battery continue to change the voltage for hours. Correcting the voltage drop based on knowledge of the battery impedance is problematic for multiple reasons that will be discussed further.
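The basic voltage-to-capacity lookup, with the naive IR-drop correction the text alludes to, can be sketched as follows. The OCV table and the fixed internal-resistance value are illustrative assumptions for a generic Li-Ion cell; in reality the impedance is neither constant nor purely resistive, which is exactly why this correction is problematic.

```python
# Open-circuit voltage (V) vs. state of charge for a generic Li-Ion
# cell -- illustrative values only, not characterization data.
OCV_TABLE = [(3.0, 0.0), (3.5, 0.1), (3.6, 0.2), (3.7, 0.5),
             (3.8, 0.7), (3.9, 0.8), (4.0, 0.9), (4.2, 1.0)]


def soc_from_ocv(ocv: float) -> float:
    """Linear interpolation in the OCV-to-SOC table."""
    if ocv <= OCV_TABLE[0][0]:
        return 0.0
    if ocv >= OCV_TABLE[-1][0]:
        return 1.0
    for (v0, s0), (v1, s1) in zip(OCV_TABLE, OCV_TABLE[1:]):
        if v0 <= ocv <= v1:
            return s0 + (s1 - s0) * (ocv - v0) / (v1 - v0)


def soc_under_load(v_terminal: float, current_a: float,
                   r_internal_ohm: float = 0.1) -> float:
    """Estimate SOC from the loaded terminal voltage by adding back
    the IR drop (current positive on discharge). Assumes a constant,
    purely resistive internal impedance -- the weak point noted above."""
    ocv_estimate = v_terminal + current_a * r_internal_ohm
    return soc_from_ocv(ocv_estimate)


# A cell reading 3.6 V under a 1 A discharge load:
print(soc_under_load(3.6, current_a=1.0))  # → 0.5 (estimated OCV 3.7 V)
```

Because the steep and flat regions of the OCV curve differ sharply, even a small error in the assumed internal resistance shifts the estimated SOC by a large amount in the flat region -- one concrete reason the drop correction is hard to get right.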