# Integer Arithmetic Rules!

Instead of using 5V, why don't we work in mV and use 5,000mV in place of 5V? We always know where the decimal point goes, so we can post-process the result to display the correct answer. The only implication is that the interim variable must be able to hold a large enough number. The initial product would be 6,170,000 (at 23 bits, this calls for a *long* data type in your typical C compiler), but the result would be 1,506mV (and, with a little additional software, we could round it to the nearest millivolt).
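
To make the arithmetic concrete, here is a minimal sketch in C. It assumes a 12-bit ADC (0-4095 counts spanning 0-5,000mV) and a reading of 1,234 counts, which reproduces the 6,170,000 intermediate product above; the function name and the rounding step are my own additions.

```c
#include <stdint.h>

/* Minimal sketch, assuming a 12-bit ADC referenced to 5V (0-4095 counts
 * mapping onto 0-5,000mV); the name and rounding are illustrative only. */
uint16_t adc_to_millivolts(uint16_t reading)
{
    /* Promote to 32 bits before multiplying: a reading of 1,234 gives
     * 1,234 * 5,000 = 6,170,000, which needs 23 bits and would overflow
     * a 16-bit int. */
    uint32_t product = (uint32_t)reading * 5000UL;

    /* Adding half the divisor before dividing rounds to the nearest mV. */
    return (uint16_t)((product + 2048UL) / 4096UL);
}
```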

All of this requires only a little preparation up front. Remember that the result of the division needs fewer bits than the intermediate product; also that you may be able to save memory by truncating the result to a suitable data type.

If you are still with me, then the next bit (pun unintended) is nothing more than an extension. In the real world, the range of operation never matches the full range of the ADC, as a result of offsets and/or design. In the industrial world, a common signal is the 4-20mA loop -- that is, the minimum signal is 4mA and the maximum is 20mA (a range of 16mA). This is frequently read across a 249Ω resistor, so the ADC sees a reading corresponding to 0.996V at the minimum and 4.980V at the maximum. In these circumstances you normally calibrate your inputs (see my blog on calibration) using a calibrator to generate a known 4mA (I_{min}) and 20mA (I_{max}). You take the reading at 4mA and save it (preferably in EEPROM) as N_{min}, while the reading at 20mA is saved as N_{max}.

By proportion on a straight-line graph, you can calculate the current from any reading N using the following:
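
I = I_{min} + (N - N_{min}) × (I_{max} - I_{min}) / (N_{max} - N_{min})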

You can work this out each time you need to know the current, but some of the terms are constant for a given set of calibration constants, and you can save them in non-volatile storage at the same time as you calibrate. For example, (I_{max} - I_{min}) will always be 16mA and can be treated as a constant, as can I_{min}. Similarly, (N_{max} - N_{min}) can be calculated and saved at calibration. I would recommend scaling the milliamps to microamps, so that no floating-point data or math operations are required to find the result.
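
Pulled together, the conversion might look something like the sketch below. This is only an illustration under my own assumptions: the structure, field names, and the `reading_to_microamps()` function are invented for the example, and the values would really come from your calibration routine.

```c
#include <stdint.h>

/* Hypothetical calibration record, written to EEPROM when the channel is
 * calibrated; the field names are mine. */
struct cal {
    uint16_t n_min;       /* ADC reading saved at 4mA */
    uint16_t n_span;      /* (N_max - N_min), saved at calibration */
    uint16_t i_min_uA;    /* 4,000µA */
    uint16_t i_span_uA;   /* 16,000µA, i.e. (I_max - I_min) */
};

/* Convert a raw reading to microamps using integer arithmetic only.
 * A production version would also clamp readings below n_min. */
uint32_t reading_to_microamps(uint16_t reading, const struct cal *c)
{
    uint32_t counts = (uint32_t)(reading - c->n_min);

    /* Worst case: 4,095 * 16,000 = 65,520,000, which fits in 32 bits. */
    return (uint32_t)c->i_min_uA + (counts * c->i_span_uA) / c->n_span;
}
```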

Sometimes you can be even smarter. Remember that a binary division by 2 is simply a shift to the right by one bit, while a multiplication by 2 is a 1-bit shift to the left. Higher powers of two simply require more shift operations (or a single multi-bit shift). Since we control the current calibration by using our own calibrator, what if we were to select I_{max} as 12.192mA and I_{min} as 4.000mA? In this case, the difference would be 8,192µA, and dividing by it is a simple 13-bit shift to the right.
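
The divisor (I_{max} - I_{min}) shows up when you go the other way, from a current back to counts (for example, when driving a calibrated output). A sketch of that direction, with assumed names and the 8,192µA span chosen above, might look like this:

```c
#include <stdint.h>

/* Sketch only: with I_min = 4.000mA and I_max = 12.192mA the current span
 * is exactly 8,192µA, so the division by (I_max - I_min) collapses to a
 * 13-bit right shift.  n_min and n_span are the saved calibration counts. */
uint16_t microamps_to_counts(uint32_t i_above_min_uA,
                             uint16_t n_min, uint16_t n_span)
{
    /* (i_above_min_uA * n_span) / 8192  is the same as  ... >> 13 */
    return (uint16_t)(n_min + ((i_above_min_uA * n_span) >> 13));
}
```

Most compilers will turn an unsigned division by a power-of-two constant into a shift for you anyway, but choosing the span deliberately is what makes the constant a power of two in the first place.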

Many times, you will have to process the incoming data and translate it to some outgoing signal. As an example, I often have to translate a 12-bit 4-20mA input signal into a 10-bit 4-20mA output (with its own calibration constants as well). I find the easiest way is to scale the input to a percentage of full scale and then work in terms of full scale all the way through to the output. Since a percentage is not normally accurate enough, I work in parts per thousand -- a "permillage", as it were. If you are thoughtful enough, you may be able to scale this to 1,024 steps for faster processing.
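
As an illustration only -- the 1,024-step fraction and every name here are my own choices -- the input-to-output translation could be sketched like this:

```c
#include <stdint.h>

/* Illustrative sketch: scale a calibrated 12-bit input to a 0-1,024
 * fraction of full scale, then apply that fraction to the 10-bit
 * output's own calibration span.  The *1,024 and /1,024 become shifts. */
uint16_t translate_input_to_output(uint16_t in_reading,
                                   uint16_t in_n_min,  uint16_t in_n_span,
                                   uint16_t out_n_min, uint16_t out_n_span)
{
    /* Fraction of the input span: 0 = 4mA, 1,024 = 20mA. */
    uint32_t fraction = ((uint32_t)(in_reading - in_n_min) << 10) / in_n_span;

    /* Apply the same fraction to the output span; /1,024 becomes >> 10. */
    return (uint16_t)(out_n_min + ((fraction * out_n_span) >> 10));
}
```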

The bottom line is that careful thought and planning will enable you to write compact programs that execute quickly and provide accurate results without the need for floating-point math. This comes, however, at the expense of generality of approach, but "You gotta do what you gotta do!" Are there any tricks and tips that you use and would care to share? If so, please post them below as comments.
