# Integer Arithmetic Rules!

Careful thought and planning will enable you to write compact programs that execute speedily and provide accurate results without the need for floating-point math.

Instead of using 5V, why don't we work in mV and use 5,000mV in place of 5V? We will always know where the decimal point goes, so we can post-process the result to display the correct answer. The only implication is that the interim variable must be able to hold a large enough number. The initial product would be 6,170,000 (at 23 bits, this would be a *long* data type in your typical C compiler), but the result would be 1,506mV (and, with some additional software, we could round this correctly).
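As a minimal sketch of that mV scaling, assuming a 12-bit ADC with a 5V reference (the reading of 1234 is the hypothetical count that produces the 6,170,000 interim product):

```c
/* Illustrative constants: a 12-bit ADC referenced to 5V (5,000mV). */
#define VREF_MV  5000L   /* reference voltage expressed in millivolts */
#define ADC_FULL 4096L   /* full scale of a 12-bit converter          */

/* Convert a raw ADC count to millivolts, rounding to the nearest mV.
   The interim product (up to roughly 20.5 million) needs a long. */
long adc_to_millivolts(long counts)
{
    long product = counts * VREF_MV;             /* e.g. 1234 * 5000 = 6,170,000 */
    return (product + ADC_FULL / 2) / ADC_FULL;  /* integer divide with rounding */
}
```

Here `adc_to_millivolts(1234)` yields 1506, matching the worked figures above; adding half the divisor before dividing is the integer-math equivalent of rounding to nearest.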

All of this requires only a little preparation. Remember that the number of bits needed to represent the result of the division is reduced; also that you may be able to preserve memory space by truncating the result to a suitable data type.

If you are still with me, then the next bit (pun unintended) is nothing more than an extension. You must know that, in the real world, the ranges of operation never match the full range of the ADC as a result of offsets and/or design. In the industrial world, a common signal is the 4-20mA loop -- that is, the minimum signal is 4mA and the maximum is 20mA (a range of 16mA). This is frequently read across a 249Ω resistor, so the ADC sees a number corresponding to 0.996V for the minimum and 4.980V for the maximum. In these circumstances you normally calibrate your inputs (see my blog on calibration) using a calibrator to generate a known 4mA (I_{min}) and 20mA (I_{max}). You take the reading at 4mA and save it (preferably in EEPROM) as N_{min}, while the reading at 20mA gives N_{max}.

By proportion on a straight-line graph, you can calculate the current using the following:

I = I_{min} + (N - N_{min}) × (I_{max} - I_{min}) / (N_{max} - N_{min})

where N is the ADC reading you want to convert.

You can work this out each time you need to know the current, but there are some calculations that are constant for a given set of calibration constants, and you can save these in non-volatile storage at the same time as you calibrate. For example, (I_{max} - I_{min}) will always be 16mA and can be considered to be a constant, as can I_{min}. Similarly, (N_{max} - N_{min}) can be calculated and saved at calibration. I would recommend that you scale the milliamps to microamps, and no floating-point data or math operations are required to find the result.
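Under those assumptions, the whole interpolation can run in integer microamps. In this sketch the readings N_MIN and N_MAX are made-up example values; only the 4mA offset and 16mA span come from the text:

```c
/* Calibration constants, normally saved in EEPROM at calibration time.
   N_MIN and N_MAX are hypothetical 12-bit readings for illustration. */
#define I_MIN_UA   4000L   /* I_min scaled from milliamps to microamps */
#define I_SPAN_UA 16000L   /* (I_max - I_min) in microamps             */
#define N_MIN       816L   /* example reading saved at 4mA             */
#define N_MAX      4079L   /* example reading saved at 20mA            */

/* Straight-line interpolation, all in integer microamps, with rounding. */
long reading_to_microamps(long n)
{
    long n_span = N_MAX - N_MIN;  /* can be precomputed and stored at calibration */
    return I_MIN_UA + ((n - N_MIN) * I_SPAN_UA + n_span / 2) / n_span;
}
```

A reading equal to N_MIN comes back as 4000µA and one equal to N_MAX as 20000µA, with every intermediate value interpolated on the straight line — no floating point anywhere.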

Sometimes you can be even smarter. Remember that a binary division by 2 is simply a shift to the right by one bit, while a multiplication by 2 is a 1-bit shift to the left. Larger powers of two simply require more shift operations (or a single multi-bit shift). Since we control the current calibration by using our own calibrator, what if we were to select I_{max} as 12.192mA and I_{min} as 4.000mA? In this case, the difference would be 8192µA, so dividing by the span is a simple 13-bit shift to the right.
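A sketch of the trick, assuming the 4.000mA / 12.192mA calibration just described (the 0-to-1024 fraction scale is my own choice for illustration, not from the article):

```c
/* With I_min = 4.000mA and I_max = 12.192mA the span is 8192uA, a power
   of two, so dividing by the span becomes a 13-bit right shift. */
enum { SPAN_SHIFT = 13 };   /* 8192 = 1 << 13 */

/* Express a current (in microamps) as a fraction of the span, scaled
   to 0..1024: effectively (x * 1024) / 8192 with no divide at all. */
long current_to_frac1024(long i_ua)
{
    long above_min = i_ua - 4000;            /* microamps above I_min   */
    return (above_min << 10) >> SPAN_SHIFT;  /* shift up 10, down 13    */
}
```

So 4.000mA maps to 0, 12.192mA maps to 1024, and the midpoint 8.096mA maps to 512, using nothing but shifts and a subtraction.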

Many times, you will have to process the incoming data and translate it to some outgoing signal. As an example, I often have to process a 12-bit 4-20mA input signal into a 10-bit 4-20mA result on the output (with its own calibration constants as well). I find the easiest way is to scale the input to a percentage of full scale and then work with that fraction all the way through to the output. Since a percentage is not normally accurate enough, I work with a "permillage" of 1,000, as it were. If you are thoughtful enough, you may be able to scale this to 1,024 for faster processing.
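A sketch of that pipeline, with all four calibration codes invented for illustration: scale the 12-bit input to a 0..1024 fraction, then scale that fraction onto the 10-bit output's calibrated range.

```c
/* Hypothetical calibration constants for the input ADC and output DAC. */
#define IN_N_MIN    816L   /* 12-bit reading at 4mA    */
#define IN_N_MAX   4079L   /* 12-bit reading at 20mA   */
#define OUT_N_MIN   204L   /* 10-bit DAC code for 4mA  */
#define OUT_N_MAX  1019L   /* 10-bit DAC code for 20mA */

/* Input reading -> 0..1024 fraction of full scale -> output DAC code.
   Using 1024 instead of 1000 means the final divide is just a shift. */
long input_to_output(long n_in)
{
    long in_span  = IN_N_MAX - IN_N_MIN;
    long out_span = OUT_N_MAX - OUT_N_MIN;
    long frac1024 = ((n_in - IN_N_MIN) * 1024L + in_span / 2) / in_span;
    return OUT_N_MIN + (frac1024 * out_span + 512) / 1024;
}
```

With these constants the input's 4mA reading maps exactly onto the output's 4mA code, and likewise at 20mA; everything in between rides on the 1,024-step fraction.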

The bottom line is that careful thought and planning will enable you to write compact programs that execute speedily and provide accurate results without the need for floating-point math. This is, however, at the expense of generality of approach, but "You gotta do what you gotta do!" Are there any tricks and tips that you use and would care to share? If so, please post them below as comments.

U. Dreher 3/20/2014 1:41:02 PM

Until it is sent to an output the "physical value" will only consume memory and CPU power (to calculate) but the result does not hold more information than the mere 1234d value from the ADC.

Nowadays the freshly minted engineers from university are calling for floating-point units - seemingly not aware that integer arithmetic exists at all. About 15 years ago I was tasked with improvements on an automotive ECU - based on an i80196. 90% of the software was in assembler (primarily due to memory restrictions) and there was absolutely no FP arithmetic.

I was fiddling with data resolutions of 1/4 rpm and - IIRC - 1/16 degree of camshaft angle -- and that was just fine. The only instance where the "computing numbers" were converted to "real world values" was in the visualizing tools (PC based). And it was THEIR damned job to calculate these values :)

Nowadays there are still some fields of application where you should resort to integer arithmetic, e.g. EC control units. OK - CPUs with FPUs are readily available. But when it comes to total cost, power consumption, etc., integer math is superior. And - BTW - easily more precise than 32-bit floating point.

One more: ever seen fixed-point arithmetic?

HankWalker 3/20/2014 11:50:30 AM

prabhakar_deosthali 3/20/2014 1:30:27 AM

It is so fast compared to floating-point or integer arithmetic that I used it in one of my motion control algorithms. I was working on a DEC LSI-11 processor-based motion controller, and the robotic 3-axis motion control required the interpolation of the robot arm's path while it was negotiating a complex chassis assembly.

The base-2 arithmetic came in very handy - time- and memory-efficient and without much of the rounding errors.

Max The Magnificent 3/19/2014 2:27:43 PM

Max The Magnificent 3/19/2014 2:25:31 PM

The 8048 was 1Kx8. The 8049 was 2Kx8... Did you ever read the biographies of Bill Gates and Paul Allen -- the original Microsoft BASIC fit in 4K, and that was with a floating-point library. I hadn't realized quite what an achievement it was until I read the biographies -- now I have a much better appreciation for the magnitude of the task.

antedeluvian 3/19/2014 2:23:56 PM

I have not seen "Math Toolkit for Real-Time Programming," but the reviews on Amazon make it sound like it would be well worth reading. Jack Crenshaw, author of the above book, is a long-time contributor to the magazine Embedded Systems Programming in his column "Programmer's Toolbox". It is a pleasure to read, although he often goes way over my head.

antedeluvian 3/19/2014 2:17:51 PM

"Back in the day, when the 2 kbyte 8048 microcontroller was the latest and greatest thing on the market..." Sorry, my mistake - the 8048 was 1Kx8. The 8049 was 2Kx8. National brought out a 4K version, which they called the 8050. The EPROM version had a 28-pin piggy-back socket on the 40-pin DIP, and you could install a regular EPROM in it, up to a 2732 (4Kx8).

Max The Magnificent 3/19/2014 2:11:50 PM