I just received a question from a reader; this query seems so simple on the surface, but it actually leads to a quagmire of confusion.
When most folks use a pocket calculator to perform some mathematical task like division, they rarely pause to ponder what's actually going on "under-the-hood"; instead, they simply accept the result and meander on their merry way. Occasionally, however, I run into someone who is a little more curious. For example, I just received the following email from a reader:
I'm a freelance programmer with a lot of years in high-tech development, but no college education. I decided to go back to school. In an assembly language class one day, I asked how a calculator does division. It seems obvious on the surface that it's a function of repetitive subtraction, but as viewed deeper, that wouldn't work as the numbers got bigger.
The professor didn't have the answer so I went off on the quest to find an answer. I called TI and they couldn't even understand the question, hahahhaha. They kept wanting to know what was wrong with my calculator. So ultimately the only answer we could surmise was it may be a hardware function.
Just a few minutes ago, as I pen these words, I answered him as follows; then I thought this might be of interest to others, so I decided to make it the topic of today's blog:
Hi Xxxxxx (names have been changed to protect the innocent), with regard to your question "how does a calculator perform division"...
As usual, there's more to this than meets the eye, because it depends what sort of numbers we're dealing with – or, to put this another way, the way in which the numbers are being represented.
Let's start with decimal numbers, and let's assume that we're working with positive and negative integers (... –3, –2, –1, 0, 1, 2, 3, ...). In this case there are two primary techniques. The first – and simplest – would be repeated subtraction, but this becomes time-consuming as the numbers get large. The second would be to perform long division just as we were taught at school.
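To make the repeated-subtraction idea concrete, here's a minimal sketch in Python (the function name is mine, purely for illustration):

```python
def divide_by_subtraction(dividend, divisor):
    """Integer division of non-negative values by repeated subtraction.

    Correct, but the loop runs `quotient` times, which is why this
    becomes painfully slow as the dividend grows.
    """
    quotient = 0
    remainder = dividend
    while remainder >= divisor:   # keep subtracting until we can't
        remainder -= divisor
        quotient += 1
    return quotient, remainder

print(divide_by_subtraction(14, 4))   # -> (3, 2)
```

Dividing 14 by 4 takes only 3 trips around the loop, but dividing 1,000,000 by 3 would take over 300,000 – which is exactly the scaling problem our reader spotted.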
The same techniques apply to integers represented in binary: repeated subtraction or classical long division. It's the latter that's typically used, but ... there are cunning tricks we can play with binary numbers (as described in my book How Computers Do Math – grin).
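In binary, classical long division boils down to a shift-and-subtract loop: bring down one bit of the dividend at a time, and subtract the divisor whenever it fits. Here's a sketch of the idea (not any particular calculator's implementation):

```python
def binary_long_division(dividend, divisor):
    """Shift-and-subtract long division for positive integers.

    Works one bit at a time, from the dividend's most-significant bit
    down to its least-significant bit -- long division in base 2.
    """
    quotient = 0
    remainder = 0
    for i in range(dividend.bit_length() - 1, -1, -1):
        # "Bring down" the next bit of the dividend.
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        if remainder >= divisor:      # does the divisor "go into" it?
            remainder -= divisor
            quotient |= 1 << i        # yes: this quotient bit is a 1
    return quotient, remainder

print(binary_long_division(14, 4))   # -> (3, 2)
```

Note that the loop runs once per bit of the dividend, not once per unit of the quotient – so a 32-bit division takes at most 32 steps no matter how big the numbers are.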
Of particular interest is what we do in the case of rounding. Even if we're dealing with integers, if we, say, divide 14 by 4, we get a quotient (result) of 3 with a remainder of 2. If we were dealing with real numbers this would equate to 3.5, but we're dealing with integers, so we have to do something with the remainder ... either perform some sort of rounding algorithm or simply discard it (where this latter approach may be considered to be a rounding algorithm in its own right).
In school we were taught that rounding a value of, say, 2.3 to the nearest whole number would result in 2; rounding a value of, say, 2.8 would result in 3; and rounding a value of 2.5 would also result in 3. This is known as "round-half-up". A slightly more sophisticated algorithm is called "round-half-even", in which x.5 values are rounded to the nearest EVEN number; for example, 3.5 would be rounded UP to 4, while 2.5 would be rounded DOWN to 2. This is also known as "Banker's Rounding" because it avoids the systematic upward bias of round-half-up when summing large sequences of rounded values.
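Here's a quick sketch of the two rules for non-negative values. (Python's built-in round() already implements round-half-even, so we only spell these out to see the difference.)

```python
import math

def round_half_up(x):
    """Round to nearest integer; exact halves go up (2.5 -> 3)."""
    return math.floor(x + 0.5)

def round_half_even(x):
    """Round to nearest integer; exact halves go to the even neighbour."""
    floor_x = math.floor(x)
    if x - floor_x == 0.5:   # exactly halfway: pick the even neighbour
        return floor_x if floor_x % 2 == 0 else floor_x + 1
    return round_half_up(x)

for v in (2.3, 2.5, 2.8, 3.5):
    print(v, round_half_up(v), round_half_even(v))
```

(One caveat: testing for "exactly 0.5" with binary floats is fragile for arbitrary decimal inputs; real decimal code would use something like Python's Decimal type.)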
Another question is how we deal with negative numbers. For example, if we divide +14 by +4 we have a result of +3 with a remainder of +2. But what happens if we divide +14 by –3? Should the quotient be –4 (truncating toward zero, which leaves a remainder of +2) or –5 (rounding toward negative infinity, which leaves a remainder of –1)? Similarly, if we divide –14 by +3, should the result be a quotient of –4 with a remainder of –2, or a quotient of –5 with a remainder of +1? Note that the quotient and remainder have to be chosen together, so that dividend = quotient × divisor + remainder always holds.
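Programming languages actually disagree on this. C-style "truncated" division rounds the quotient toward zero, so the remainder takes the sign of the dividend; Python's own // and % "floor" the quotient toward negative infinity, so the remainder takes the sign of the divisor. A sketch of the contrast:

```python
def truncated_divmod(a, b):
    """C-style division: quotient truncated toward zero.

    The remainder then takes the sign of the dividend. (Using int()
    on the float quotient is fine for modest values like these.)
    """
    q = int(a / b)        # int() truncates toward zero
    return q, a - q * b   # remainder chosen so that a == q*b + r

print(truncated_divmod(-14, 3))   # -> (-4, -2)
print(divmod(-14, 3))             # -> (-5, 1)  Python floors instead
```

Both answers satisfy dividend = quotient × divisor + remainder; they just make different choices about which side of the true quotient to land on.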
And then there's a whole can of worms when it comes to rounding negative x.5 values. Assuming a simple round-half-up algorithm, we know that rounding +2.5 to the nearest integer will result in +3. But what do we mean by "up" in the case of negative numbers? If we take "up" to mean "away from zero", then rounding –2.5 to the nearest integer will result in –3. On the other hand, if we understand "up" to mean "towards positive infinity", then rounding –2.5 will result in –2. (I explore this a lot more in How Computers Do Math and also in an ever-evolving paper on our website at www.DIYCalculator.com.)
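The two interpretations of "up" can be sketched as follows (note that floor(x + 0.5) rounds halves toward positive infinity even for negative values):

```python
import math

def round_half_away_from_zero(x):
    """Halves move away from zero: +2.5 -> +3, -2.5 -> -3."""
    if x >= 0:
        return math.floor(x + 0.5)
    return math.ceil(x - 0.5)

def round_half_toward_positive_infinity(x):
    """Halves always move toward +infinity: +2.5 -> +3, -2.5 -> -2."""
    return math.floor(x + 0.5)

print(round_half_away_from_zero(-2.5))            # -> -3
print(round_half_toward_positive_infinity(-2.5))  # -> -2
```

For positive values the two functions agree; only the negative halves expose the difference.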
But we digress ... let's return to decimal numbers, and now assume that we're dealing with real numbers. For example, 3.2 / 1.6 = 2.0. We could perform this operation using long division just as for the integers. Alternatively, we could take the reciprocal of the divisor and then multiply this by the dividend (that is, a / b = a * (1/b)). In this case, for example, we have 3.2 / 1.6 = 3.2 * (1 / 1.6) = 3.2 * 0.625 = 2.0.
Once again, we can do the same thing in binary. The reason we might consider going this roundabout route is that – depending on the application – it can be less computationally expensive to generate the reciprocal and perform a multiplication than to perform a division.
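As a sketch of the reciprocal route: one common way to get 1/b without ever dividing is Newton–Raphson iteration, x_next = x * (2 – b*x), which uses only multiplications and subtractions. Hardware implementations typically pull the starting guess from a small lookup table; the seed value below is just an assumption for illustration.

```python
def reciprocal(b, x0, steps=4):
    """Approximate 1/b by Newton-Raphson, using only * and -.

    x0 is a rough initial guess (in hardware, typically from a small
    lookup table). Each step roughly doubles the number of correct
    digits, so a handful of steps suffices.
    """
    x = x0
    for _ in range(steps):
        x = x * (2.0 - b * x)
    return x

inv = reciprocal(1.6, x0=0.5)   # converges to 0.625
print(3.2 * inv)                # ~= 2.0, i.e. 3.2 / 1.6
```

The pay-off is that the (expensive) reciprocal is computed once, after which every division by that same value costs only one multiplication.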
Of course things become more interesting when we start to consider floating-point representations (of which "scientific" representations may be considered to be a subset); for example, 6.23 / (2.9 × 10^–2).
And, just to throw one more spanner in the works .... your original question was "how does a CALCULATOR perform division" (as opposed to a COMPUTER). Of course a calculator is a special-purpose form of computer, so you may think I'm splitting hairs here, but I'm really not. The point is that when scientists and engineers use computers to perform calculations, their applications typically employ binary floating-point representations, and they are used to dealing with any anomalies that may result from this.
The problem is that it isn't possible to represent many decimal values exactly in binary floating-point; for example, 0.1 (one tenth in decimal) will end up as an infinitely repeating sequence of 1s and 0s, which will have to be rounded. Now, most users will be upset if they perform a calculation like 0.9 + 0.1 on their pocket calculator and they receive a result like 0.999999999; in this case they absolutely expect to see a result of 1.0.
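You can see the same effect in any language that uses standard IEEE-754 binary floating-point; Python makes it easy to inspect (0.1 + 0.2 is the classic demonstration of the same underlying problem):

```python
from decimal import Decimal

# Decimal(0.1) reveals the binary double actually stored for "0.1":
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The tiny representation errors surface in simple arithmetic:
print(0.1 + 0.2)         # -> 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # -> False
```

A scientist using a computer shrugs at this and reaches for an epsilon comparison; the person balancing a checkbook on a calculator would not be so forgiving.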
This means that virtually every pocket calculator on the planet stores and manipulates numbers represented in a form known as Binary Coded Decimal (BCD). Now, most folks are familiar with using BCD to represent integers, but in fact you can also implement floating-point BCD representations, and this is where things really start to get interesting...
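In packed BCD, each decimal digit occupies its own 4-bit nibble, so base-10 values round-trip exactly with no binary rounding error. A minimal integer-only sketch (the function names are mine):

```python
def to_bcd(n):
    """Pack a non-negative integer into BCD: one decimal digit per nibble."""
    bcd = 0
    for shift, digit in enumerate(reversed(str(n))):
        bcd |= int(digit) << (4 * shift)
    return bcd

def from_bcd(bcd):
    """Unpack a BCD value back into an ordinary integer."""
    n = 0
    multiplier = 1
    while bcd:
        n += (bcd & 0xF) * multiplier   # low nibble = next decimal digit
        bcd >>= 4
        multiplier *= 10
    return n

print(hex(to_bcd(1234)))   # -> 0x1234: each hex nibble is one decimal digit
print(from_bcd(0x1234))    # -> 1234
```

Notice that the hex form of the BCD value reads exactly like the decimal number – that's the whole trick: arithmetic performed digit-by-digit on these nibbles stays in base 10, so 0.1 really is 0.1.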
... but that's a story for another time ...
Questions? Comments? Feel free to email me – Clive "Max" Maxfield – at firstname.lastname@example.org). And, of course, if you haven't already done so, don't forget to Sign Up for our weekly Programmable Logic DesignLine Newsletter.