# Doing Math in FPGAs, Part 5 (Binary Division)

I'm afraid to say that I lied to you in my previous column.

I said that column would be the last of these rantings about binary math. But then I realized that I had no good references on performing binary division (especially in fixed-point), so I decided to dedicate this installment to binary division. I apologize in advance for the long-windedness of my pontifications. Fasten your seat belts!

Let's first refresh ourselves on the "long division" that we learned in grade school. Let's say we want to find how many threes are in 136 (136 ÷ 3) -- I don't know why we'd ever want to do this, but I'm sure there's a good application for it somewhere. First, we set up our long division table as illustrated below:

Starting from the left (the higher-order digits), and working towards the right, we test how many times we can fit our divisor into our dividend. So, left-aligning the two numbers, we see that 3 will go into 1 zero times (or, more correctly, 300 will go into 100 zero times):

Next, we multiply the zero and our divisor, and put the properly-aligned result into the table beneath the dividend:

Now we subtract the result of the multiplication and place the result of the subtraction in the table, then bring down the next digit in significance:

Using the next-lower-ordered digit, we see that 3 will go into 13 four times (or, more correctly, 30 will go into 130 four times):

We repeat this subtraction process until we've progressed through all of the digits:

Eventually, we discover that our result is 45 with a remainder of 1 (or 45 and 1/3). Of course, we can keep the division going into those decimal places and generate a result of 45.33333333r (where the "r" indicates that the preceding "3" will keep on recurring).
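
The grade-school procedure above can be sketched in a few lines of Python (purely as a software illustration; the `long_division` function name is my own). It walks the dividend digit by digit, just as we did by hand:

```python
def long_division(dividend, divisor):
    """Base-10 long division, digit by digit, as done by hand."""
    quotient_digits = []
    partial = 0
    for digit in str(dividend):
        # Bring down the next digit in significance.
        partial = partial * 10 + int(digit)
        # How many times does the divisor fit? (0 through 9)
        count = partial // divisor
        quotient_digits.append(str(count))
        # Subtract and carry the leftover forward.
        partial -= count * divisor
    return int("".join(quotient_digits)), partial

print(long_division(136, 3))  # (45, 1) -- quotient 45, remainder 1
```

Note that the per-digit trial ("how many times does it fit?") is the step that makes decimal long division awkward for hardware; as we'll see, binary sidesteps it.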

Of course, this is pretty easy for those of us who have been doing it for a while (my kids still struggle with it some), but how is this process different in binary? Well, it's not, really, but there are a lot of "gotchas" when it comes to performing division using our computing machinery. Let's quickly use the same numbers to perform a binary division as illustrated below:

Simple enough, if you give it some thought. But, there are a couple of "interesting" things here. The first thing, if you notice, is that we are shifting the divisor to the right (÷2) each step. Since this is binary, this makes perfect sense. Next, since we can only multiply by 1, all we ever need to do is set the appropriate quotient bit at the appropriate time. Finally, as in the original base-10 case, if the dividend is greater than or equal to the divisor, then we subtract the divisor from the dividend. So, we have the basic steps for performing binary division:

1. Bit-align the left-most "1" of the divisor with the left-most "1" of the dividend
2. Set all quotient bits to "0"
3. If the dividend ≥ divisor:
   - Set the quotient bit that corresponds to the position of the divisor to "1"
   - Subtract the divisor from the dividend
4. Shift the divisor 1 bit to the right (÷2)
5. If the leading bit of the divisor is at a position ≥ 0, return to step 3; otherwise we're done
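
To make the steps concrete, here's a minimal Python sketch of the same loop (the `binary_divide` name and the use of Python's arbitrary-precision integers are my own conveniences, not hardware):

```python
def binary_divide(dividend, divisor):
    """Unsigned binary division following the numbered steps above.

    Returns (quotient, remainder).
    """
    assert divisor > 0
    if dividend == 0:
        return 0, 0
    # Step 1: align the left-most 1s; 'shift' tracks the divisor's position.
    shift = dividend.bit_length() - divisor.bit_length()
    if shift < 0:
        return 0, dividend  # divisor is already bigger than the dividend
    divisor <<= shift
    quotient = 0                 # Step 2
    while shift >= 0:            # Step 5: stop once the divisor slides past bit 0
        if dividend >= divisor:  # Step 3: does it fit?
            quotient |= 1 << shift
            dividend -= divisor
        divisor >>= 1            # Step 4: shift right (divide by 2)
        shift -= 1
    return quotient, dividend    # what's left over is the remainder

print(binary_divide(136, 3))  # (45, 1), matching the worked example
```

Because the quotient digit at each position can only be 0 or 1, the "how many times does it fit?" question collapses into a single compare-and-subtract per bit, which is exactly what makes this loop hardware-friendly.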

But how do we implement this in our hardware? Right off the bat, we can see some problems. Firstly (and what I see as the "hardest") is this -- how do we align the left-most bits of the two numbers? It seems easy enough to say it, but do we use a left-shift and compare? Do we use a giant multiplexer? The first method takes clock ticks, while the second takes up lots of hardware.
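
The two alignment options can be contrasted in software, too. Here are hypothetical sketches of each (the function names are mine): a serial scan for the leading "1", which would cost one clock tick per bit in hardware, and a priority-encoder-style answer, where Python's `bit_length()` stands in for a wide combinational block:

```python
def leading_one_serial(x):
    """Find the position of the left-most 1 by shifting until the word is
    empty -- the shift-and-compare approach, roughly one clock per bit."""
    pos = -1
    while x:
        x >>= 1
        pos += 1
    return pos

def leading_one_encoder(x):
    """The priority-encoder (giant multiplexer) approach: the whole word
    is examined at once. bit_length() stands in for that logic here."""
    return x.bit_length() - 1

# Alignment shift for 136 / 3: position 7 minus position 1 gives 6.
shift = leading_one_serial(136) - leading_one_serial(3)
print(shift)  # 6
assert shift == leading_one_encoder(136) - leading_one_encoder(3)
```

The trade-off is the classic time-versus-area one: the serial scan is tiny but slow, while the encoder answers in one go at the cost of a wide slab of logic.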

Next, how do we keep track of the right number of zeroes at the front -- that is, how do we keep the bit position of the quotient correct, and properly aligned? It's not impossible, of course, but it's something to think about. Lastly, what do we do with a remainder? Do we just throw it away, or do we round?
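
One common answer to the remainder question, sketched here in Python (the `divide_rounded` name is mine, and the tie-breaking choice is an assumption): round to nearest by bumping the quotient whenever the remainder is at least half the divisor. Comparing twice the remainder against the divisor avoids any fractional arithmetic, which suits hardware nicely.

```python
def divide_rounded(dividend, divisor):
    """Round-to-nearest instead of truncating: bump the quotient when the
    remainder is at least half the divisor (ties round up here)."""
    quotient, remainder = divmod(dividend, divisor)
    if 2 * remainder >= divisor:  # remainder >= divisor/2, kept in integers
        quotient += 1
    return quotient

print(divide_rounded(136, 3))  # 45 (remainder 1 is under half of 3)
print(divide_rounded(137, 3))  # 46 (remainder 2 is over half of 3)
```

In hardware, the `2 * remainder` is just the remainder shifted left by one bit, so the whole rounding decision costs one extra compare and a conditional increment.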