We can perform subtraction in a couple of ways. The first (and maybe most obvious) is to take the two's complement of the number to be subtracted and then perform an add. This is similar to my BCD examples in a previous post. Of course, you do need to detect the sign of the answer, or know it a priori, so that you can convert it back to sign-magnitude notation, which is really what my version of fixed-point is. In my case, I'm letting the synthesizer handle all that. In my add function, I detect the sign and magnitude of each addend beforehand, and I perform ordered manipulations to ensure that no two's complement results occur.
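To make the idea concrete, here's a minimal Python sketch of subtracting by adding the two's complement and then converting back to sign-magnitude. The register width and function names are my own for illustration, not taken from the library:

```python
WIDTH = 8                     # assumed register width for the sketch
MASK = (1 << WIDTH) - 1

def twos_complement(x):
    """Two's complement of x within a WIDTH-bit register."""
    return (~x + 1) & MASK

def subtract(a, b):
    """Return (sign, magnitude) of a - b via add-of-complement."""
    raw = (a + twos_complement(b)) & MASK
    if raw & (1 << (WIDTH - 1)):          # sign bit set: negative result
        return 1, twos_complement(raw)    # convert back to magnitude
    return 0, raw

print(subtract(5, 9))   # → (1, 4), i.e. -4
print(subtract(9, 5))   # → (0, 4), i.e. +4
```

Note the final conversion step: without it, a negative answer stays in two's complement form rather than sign-magnitude.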
Now, let's talk multiplication for a bit (no pun intended). Let's multiply two numbers: X and Y. If X is N bits in size, and Y is P bits in size, then we'll need a field N+P bits wide to store the result. I'm sure there's a formal proof of this somewhere, but a quick sanity check suffices: the largest possible product is (2^N - 1) × (2^P - 1), which is always less than 2^(N+P), so N+P bits are enough.
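If you'd rather not take the width claim purely on faith, a throwaway numeric check (nothing from the library, just a sketch) shows the worst-case product always lands exactly on N+P bits:

```python
def product_bits(n, p):
    """Bits needed for the worst-case unsigned n-bit by p-bit product."""
    worst = (2**n - 1) * (2**p - 1)
    return worst.bit_length()

for n, p in [(4, 4), (8, 8), (8, 16)]:
    print(n, p, product_bits(n, p))   # always prints n, p, n + p
```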
This could be problematic. In order to do this correctly, we would need to ensure that we properly align our binimal points and then figure out the new binimal point location in the resulting register. Our solution simplifies this greatly by assuming that all inputs to a given operation are the same length and have the same number of fractional bits. If we can ensure that each input has N bits, then the output will have 2N bits. If each input has Q fractional bits, then the output will have 2Q fractional bits. This is exciting stuff, because it makes life easy for us.
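Here's a small Python sketch of that simplification, with N, Q, and the helper names being my own assumptions for illustration. Because both inputs share the same format, the raw integer product is already the right fixed-point answer, just with 2N bits and 2Q fractional bits:

```python
N, Q = 8, 4   # assumed: 8-bit inputs with 4 fractional bits each

def to_fixed(x):
    """Encode a real value as an N-bit magnitude with Q fractional bits."""
    return round(x * (1 << Q))

def fixed_mul(a, b):
    """Multiply two same-format magnitudes; the result has 2N bits
    and 2Q fractional bits -- no binimal-point alignment needed."""
    return a * b

a, b = to_fixed(2.5), to_fixed(1.25)       # 40 and 20
prod = fixed_mul(a, b)                     # 800
print(prod / (1 << (2 * Q)))               # → 3.125
```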
This is especially handy because we generally need to hand back a result in the same format as the input. When designing our system, we need to understand the largest value we'll ever expect to see; that defines how many bits we need in the N field/register. Knowing this, we can easily work out how to scale the double-width result back into the input format.
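The rescaling itself amounts to a shift, assuming the magnitude fits: dropping Q of the product's 2Q fractional bits recovers the N-bit, Q-fractional input format. A hedged sketch (my own names, simple truncation rather than rounding):

```python
N, Q = 8, 4   # assumed input format: 8 bits, 4 of them fractional

def rescale(prod):
    """Drop Q fractional bits from the 2N-bit product to recover the
    N-bit, Q-fractional input format (truncates; value must fit in N bits)."""
    scaled = prod >> Q
    assert scaled < (1 << N), "overflow: result does not fit in N bits"
    return scaled

# 2.5 * 1.25 in Q=4 format: 40 * 20 = 800, and 800 >> 4 = 50
print(rescale(800) / (1 << Q))   # → 3.125
```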
Neat, huh? Now, let's talk about sign bits. The magnitude of the result is independent of the signs of the numbers being multiplied: |5 × 5| = |−5 × 5| = |5 × −5| = |−5 × −5| = 25, and the same goes for division. Thus, the only thing we need to check is whether we multiplied (or divided) a negative by a positive, as that is the only case that gives a negative result. This can be determined easily by performing an XOR of the sign bits.
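The XOR rule is a one-liner; this trivial sketch (names my own) just makes the truth table explicit:

```python
def result_sign(sign_a, sign_b):
    """Sign bit of a product or quotient from the operand sign bits:
    1 (negative) only when exactly one operand is negative."""
    return sign_a ^ sign_b

# all four sign combinations: + +, - +, + -, - -
print(result_sign(0, 0), result_sign(1, 0),
      result_sign(0, 1), result_sign(1, 1))   # → 0 1 1 0
```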
This is a good time to talk about binary multiplication and division. To date, I've kind of glossed over this function by waving my hands and saying, "It's just repeated addition/subtraction." Well, that's true enough, but imagine if we had 1 million divided by 1. We're well versed in our arithmetic principles, so we know right off the bat that the answer is 1 million. However, our computing machinery only knows simple logic, so it will merrily perform a million subtractions to find the answer. That will take a lot of time.
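You can watch that cost pile up with a naive sketch (my own, purely to count the work; nobody should ship this):

```python
def naive_divide(dividend, divisor):
    """Division by repeated subtraction -- correct, but the step count
    equals the quotient, which is painfully slow for large answers."""
    quotient, steps = 0, 0
    while dividend >= divisor:
        dividend -= divisor
        quotient += 1
        steps += 1
    return quotient, dividend, steps   # quotient, remainder, subtractions

q, r, steps = naive_divide(1_000_000, 1)
print(q, steps)   # → 1000000 1000000, i.e. a million subtractions
```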
Fortunately, there are implementable solutions in our logic that perform the multiplication in a single clock cycle. That makes life easy for multiplication (at least, with modern hardware and synthesizers). This being the case, I'll wave my hands and gloss over it, but you can search the internet for "multiplication by shift and add" if you wish to learn more.
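For the curious, here's the shift-and-add idea in miniature (a software sketch of the technique, not the hardware implementation): instead of y repeated additions, we do at most one add per bit of the multiplier.

```python
def shift_add_mul(x, y):
    """Shift-and-add multiplication of non-negative integers:
    one conditional add per bit of y, rather than y additions."""
    result = 0
    while y:
        if y & 1:           # low multiplier bit set: add shifted multiplicand
            result += x
        x <<= 1             # shift multiplicand left one place
        y >>= 1             # move to the next multiplier bit
    return result

print(shift_add_mul(13, 11))   # → 143
```

In hardware, all those conditional adds can be laid out in parallel, which is how the single-cycle multipliers mentioned above come about.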
I will assert that I have seen no good descriptions of fixed-point division (or, really, even binary division), so my next column will be devoted purely to division. I only hope I can convey the ideas clearly. In the meantime, if you would like to take a look at my code and use it for your own purposes, I have made it available here.
Actually, this reminds me of a conversation I had a long time ago when I was working for a defense contractor. I said something to the effect of "If world peace ever broke out, we'd be out of a job." My older (and wiser) coworker responded, "Naaah, there's a lot of money to be made in verification." The moral of this story: Trust but verify. I built my library on someone else's library. If you use the functions in my library, be sure to test them out thoroughly to make sure they do what you intend.