# Doing Math in FPGAs, Part 3 (Floating-Point)

Floating-point numbers are similar to the "scientific notation" we learned in high school, but they are stored and manipulated using binary representations.

It seems I have something of a mini-series of blogs going on here.

First, I muttered some inanities about multiplication and division by 10 (see Doing Math in FPGAs, Part 1). Next, I rambled on about doing math in BCD (see Doing Math in FPGAs, Part 2 (BCD)). Now, it seems it's time to mutter something about floating-point representations of numbers and how to do some math with them. I considered using floating-point representations for this mysterious project that I've been alluding to (I'll get to that, one of these days, maybe), so I took a quick look at how to implement them.

Now, of course, there are plenty of ways one could represent a floating-point number. You can do it your way, I can do it my way, or we can all agree to follow a standard such as the IEEE 754-2008 standard, for example. Of course, I'm not the first person here on EE Times to cover the topic of floating-point representations; in fact, Mr. Kjodavix described this way back in 2006 (see Tutorial: Floating-point arithmetic on FPGAs). Because of Mr. Kjodavix's article, I wondered whether I should even bother expounding on floating-point concepts. However, we all *speak* a little differently and we all *learn* a little differently, so maybe my take on this will help someone else grasp things a little better (I *do* recommend reading Mr. Kjodavix's article, though).

So what are floating-point numbers? Well, let's start with the fact that, due to the way in which we build our computers using two-state logic (let's not worry about experiments with tertiary, or three-state, logic), we have to store numbers using some form of binary representation. It's relatively easy to use binary values to represent integers, but they don't lend themselves to directly storing *real numbers*; that is, numbers that include fractional values with digits after the decimal point. In other words, it's relatively easy to use binary to represent a value like 3, but it's less easy to represent a value like 3.141592. Similarly, it's relatively easy to create logic functions to implement mathematical operations on integer values, but it's less easy to work with real numbers.

Of course, we can store numbers in *BCD* (I talked about this in my previous blog), or we could use *fixed-point* representations (I will talk about this next time), but what do we actually mean by *floating-point*? Well, it's a lot like the "scientific notation" we learned in high school (e.g., 31.41592 × 10^-1), but it's stored and manipulated using binary representations.
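To make the analogy concrete, here's a quick Python sketch (purely illustrative; nothing FPGA-specific) using `math.frexp`, which splits a float into its binary mantissa and exponent -- the binary flavor of scientific notation:

```python
import math

# math.frexp returns (m, e) such that x == m * 2**e, with 0.5 <= |m| < 1.
m, e = math.frexp(3.141592)

print(m, e)      # mantissa in [0.5, 1) and an integer exponent
print(m * 2**e)  # multiplying them back together reconstructs the value
```

So just as 3.141592 can be written as 31.41592 × 10^-1 in base 10, it can be written as (some mantissa) × 2^2 in base 2.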

So, how might we perform the mighty feat of representing a *real number* in binary? If we just assumed a binimal point (the binimal point is the same as the decimal point in base 10, only it's the binary equivalent in base 2) at some fixed position in the middle, then we'd have a fixed-point representation as illustrated below:
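To give a flavor of what that fixed-point interpretation means, here's a minimal Python sketch of a hypothetical 16-bit format with 8 integer bits and 8 fraction bits (the 8.8 split is my own illustrative choice; more on fixed-point next time):

```python
FRAC_BITS = 8  # assume an "8.8" format: value = raw_integer / 2**FRAC_BITS

def to_fixed(x):
    # Encode a real number as an integer with an implied binimal point
    return round(x * (1 << FRAC_BITS))

def from_fixed(raw):
    # Decode by shifting the implied binimal point back
    return raw / (1 << FRAC_BITS)

raw = to_fixed(3.141592)  # stored as the integer 804
print(from_fixed(raw))    # 3.140625 -- close, but some precision is lost
```

Note how the fixed position of the binimal point limits both the precision and the range, which is exactly the problem floating-point sets out to solve.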

I won't yammer on about this right now (that's for next time); suffice it to say that we would need a lot of bits to represent either a really big number or a really small one. Floating-point solves this problem by breaking the number up into three pieces: the *sign*, the *mantissa* (a.k.a. *significand* or *coefficient*), and the *exponent* (a.k.a. *characteristic* or *scale*). This gives us a fairly large dynamic range. The generic form is as follows:

n = ±x × b^y

Where:

- n = the number being represented
- ± = the sign of the number
- x = the mantissa of the number
- b = the number system base (10 in decimal; 2 in binary)
- y = the exponent (power) of the number (which can itself be positive or negative)
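To see those three pieces in an actual standard format, here's a Python sketch that pulls the sign, exponent, and mantissa fields out of an IEEE 754 double-precision value (`struct` gives us the raw 64 bits; the field widths and the bias of 1023 come straight from the standard):

```python
import struct

def decompose(x):
    # View the 64-bit IEEE 754 double as a raw unsigned integer
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign     = bits >> 63              # 1 sign bit
    exponent = (bits >> 52) & 0x7FF    # 11 exponent bits, stored with a bias of 1023
    mantissa = bits & ((1 << 52) - 1)  # 52-bit fraction field
    return sign, exponent - 1023, mantissa  # remove the bias

s, e, m = decompose(3.141592)
# Reconstruct n = ±(1 + fraction) * 2**e; for normalized numbers the
# leading 1 of the mantissa is implicit and not stored
n = (-1)**s * (1 + m / 2**52) * 2**e
print(s, e, n)  # 0 1 3.141592
```

The implicit leading 1 is one of the "tricks" alluded to below: since a normalized binary mantissa always starts with a 1, the standard doesn't bother storing it, buying an extra bit of precision for free.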

Easy, right? Well, maybe not so easy; there are some tricks involved, as well as a variety of benefits and drawbacks. So, how do we represent floating-point in our device? Well, there are plenty of different ways to do this: there's your way, there's my way, and there's some other guy's way.

For example, the exponent is usually an integer. We could extend this by allowing the exponent to have a fractional representation if we really wanted. In general, though, I don't know why we'd want to do that, as the result would just be another real number that we could easily represent anyway (a complex value would only arise if we raised a *negative* base to a fractional power, and since our base b is positive, that can't happen here; in any case, there are easier ways to represent complex numbers).
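As a quick sanity check on that reasoning (plain Python, just illustrating the arithmetic), a fractional exponent on a positive base still yields an ordinary real number, so it buys us nothing new:

```python
import math

# A fractional exponent on a positive base is just another real number...
x = 2.0 ** -1.5
print(x)  # about 0.3536, representable like any other float

# ...whereas raising a *negative* base to a fractional power is where
# complex values actually appear (Python returns a complex result here)
y = (-2.0) ** 0.5
print(y)
```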