# Doing Math in FPGAs, Part 6 (Computing Sine Values)

The task is to compute the sine of a number between 0 and 360 degrees in an FPGA, and then present the result to the outside world in a manner that is useful.

My FPGA is a Spartan 3E on an Opal Kelly XEM3005 board. I am communicating with it via USB, and my connections are virtual busses that are 16 bits wide. So, given 16 bits, I decided that it was fairly easy to represent 0-360 degrees as a 9-bit integer in my FPGA. Also, given that the sine function never exceeds a magnitude of 1.0, I decided to take my output in a 16-bit fixed-point format with 14 fractional bits (Q1.14) -- that is, 1 sign bit, 1 integer bit, and 14 fractional bits.
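As a sketch of this Q1.14 encoding (the function names here are mine, not from the original Verilog), the packing and unpacking can be modeled in a few lines of Python:

```python
def to_q1_14(x):
    """Encode a real value as a 16-bit two's-complement Q1.14 word
    (1 sign bit, 1 integer bit, 14 fractional bits)."""
    raw = int(round(x * (1 << 14)))   # scale by 2^14 and round
    return raw & 0xFFFF               # wrap to 16 bits (two's complement)

def from_q1_14(word):
    """Decode a 16-bit Q1.14 word back to a float."""
    if word & 0x8000:                 # sign bit set -> negative value
        word -= 1 << 16
    return word / (1 << 14)
```

For example, `to_q1_14(0.5)` gives `0x2000`, and any value in [-1, 1] round-trips to within one LSB (2^-14 ≈ 0.000061), which is why 14 fractional bits are plenty for a sine output.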

Now, I want to lose as little significance as possible, so I want a minimum of 14+1 fractional bits in my inner workings. My worst-case number will be the 7th-order term, which includes an x^{7} factor. If my maximum input value is ±π, then π^{7} ≈ 3020.293... To find the number of integer bits, we take the base-2 logarithm of 3020 and round up: ⌈log_{2}(3020)⌉ = 12 bits. If we were to stay in a 32-bit representation, then, we would have 1 sign bit, 12 integer bits, and 19 fractional bits.
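The bit-width arithmetic above is easy to double-check; this little Python snippet just reproduces the numbers in the paragraph:

```python
import math

x_max = math.pi                          # largest input magnitude, in radians
worst = x_max ** 7                       # magnitude of the x^7 term before division
int_bits = math.ceil(math.log2(worst))   # integer bits needed to hold it

# worst is about 3020.29, so 12 integer bits (which cover up to 4095) suffice
```

With 1 sign bit and 12 integer bits spoken for, a 32-bit register leaves 32 - 1 - 12 = 19 fractional bits.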

On my first pass through this, though, I discovered that the truncation errors were more than I cared for, so I extended my registers to 48 bits. This is the iterative part of engineering that I'll only mention here.

On the surface, it seems we could simply multiply our input value (in degrees) by a constant to convert it to radians.
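The original snippet isn't reproduced here, but a plausible model of that conversion (the constant name and the use of 19 fractional bits are my assumptions, based on the 32-bit format above) looks like this:

```python
import math

FRAC = 19  # fractional bits in the 32-bit working format (1 sign, 12 int, 19 frac)

# pi/180 rounded to a fixed-point constant with 19 fractional bits
DEG2RAD = int(round(math.pi / 180.0 * (1 << FRAC)))

def degrees_to_radians_fixed(deg):
    """Convert an integer degree value to fixed-point radians
    by multiplying by a precomputed constant."""
    return deg * DEG2RAD  # result carries FRAC fractional bits
```

For example, `degrees_to_radians_fixed(90) / (1 << FRAC)` comes out within about 6e-5 of π/2, so the constant-multiply itself is not the problem.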

Of course, this doesn't work as well as we'd like. Remember the bit I mentioned about boundary conditions and the drawbacks of using only a 7th-order polynomial? Yep: the 7th-order Taylor polynomial for sin(x) is only accurate from roughly -π to +π, and the closer we get to ±π, the worse our error becomes. You can see this in the following graph, where I used a spreadsheet to compute and plot the values of sin(x) as computed by the spreadsheet and as computed by our 7th-order Taylor polynomial (the red plot is the simulated result of the Verilog code).
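To see the blow-up numerically, here is a small floating-point stand-in for the fixed-point Verilog:

```python
import math

def sin_taylor7(x):
    """7th-order Taylor polynomial for sin(x) about zero."""
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

# Near zero the approximation is essentially exact...
err_small = abs(sin_taylor7(0.5) - math.sin(0.5))

# ...but at +/-pi the truncation error is several percent of full scale.
err_pi = abs(sin_taylor7(math.pi) - math.sin(math.pi))
```

At x = 0.5 the error is below 10^-8, while at x = π it is roughly 0.075, which matches the visible droop at the ends of the plotted curve.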

I went ahead and "compiled" my code and ran it on my FPGA, and I obtained amazingly similar results as illustrated below.

As we see, reality looks remarkably similar to the simulation (although a bit more symmetric). So how can we fix this without moving to a 9th-order polynomial? Well, let's take a look at our sine wave and Taylor approximation again. Here's a picture off my whiteboard:

The first thing to remember is that our sines and cosines describe our distance from the axes based on angular position in a unit circle (lower right drawing). This gives us all sorts of symmetries to work with. In the sine wave on the board, you can see multiple colors. Black is an "ideal" sine wave, and red is our Taylor approximation. The green arrows show how we can take advantage of symmetry in order to stay away from the endpoints:
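The symmetry trick can be modeled like this (floating-point Python rather than the actual Verilog, and the folding logic is my reconstruction of the idea): any input angle is folded into [-90°, 90°] using sin(x) = sin(180° - x) and the 360° periodicity, so the polynomial never sees an argument larger than π/2.

```python
import math

def sin_taylor7(x):
    """7th-order Taylor polynomial for sin(x) about zero."""
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

def sin_folded(deg):
    """Evaluate sine via the Taylor polynomial, first folding the
    angle (0-359 degrees) into [-90, 90] by symmetry."""
    if 90 < deg <= 270:
        deg = 180 - deg      # sin(x) = sin(180 - x), mirrors quadrants II/III
    elif deg > 270:
        deg = deg - 360      # sine is periodic with period 360
    return sin_taylor7(math.radians(deg))
```

Restricted to |x| ≤ π/2, the 7th-order polynomial's truncation error stays below about 2 × 10^-4 everywhere, so the remaining error in the FPGA comes mostly from fixed-point quantization rather than the polynomial itself.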

This leads to the following changes to our module:

...which results in the following simulation:

...which looks pretty darned good. The maximum absolute error is 0.0175, and we can compute the value at any point we want (not just on integer degrees). Compiling the project and throwing the bit file to the software interface gives us similar results as seen below:

All the files are available for download here as a ZIP file (this includes the Excel spreadsheets), and here's another boring video:

As always, please post any questions and comments below.
