# Doing Math in FPGAs, Part 6 (Computing Sine Values)

The task is to compute the sine of a number between 0 and 360 degrees in an FPGA, and then present the result to the outside world in a manner that is useful.

My FPGA is a Spartan 3E on an Opal Kelly XEM3005 board. I am communicating with it via USB, and my connections are virtual busses that are 16 bits wide. So, given 16 bits, I decided that it was fairly easy to represent 0-360 degrees as a 9-bit integer in my FPGA. Also, given that the sine function never exceeds a magnitude of 1.0, I decided to take my output in a fixed-point (Q,N) format of (14,16) -- that is, 1 sign bit, 1 integer bit, and 14 fractional bits:
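To make the (14,16) format concrete, here is a small sketch of encoding and decoding such values. The helper names `to_q14` and `from_q14` are my own for illustration, not from the original code:

```python
# Sketch: encoding a real value into the signed (14,16) fixed-point format
# described above -- 1 sign bit, 1 integer bit, 14 fractional bits.
# to_q14/from_q14 are illustrative helper names, not from the original code.

def to_q14(x: float) -> int:
    """Encode x as a 16-bit two's-complement Q1.14 integer."""
    raw = round(x * (1 << 14))           # scale by 2^14
    return raw & 0xFFFF                  # wrap to 16 bits (two's complement)

def from_q14(raw: int) -> float:
    """Decode a 16-bit two's-complement Q1.14 integer back to a float."""
    if raw & 0x8000:                     # sign bit set -> negative
        raw -= 1 << 16
    return raw / (1 << 14)

print(from_q14(to_q14(0.5)))    # 0.5
print(from_q14(to_q14(-1.0)))   # -1.0
```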

Now, I want to lose as little significance as possible, so I want a minimum of 14+1 fractional bits in my inner workings. My worst-case number will be the 7th-order factor, which includes an x^{7} term. If my maximum input value is ±π, then π^{7} ≈ 3020.293... To find the number of integer bits, we take the base-2 logarithm of 3020 and round up: ⌈log_{2}(3020)⌉ = 12 bits. If we were to stay in a 32-bit representation, then, we would have 1 sign bit, 12 integer bits, and 19 fractional bits.
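This bit-budget arithmetic can be double-checked in a few lines (a sanity-check sketch, not part of the FPGA design):

```python
import math

# Sketch: verifying the bit budget above. The largest intermediate value is
# pi^7 from the x^7 Taylor term, so we need ceil(log2(pi^7)) integer bits
# plus a sign bit; whatever remains of the word holds the fraction.

max_term = math.pi ** 7                   # ~3020.29
int_bits = math.ceil(math.log2(max_term)) # 12 integer bits
frac_bits_32 = 32 - 1 - int_bits          # 19 fractional bits in a 32-bit word
print(int_bits, frac_bits_32)
```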

On my first pass through this, though, I discovered that the truncation errors were more than I cared for, so I extended my registers to 48 bits. This is the iterative part of engineering that I'll only mention here.

On the surface, it seems we could just convert our input value from degrees to radians by multiplying it by a fixed-point representation of the constant π/180.
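A sketch of that "obvious" conversion, modeled in Python rather than Verilog (the constant width and the 19 fractional bits follow the 32-bit split above; the names are illustrative):

```python
import math

# Sketch of the naive degree-to-radian conversion: multiply the 9-bit degree
# input by a fixed-point constant pi/180. FRAC is the number of fractional
# bits carried in the intermediate result (19, per the 32-bit split above).

FRAC = 19
DEG2RAD = round(math.pi / 180 * (1 << FRAC))  # pi/180 as a fixed-point constant

def deg_to_rad_fixed(deg: int) -> int:
    """Convert an integer degree value (0-360) to fixed-point radians."""
    return deg * DEG2RAD                  # result still carries FRAC fractional bits

rad = deg_to_rad_fixed(180) / (1 << FRAC)
print(rad)                                # ~3.1417 (vs. pi = 3.14159...)
```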

Of course, this doesn't work as well as we'd like... Now, remember the bit I mentioned about boundary conditions and the drawbacks of using only a 7th-order polynomial? Yep, the 7th-order Taylor approximation of sin(x) is only useful from -π to +π, and the closer we get to ±π, the worse our error becomes. You can see this in the following graph, where I used a spreadsheet to compute and plot the values of sin(x) as computed by the spreadsheet and as computed by our 7th-order Taylor series (the red plot is the simulated results of the Verilog code).
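You can reproduce the shape of that error curve with a quick float model of the 7th-order polynomial (a sketch for illustration; the fixed-point version has additional truncation error on top of this):

```python
import math

# Sketch: the 7th-order Taylor polynomial for sin(x), and how its error
# grows from essentially nothing near zero to noticeable near +/-pi.

def sin_taylor7(x: float) -> float:
    return x - x**3/6 + x**5/120 - x**7/5040

err_small = abs(sin_taylor7(0.5) - math.sin(0.5))
err_edge = abs(sin_taylor7(math.pi) - math.sin(math.pi))
print(err_small)    # tiny near zero
print(err_edge)     # ~0.075 at the endpoint
```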

I went ahead and "compiled" my code and ran it on my FPGA, and I obtained amazingly similar results as illustrated below.

As we see, the reality looks incredibly similar to the simulation (although a bit more symmetric). So how can we fix this without moving to the 9th-order polynomial? Well, let's take a look at our Sine wave and Taylor approximation again. Here's a picture off my whiteboard:

The first thing to remember is that our sines and cosines describe our distance from the axes based on angular position in a unit circle (lower right drawing). This gives us all sorts of symmetries to work with. In the sine wave on the board, you can see multiple colors. Black is an "ideal" sine wave, and red is our Taylor approximation. The green arrows show how we can take advantage of symmetry in order to stay away from the endpoints:
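The folding trick in the green arrows can be sketched as follows (a Python model of the idea, not the actual module changes; `fold_degrees` is an illustrative name). Any input angle is mapped to an equivalent angle in [-90°, +90°] with the same sine, using sin(180° − x) = sin(x) and sin(x − 360°) = sin(x):

```python
# Sketch of the symmetry trick from the whiteboard: fold any angle in
# 0-360 degrees into the range -90..+90, where the 7th-order polynomial
# is most accurate.

def fold_degrees(deg: int) -> int:
    """Map deg (0-360) to an equivalent angle in [-90, 90] with the same sine."""
    deg %= 360
    if deg <= 90:
        return deg                       # first quadrant: use as-is
    if deg <= 270:
        return 180 - deg                 # sin(180 - x) = sin(x)
    return deg - 360                     # sin(x - 360) = sin(x)

print(fold_degrees(135))    # 45
print(fold_degrees(200))    # -20
print(fold_degrees(300))    # -60
```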

This leads to the following changes to our module:

...which results in the following simulation:

...which looks pretty darned good. The maximum absolute error is 0.0175, and we can compute the value at any point we want (not just on integer degrees). Compiling the project and throwing the bit file to the software interface gives us similar results as seen below:
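Putting the folding and the polynomial together in a float model shows how much of the quoted 0.0175 error budget comes from the fixed-point truncation rather than the polynomial itself (this is a sketch in floating point, so its error is far smaller than the hardware's):

```python
import math

# Sketch: quadrant folding plus the 7th-order polynomial, checked over all
# integer degrees. A float model, so the worst-case error here reflects only
# the polynomial truncation, not the fixed-point quantization in the FPGA.

def sin_approx(deg: int) -> float:
    d = deg % 360
    if d > 270:
        d -= 360                          # sin(x - 360) = sin(x)
    elif d > 90:
        d = 180 - d                       # sin(180 - x) = sin(x)
    x = math.radians(d)
    return x - x**3/6 + x**5/120 - x**7/5040

max_err = max(abs(sin_approx(d) - math.sin(math.radians(d))) for d in range(360))
print(max_err)                            # well under 1e-3
```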

All the files are available for download here as a ZIP file (this includes the Excel spreadsheets), and here's another boring video:

As always, please post any questions and comments below.

**Related posts:**


shamsi_ismailov 8/17/2017 4:18:57 PM

Thank you for the useful post.

Is it possible to share the Verilog source code?

Sincerely,

Shamsi


Kevin Neilson 3/16/2014 1:18:50 AM

So if you want to know the arctan between LUT values N and N+1, you take the Nth value from the first LUT, add 1/2 × the Nth value in the second LUT, and (1/2)^2 / 2! × the Nth value in the third LUT. (Not taking into account Horner's method.) There are probably a lot of other techniques for arctan that smart people have come up with.
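[Editor's sketch of the scheme described above, in Python for clarity; the LUT spacing and names are assumptions, not Kevin's implementation. Three tables hold arctan and its first two derivatives, and a midpoint value is estimated with a two-term Taylor step.]

```python
import math

# Sketch: arctan between LUT entries via derivative LUTs, per the comment
# above. arctan(x0 + d) ~ f(x0) + d*f'(x0) + (d^2/2)*f''(x0), with d set to
# half the LUT spacing. STEP and the table size are illustrative assumptions.

STEP = 1.0 / 16                            # assumed LUT spacing over [0, 1]
xs = [i * STEP for i in range(17)]
lut_f  = [math.atan(x) for x in xs]                 # arctan values
lut_d1 = [1 / (1 + x*x) for x in xs]                # first derivative
lut_d2 = [-2*x / (1 + x*x)**2 for x in xs]          # second derivative

def atan_between(n: int) -> float:
    """Estimate arctan halfway between LUT entries n and n+1."""
    d = STEP / 2
    return lut_f[n] + d*lut_d1[n] + (d*d/2)*lut_d2[n]

mid = xs[4] + STEP/2
print(abs(atan_between(4) - math.atan(mid)))        # small residual error
```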

If you're doing arctan on a complex IQ value in order to find its phase, for demodulation purposes, you can take the absolute value to map it into the first quadrant, and then use one 2-dimensional lookup table to find phase angle and another for magnitude. (There are also simple algebraic functions which approximate magnitude pretty well. That might be another interesting topic for the next article in this series.) By 2-dimensional, I mean that half of the address bits are used for I and half for Q and the output of the LUT gives you arctan(Q/I).

-Kevin Neilson


betajet 3/15/2014 7:57:12 PM

Do you have a suggestion for arctan?


TanjB 3/15/2014 7:38:22 PM

Given the small word length of your inputs you can afford to run tests which exhaustively check every value for glitches like that.


Kevin Neilson 3/14/2014 12:02:59 AM

CORDIC is OK if you want a lot of precision and you have a lot of time to blow. It's not as valuable as it was in the pre-DSP48 days.

-Kevin Neilson


tom-ii 3/13/2014 6:10:43 PM

Thanks!


MikePDX 3/13/2014 6:09:09 PM

http://www.andraka.com/cordic.htm

Very clearly explained, you'll understand CORDIC after reading Ray.


tom-ii 3/13/2014 5:43:54 PM

That's the 1st I've heard of "BAM." Thanks for the info!

As for CORDIC, I've still not quite wrapped my head all the way around it, yet, but I see it takes some pre-computed lookup tables for it to work. Which is okay, I suppose, but the purpose of this set of exercises was to get some decent fixed-point libraries put together.

The other thing about this silly project was that it got the computation done in a single (albeit slow) clock tick.

FWIW, and as you point out, YMMV ;)


betajet 3/13/2014 5:19:28 PM

For computing sine and cosine (and arctan, too), I'd suggest CORDIC. CORDIC only uses shifts and ADD/SUB. I'd also stick to fixed-point arithmetic, since the values of sin and cos only go between -1 and +1.

As far as representing angles, I like Binary Angular Measurement (BAM). You represent an angle as a fixed-point binary fraction of a circle. You use every bit of your number representation, and BAM automatically calculates angles modulo 360 degrees due to the modulo 2^N nature of binary arithmetic.

BAM angles have the nifty property that unsigned and two's complement arithmetic are equivalent. You can think of angles as being from 0 to just less than 360 degrees using unsigned arithmetic, or from -180 to just less than 180 degrees using 2's complement arithmetic.
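[Editor's sketch of BAM with a 16-bit word, for illustration; the word width and helper names are assumptions, not from the comment.]

```python
# Sketch of Binary Angular Measurement: a full circle is 2^16 counts, so
# angle addition wraps modulo 360 degrees for free, and the same bit pattern
# reads as 0..360 (unsigned) or -180..+180 (two's complement).

BITS = 16
FULL = 1 << BITS

def deg_to_bam(deg: float) -> int:
    return round(deg / 360 * FULL) % FULL

def bam_to_deg_signed(bam: int) -> float:
    if bam >= FULL // 2:                 # interpret as two's complement
        bam -= FULL
    return bam * 360 / FULL

a = deg_to_bam(350)
b = deg_to_bam(20)
s = (a + b) % FULL                       # 350 + 20 wraps to 10 degrees
print(bam_to_deg_signed(s))              # ~10.0
```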

JMO/YMMV