# Doing Math in FPGAs, Part 6 (Computing Sine Values)

The task is to compute the sine of a number between 0 and 360 degrees in an FPGA, and then present the result to the outside world in a manner that is useful.

It might seem, perhaps, that I am a habitual liar. Twice, now, I've told you I was done with my ramblings about math-related things. Yet here I am, getting ready to ramble on about math-related things. This is surely the last time, I promise!

So, I had a discussion with Max Maxfield some time ago, and it went along the lines of: "I'm doing this silly project..." And Max said something to the effect of: "Wow, that *is* silly! But it would be great fodder for a blog wherein you ramble aimlessly about engineering decisions on how and why you did things the way you did!"

Well... here we are. I've mentioned this project in my past blogs on math, so here's what I was doing. The project was to compute the sine of a number between 0 and 360 degrees in my FPGA, and then present it to the outside world in a manner that was useful.

This is all well and good, but how in the heck do we compute the sine of a number using digital logic? This could lead to a great treatise on polynomial expansions of functions; but while I am greatly expanding these days, I am not party to any treaties (that I know of). So, instead, I'll gloss over all that and just mention Mr. Taylor. I think I'll go with a Taylor expansion rather than a CORDIC algorithm with a look-up table (although I may revisit this later, as one of my upcoming projects will likely need look-up tables). There's also a very interesting paper on implementing transcendental functions in Verilog, but that work is primarily for use in test benches. I've tucked it into my hip pocket; maybe it will prove useful at some later date.

Before we go any further, it is absolutely imperative to understand that all of these methods are approximations. It's also important to understand the limitations of your "simulation," the boundary conditions, and what can go wrong if you aren't prepared for the differences from "reality." With that thought, I took a quick look at Wolfram's site, and it looks to me like a 7th-order polynomial is the minimum sufficient to represent the sine function over the full range (note that the expansion is only good between -π and +π). In fact, a 9th-order polynomial would be a bit better; we'll see the effects of using only a 7th-order polynomial later in this blog.
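To put rough numbers on that claim, here's a quick sanity check in plain Python (a model only, not FPGA code): it measures the worst-case error of the truncated series over [-π, +π] for both the 7th- and 9th-order cases.

```python
import math

def taylor_sin(x, order):
    """Truncated Maclaurin series for sin(x), keeping terms up to x**order."""
    total, sign = 0.0, 1.0
    for n in range(1, order + 1, 2):
        total += sign * x**n / math.factorial(n)
        sign = -sign
    return total

# Worst-case error over [-pi, +pi], sampled on a grid (worst case is at the endpoints)
xs = [-math.pi + i * (2 * math.pi / 1000) for i in range(1001)]
err7 = max(abs(taylor_sin(x, 7) - math.sin(x)) for x in xs)
err9 = max(abs(taylor_sin(x, 9) - math.sin(x)) for x in xs)
print(f"7th-order max error: {err7:.4f}")   # roughly 0.075, at +/-pi
print(f"9th-order max error: {err9:.4f}")   # roughly 0.007
```

So the 7th-order version is off by about 0.075 at the extremes, while going to 9th order buys roughly a 10x improvement.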

So, you've forgotten what a Taylor expansion is? See the image below, and you'll see why I thought this was an excellent exercise for learning the use of the math functions in my library. Recall, though, that I wanted to avoid the use of signaling and multiple clocks so I could just focus on gluing everything together.
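In case the image doesn't come through, the expansion in question is the standard Maclaurin series for sine:

```latex
\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} - \cdots
```

Truncating after the $x^7$ term gives the 7th-order approximation discussed above.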

Unfortunately, the Taylor expansion for the sine function requires a lot of dividing, and we've already established that division is hard in FPGAs. So how do we work around this? Well, in our sine generator we *could* solve the function using division, but that would take multiple clock ticks plus signaling to know when each division is complete. Instead, we'll pre-compute the coefficients and perform multiplications, because my FPGA (a Xilinx Spartan-3E) supports fast multiplication using combinational logic.
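Here's a minimal sketch of that idea in plain Python, modeling fixed-point hardware. The Q2.16 format, the `fix_mul` helper, and the coefficient names are my assumptions for illustration, not the blog's actual library; the point is that the reciprocal factorials are baked in as constants, so runtime needs only multiplies and adds (arranged via Horner's method).

```python
import math

FRAC = 16                      # assumed Q-format: 16 fractional bits
ONE = 1 << FRAC

def to_fix(v): return int(round(v * ONE))
def fix_mul(a, b): return (a * b) >> FRAC   # one combinational multiplier each

# Pre-computed coefficients 1/1!, -1/3!, 1/5!, -1/7! -- no runtime division
C = [to_fix(1.0), to_fix(-1/6), to_fix(1/120), to_fix(-1/5040)]

def fix_sin(x_fix):
    """sin(x) ~= x*(C0 + x^2*(C1 + x^2*(C2 + x^2*C3))), all multiplies and adds."""
    x2 = fix_mul(x_fix, x_fix)
    acc = C[3]
    for c in (C[2], C[1], C[0]):
        acc = c + fix_mul(acc, x2)
    return fix_mul(x_fix, acc)

x = to_fix(math.pi / 6)        # 30 degrees
print(fix_sin(x) / ONE)        # close to 0.5
```

In hardware, each `fix_mul` corresponds to one multiplier in the combinational cloud; chaining four of them is exactly where those "several layers of combinational logic" come from.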

Of course, we'll pay for this in terms of resource utilization and likely clock speed (there'll be several layers of combinational logic), but then... I dunno; maybe, for some applications, it's worth it? And, of course, we could also pipeline the whole thing, but for now I think I'll stick with one giant "combinational cloud."

Now, I realize this will make a humongous (and slow) cloud, but what I want to do (for now) is simply understand the math without distractions from registering, pipelining, etc. So, for now, please ignore the white elephant standing on the man behind the green curtain, and let's not get distracted by how silly this particular implementation might be (ZOMG, 298 levels of combinational logic!), okay?

shamsi_ismailov 8/17/2017 4:18:57 PM

Thank you for the useful post.

Is it possible to share the Verilog source code?

Sincerely,

Shamsi

Kevin Neilson 3/16/2014 1:18:50 AM

So if you want to know the arctan between LUT values N and N+1, you take the Nth value from the first LUT, add 1/2 times the Nth value in the second LUT, and add (1/2)^2 / 2! times the Nth value in the third LUT. (Not taking into account Horner's method.) There are probably a lot of other techniques for arctan that smart people have come up with.
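A plain-Python model of the scheme Kevin describes (the LUT depth and the restriction to x in [0, 1) are my assumptions): a coarse arctan LUT plus two derivative LUTs supply a two-term Taylor correction for points between entries.

```python
import math

N = 256                                   # assumed LUT depth for x in [0, 1)
STEP = 1.0 / N

# Three LUTs: arctan and its first two derivatives, sampled at the LUT points
lut0 = [math.atan(n * STEP) for n in range(N)]
lut1 = [1.0 / (1.0 + (n * STEP) ** 2) for n in range(N)]                     # d/dx atan
lut2 = [-2.0 * (n * STEP) / (1.0 + (n * STEP) ** 2) ** 2 for n in range(N)]  # d2/dx2

def atan_lut(x):
    """arctan(x) for x in [0,1): coarse LUT entry plus a 2-term Taylor correction."""
    n = int(x / STEP)
    d = x - n * STEP                      # distance past the Nth LUT point
    return lut0[n] + d * lut1[n] + (d * d / 2.0) * lut2[n]

x = 0.3337
print(atan_lut(x), math.atan(x))          # the two agree to several decimals
```

With a 256-entry table the correction term `d` is at most 1/256, so the neglected third-order term is down around 1e-8.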

If you're doing arctan on a complex IQ value in order to find its phase, for demodulation purposes, you can take the absolute value to map it into the first quadrant, and then use one 2-dimensional lookup table to find phase angle and another for magnitude. (There are also simple algebraic functions which approximate magnitude pretty well. That might be another interesting topic for the next article in this series.) By 2-dimensional, I mean that half of the address bits are used for I and half for Q and the output of the LUT gives you arctan(Q/I).
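A sketch of that 2-D LUT idea in plain Python (the 6-bit-per-axis sizing and the bin-center sampling are my assumptions): upper address bits carry quantized I, lower bits carry quantized Q, and the table returns the first-quadrant phase directly.

```python
import math

BITS = 6                                   # assumed: 6 address bits each for I and Q
SIZE = 1 << BITS

# 2-D LUT: upper address bits = quantized I, lower bits = quantized Q (first quadrant)
phase_lut = [0.0] * (SIZE * SIZE)
for i in range(SIZE):
    for q in range(SIZE):
        # Sample at bin centers to halve the worst-case quantization error
        phase_lut[(i << BITS) | q] = math.atan2(q + 0.5, i + 0.5)

def iq_phase(i_val, q_val):
    """First-quadrant phase from the 2-D LUT; caller maps |I|,|Q| into [0, SIZE)."""
    return phase_lut[(i_val << BITS) | q_val]

print(iq_phase(40, 40))                    # about pi/4 when I == Q
```

A second table built the same way with `math.hypot` would give the magnitude Kevin mentions.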

-Kevin Neilson

betajet 3/15/2014 7:57:12 PM

Do you have a suggestion for arctan?

TanjB 3/15/2014 7:38:22 PM

Given the small word length of your inputs you can afford to run tests which exhaustively check every value for glitches like that.

Kevin Neilson 3/14/2014 12:02:59 AM

CORDIC is OK if you want a lot of precision and you have a lot of time to blow. It's not as valuable as it was in the pre-DSP48 days.

-Kevin Neilson

tom-ii 3/13/2014 6:10:43 PM

Thanks!

MikePDX 3/13/2014 6:09:09 PM

http://www.andraka.com/cordic.htm

Very clearly explained; you'll understand CORDIC after reading Ray's page.
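For a taste of what that page covers, here's a minimal rotation-mode CORDIC sketch in plain Python (the iteration count is my assumption, and the float multiplies by 2**-i stand in for what would be pure shifts in hardware):

```python
import math

ITERS = 16                                     # assumed iteration count
# Pre-computed arctan(2^-i) table and the cumulative CORDIC gain
ANGLES = [math.atan(2.0 ** -i) for i in range(ITERS)]
GAIN = 1.0
for i in range(ITERS):
    GAIN *= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Rotation-mode CORDIC for theta in [-pi/2, pi/2]: only shifts and add/sub."""
    x, y, z = 1.0 / GAIN, 0.0, theta           # pre-scale x to cancel the gain
    for i in range(ITERS):
        d = 1.0 if z >= 0 else -1.0            # rotate toward zero residual angle
        x, y, z = (x - d * y * 2.0 ** -i,      # 2**-i factors are shifts in hardware
                   y + d * x * 2.0 ** -i,
                   z - d * ANGLES[i])
    return y, x                                # (sin, cos)

s, c = cordic_sin_cos(math.pi / 6)
print(s, c)                                    # close to 0.5 and 0.866
```

Each pass needs only the small `ANGLES` table, two shift-adds, and one table subtract, which is exactly the shifts-and-ADD/SUB property betajet mentions below.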

tom-ii 3/13/2014 5:43:54 PM

That's the first I've heard of "BAM." Thanks for the info!

As for CORDIC, I've still not quite wrapped my head all the way around it, yet, but I see it takes some pre-computed lookup tables for it to work. Which is okay, I suppose, but the purpose of this set of exercises was to get some decent fixed-point libraries put together.

The other thing about this silly project was that it got the computation done in a single (albeit slow) clock tick.

FWIW, and as you point out, YMMV ;)

betajet 3/13/2014 5:19:28 PM

arctan. CORDIC only uses shifts and ADD/SUB. I'd also stick to fixed-point arithmetic, since the values of sin and cos only go between -1 and +1.

As far as representing angles, I like Binary Angular Measurement (BAM). You represent an angle as a fixed-point binary fraction of a circle. You use every bit of your number representation, and BAM automatically calculates angles modulo 360 degrees due to the modulo-2^N nature of binary arithmetic.

BAM angles have the nifty property that unsigned and two's complement arithmetic are equivalent. You can think of angles as being from 0 to just less than 360 degrees using unsigned arithmetic, or from -180 to just less than 180 degrees using 2's complement arithmetic.
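A tiny sketch of both properties in plain Python (the 16-bit width and the helper names are my choices for illustration):

```python
BITS = 16
MASK = (1 << BITS) - 1                 # all arithmetic wraps modulo 2**BITS

def deg_to_bam(deg):
    """Map degrees onto a 16-bit binary fraction of a full circle."""
    return round(deg / 360.0 * (1 << BITS)) & MASK

def bam_add(a, b):
    """Adding angles wraps modulo 360 degrees for free."""
    return (a + b) & MASK

print(bam_add(deg_to_bam(270), deg_to_bam(180)) == deg_to_bam(90))  # True: wraps past 360
print(deg_to_bam(-90) == deg_to_bam(270))  # True: unsigned and 2's-complement views agree
```

No range checks or conditional subtracts needed; the wraparound is just what an N-bit adder does anyway.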

JMO/YMMV