
You can do arctan with a modified version of CORDIC. You can also do something similar to the method I described below, but using the Farrow technique. It's not as easy to get the derivative of arctan as it is for sine, so you use a blockRAM LUT for arctan, another for the derivative of arctan, and possibly another for the second derivative. Then you can do a Taylor expansion to interpolate between the LUT values in the first LUT by using the derivatives in the other LUTs. Use Horner's Method again to save multiplies. The Farrow method works for arbitrary functions.
So if you want to know the arctan halfway between LUT values N and N+1, you take the Nth value from the first LUT, add 1/2 times the Nth value in the second LUT, and then (1/2)^2 / 2! times the Nth value in the third LUT. (Not taking Horner's Method into account.) There are probably a lot of other techniques for arctan that smart people have come up with.
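The scheme above can be sketched in a few lines. This is a minimal floating-point model, not an HDL implementation; the table size (256 entries over [0, 1)) and the use of a fractional remainder in place of the lower-order address bits are my assumptions for illustration:

```python
import math

# Hypothetical parameters: 256-entry LUTs covering x in [0, 1).
N = 256
STEP = 1.0 / N
atan_lut = [math.atan(i * STEP) for i in range(N)]                # f(x)
datan_lut = [1.0 / (1.0 + (i * STEP) ** 2) for i in range(N)]     # f'(x) = 1/(1+x^2)
d2atan_lut = [-2 * (i * STEP) / (1.0 + (i * STEP) ** 2) ** 2
              for i in range(N)]                                  # f''(x)

def atan_farrow(x):
    """Second-order Taylor interpolation between LUT entries, in Horner form."""
    idx = int(x / STEP)      # upper address bits select the LUT entry
    d = x - idx * STEP       # remainder (the lower-order bits in hardware)
    # Horner: f(a) + d*(f'(a) + d*(f''(a)/2)) -- two multiplies instead of three
    return atan_lut[idx] + d * (datan_lut[idx] + d * (d2atan_lut[idx] / 2.0))
```

In hardware each of the three lists would be its own blockRAM, all read at the same address in the same cycle, with the Horner chain mapped onto DSP slices.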
If you're doing arctan on a complex IQ value in order to find its phase, for demodulation purposes, you can take the absolute value to map it into the first quadrant, and then use one 2-dimensional lookup table to find the phase angle and another for the magnitude. (There are also simple algebraic functions which approximate magnitude pretty well. That might be another interesting topic for the next article in this series.) By 2-dimensional, I mean that half of the address bits are used for I and half for Q, and the output of the LUT gives you arctan(Q/I).
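A toy model of the quadrant-folding idea, assuming (my choice) 4 address bits each for |I| and |Q| and integer sample values that index the table directly; a real design would quantize wider samples down to the address width:

```python
import math

# Hypothetical 2-D LUT: 4 address bits for |I| and 4 for |Q| (8-bit address total).
BITS = 4
SIZE = 1 << BITS
# First-quadrant phase table: entry [i][q] holds arctan(q/i).
phase_lut = [[math.atan2(q, i) for q in range(SIZE)] for i in range(SIZE)]

def phase(i, q):
    """Fold into the first quadrant via absolute values, look up, then unfold."""
    angle = phase_lut[abs(i)][abs(q)]
    if i < 0:                    # quadrants II/III: reflect about the Q axis
        angle = math.pi - angle
    if q < 0:                    # lower half-plane: negate
        angle = -angle
    return angle
```

The quadrant bookkeeping is just the signs of I and Q, so only one quadrant's worth of table is ever stored.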
Kevin Neilson
Re: Here's how it's done
betajet
3/15/2014 7:57:12 PM
anon: That's a very nice way to do sine. Thank you for posting it.
Do you have a suggestion for arctan?
Don't forget to test
TanjB
3/15/2014 7:38:22 PM
Beware of maintaining mathematical properties such as monotonicity. A classic problem with rough estimates from tables is that the interpolations at the joints between sections can be discontinuous, or even step slightly backwards. Even when you think you have the right formula, all it takes is for one joint to falsely round down and the next to falsely round up, which can happen with a fraction of an LSB of inaccuracy. Since an implementation can end up being reused somewhere else, you probably want to be sure it is suitable for unexpected uses.
Given the small word length of your inputs you can afford to run tests which exhaustively check every value for glitches like that.
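An exhaustive check like that is a short loop. The sketch below assumes a 12-bit input word and uses a quantized first-quadrant sine as a stand-in for the implementation under test; in practice you would call your actual fixed-point routine in its place:

```python
import math

# Hypothetical word length: exhaustively checkable in well under a second.
WIDTH = 12
SCALE = (1 << WIDTH) - 1

def sin_approx(n):
    """Stand-in for the implementation under test: first quadrant of sine,
    quantized to WIDTH-bit fixed point."""
    return round(math.sin((n / (1 << WIDTH)) * (math.pi / 2)) * SCALE)

def check_monotonic():
    """Walk every input value and flag any backwards step at the joints."""
    prev = sin_approx(0)
    for n in range(1, 1 << WIDTH):
        cur = sin_approx(n)
        if cur < prev:   # a backwards step, even by one LSB, is a failure
            return False
        prev = cur
    return True
```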
The pure Taylor works, but it just eats up too much logic. Here's how it's normally done: You start with a lookup table in blockRAM. There is plenty in a Xilinx part. You then do a Taylor to interpolate between the LUT values. A firstorder (linear interp) or secondorder is usually good. You need the derivative of sine at the LUT points. The derivative is cosine, which you get from a different address in the same LUT. Since the blockRAM is dualport, you read it out on the second port on the same cycle. Now you multiply this by the lowerorder address bits (left over after using the upper bits for the LUT address) and bitshift and add to the LUT value. Booyah, 1storder Taylor. If you want to do higher orders the derivatives are easy (either sin or cos from the LUT or their opposites) and you use Horner's Method to reduce the number of multiplies necessary. This method is also superfast when you use blockRAMs and DSP48s. This is usually what you'll get if you use a core.
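Here is a behavioral model of that dual-port trick. The widths (10 LUT address bits out of a 20-bit phase word) are assumptions for illustration, and the two list reads model the two ports of the same blockRAM:

```python
import math

# Hypothetical widths: 10 upper bits address the LUT, 10 lower bits interpolate.
LUT_BITS = 10
LUT_SIZE = 1 << LUT_BITS
PHASE_BITS = 20                     # phase word = binary fraction of a circle
FRAC_BITS = PHASE_BITS - LUT_BITS   # lower-order bits left for interpolation

# One full cycle of sine; cosine is the same table read 90 degrees ahead.
sin_lut = [math.sin(2 * math.pi * i / LUT_SIZE) for i in range(LUT_SIZE)]

def sine(phase):
    """First-order Taylor: sin(a + d) ~= sin(a) + d*cos(a)."""
    addr = phase >> FRAC_BITS                   # upper bits: LUT address
    frac = phase & ((1 << FRAC_BITS) - 1)       # lower bits left over
    d = 2 * math.pi * frac / (1 << PHASE_BITS)  # remainder, in radians
    s = sin_lut[addr]                                # port A read: sin(a)
    c = sin_lut[(addr + LUT_SIZE // 4) % LUT_SIZE]   # port B read: cos(a)
    return s + d * c                            # the multiply-and-add step
```

In hardware the multiply by d and the add map onto a single DSP48, and the modulo-2^N address wrap comes for free from the address width.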
CORDIC is OK if you want a lot of precision and you have a lot of time to blow. It's not as valuable as it was in the pre-DSP48 days.
Kevin Neilson
Re: I'd definitely stick to CORDIC
tomii
3/13/2014 6:10:43 PM
Re: I'd definitely stick to CORDIC
MButts
3/13/2014 6:09:09 PM
Check out Ray Andraka's paper about CORDIC on FPGAs at his site:
http://www.andraka.com/cordic.htm
Very clearly explained, you'll understand CORDIC after reading Ray.
Re: I'd definitely stick to CORDIC
tomii
3/13/2014 5:43:54 PM
@betajet:
That's the 1st I've heard of "BAM." Thanks for the info!
As for CORDIC, I've still not quite wrapped my head all the way around it yet, but I see it takes some precomputed lookup tables to work. Which is okay, I suppose, but the purpose of this set of exercises was to get some decent fixed-point libraries put together.
The other thing about this silly project was that it got the computation done in a single (albeit slow) clock tick.
FWIW, and as you point out, YMMV ;)
I'd definitely stick to CORDIC
betajet
3/13/2014 5:19:28 PM
IMO, CORDIC is a great way to compute sine and cosine, and also arctan. CORDIC only uses shifts and ADD/SUB. I'd also stick to fixed-point arithmetic, since the values of sin and cos only go between -1 and +1.
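A minimal rotation-mode CORDIC sketch, showing that each iteration is just a shift (the 2**-i factor) and add/subtract plus one table lookup. The iteration count of 16 is my choice; the arctan table and gain constant are the standard precomputed values the algorithm requires:

```python
import math

ITERS = 16
# Precomputed arctan table: the angle rotated at each shift stage.
ANGLES = [math.atan(2.0 ** -i) for i in range(ITERS)]
# CORDIC gain compensation: product of cos(atan(2^-i)), about 0.60725.
GAIN = 1.0
for a in ANGLES:
    GAIN *= math.cos(a)

def cordic_sincos(theta):
    """Rotation-mode CORDIC; valid for |theta| within about +/-1.74 rad."""
    x, y, z = GAIN, 0.0, theta   # pre-scale x so the output has unit magnitude
    for i in range(ITERS):
        if z >= 0:               # rotate toward zero residual angle
            x, y, z = x - y * 2.0 ** -i, y + x * 2.0 ** -i, z - ANGLES[i]
        else:
            x, y, z = x + y * 2.0 ** -i, y - x * 2.0 ** -i, z + ANGLES[i]
    return x, y                  # (cos(theta), sin(theta))
```

In fixed-point hardware the `* 2.0 ** -i` terms become plain right-shifts, so each stage is only adders, which is the whole appeal.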
As for representing angles, I like Binary Angular Measurement (BAM). You represent an angle as a fixed-point binary fraction of a circle. You use every bit of your number representation, and BAM automatically calculates angles modulo 360 degrees thanks to the modulo-2^N nature of binary arithmetic.
BAM angles have the nifty property that unsigned and two's complement arithmetic are equivalent. You can think of angles as going from 0 to just less than 360 degrees using unsigned arithmetic, or from -180 to just less than +180 degrees using two's complement arithmetic.
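Both properties fall out of ordinary modulo-2^N integer arithmetic, as this small sketch shows (the 16-bit angle word is my assumption for illustration):

```python
# BAM with a hypothetical 16-bit angle word: the full circle is 2**16 counts.
BITS = 16
MOD = 1 << BITS

def bam(degrees):
    """Encode an angle as a binary fraction of a circle."""
    return round(degrees / 360.0 * MOD) % MOD

def bam_to_degrees(a):
    return a / MOD * 360.0

# Wraparound is free: 350 deg + 20 deg lands on 10 deg via modulo-2**N addition.
total = (bam(350) + bam(20)) % MOD
```

Note that `bam(-10)` and `bam(350)` produce the same bit pattern, which is the unsigned/two's-complement equivalence in action.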
JMO/YMMV