The task is to compute the sine of a number between 0 and 360 degrees in an FPGA, and then present the result to the outside world in a manner that is useful.
It might seem that I am a habitual liar. Twice now, I've told you I was done with my ramblings about math-related things. Yet here I am, getting ready to ramble on about math-related things. This is surely the last time, I promise!
So, I had a discussion with Max Maxfield some time ago, and it went along the lines of: "I'm doing this silly project..." And Max said something to the effect of: "Wow, that is silly! But it would be great fodder for a blog wherein you ramble aimlessly about engineering decisions on how and why you did things the way you did!"
Well... here we are. I've mentioned this project in my past blogs on math, so here's what I was doing. The project was to compute the sine of a number between 0 and 360 degrees in my FPGA, and then present it to the outside world in a manner that was useful.
This is all well and good, but how in the heck do we compute the sine of a number using digital logic? This could lead to a great treatise on polynomial expansions of functions; but while I am greatly expanding, these days, I am not party to any treaties (that I know of). So I'll gloss over all that and just mention Mr. Taylor. I think I'll go with a Taylor expansion, rather than using a CORDIC algorithm and look-up table (although I may revisit this later, as one of my upcoming projects will likely need look-up tables). There's also a very interesting paper on transcendental functions in Verilog, but that's primarily for use in test benches. I've put it in my hip pocket for the future -- maybe it will prove useful at some later date.
Before we go any further, it is absolutely imperative to understand that all of these methods are approximations only. It's also important to understand the limitations of your "simulation," the boundary conditions, and what can go wrong if you aren't prepared for the differences from "reality." With that thought, I took a quick look at Wolfram's site, and it looks to me like a 7th-order polynomial will be the minimum that's sufficient to represent the sine function over the full range (note the output is only good between -π and +π). In fact, a 9th-order polynomial would be a bit better -- we'll see the effects of using only a 7th-order later in this blog.
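If you want to put numbers on that "a bit better" claim, here's a quick sanity check (plain Python on a PC, not the FPGA implementation) that measures the worst-case error of the truncated Taylor series over -π to +π:

```python
import math

def taylor_sin(x, order):
    """Taylor-series approximation of sin(x) about 0, truncated at the given odd order."""
    total, term = 0.0, x
    n = 1
    while n <= order:
        total += term
        # Next odd term: multiply by -x^2 / ((n+1)(n+2))
        term *= -x * x / ((n + 1) * (n + 2))
        n += 2
    return total

def max_error(order, samples=1000):
    """Worst-case |taylor_sin - sin| over [-pi, +pi], sampled uniformly."""
    return max(
        abs(taylor_sin(x, order) - math.sin(x))
        for x in (-math.pi + 2 * math.pi * i / samples for i in range(samples + 1))
    )

print(f"7th-order max error: {max_error(7):.4f}")  # roughly 0.075, worst at the +/-pi ends
print(f"9th-order max error: {max_error(9):.4f}")  # roughly 0.007
```

The error is concentrated right at the ±π endpoints, which is exactly where we'll see the 7th-order approximation misbehave later.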
So, you've forgotten what a Taylor expansion is? Take a look at the image below, and you'll see why I thought this was an excellent exercise for learning how to use the math functions in my library. Recall, though, that I wanted to avoid the use of signaling and multiple clocks so I could just focus on gluing everything together.
Unfortunately, the Taylor expansion for the sine function requires a lot of dividing, and we've already figured out that division is hard to do in an FPGA. So how do we work our way around this? Well, our sine generator could solve the function using division, but that would take multiple clock ticks and signaling to know when each division was complete. Instead, we'll pre-compute the coefficients (1/3!, 1/5!, and so on) and perform only multiplications, because my FPGA (a Xilinx Spartan-3E) can support fast multiplication using combinational logic.
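Here's a rough sketch of the "pre-computed coefficients, multiplications only" idea -- in Python rather than Verilog, and using a 16-fractional-bit fixed-point format that I've picked purely for illustration. The divisions all happen once, up front; the evaluation itself is nothing but multiplies, shifts, and subtracts, which is what lets it map onto combinational logic:

```python
import math

FRAC = 16                # 16 fractional bits (illustrative choice)
ONE = 1 << FRAC

def to_fix(v):
    """Convert a float to fixed point."""
    return int(round(v * ONE))

def mul(a, b):
    """Fixed-point multiply: integer multiply followed by a shift."""
    return (a * b) >> FRAC

# Coefficients pre-computed at "compile time" -- the only divisions anywhere.
C3 = to_fix(1.0 / math.factorial(3))
C5 = to_fix(1.0 / math.factorial(5))
C7 = to_fix(1.0 / math.factorial(7))

def fix_sin(x_fix):
    """7th-order Taylor sin() in Horner form: x*(1 - x^2*(1/3! - x^2*(1/5! - x^2/7!)))."""
    x2 = mul(x_fix, x_fix)
    acc = C5 - mul(x2, C7)
    acc = C3 - mul(x2, acc)
    acc = ONE - mul(x2, acc)
    return mul(x_fix, acc)

x = math.radians(30)                 # 30 degrees
print(fix_sin(to_fix(x)) / ONE)      # close to 0.5
```

Note that Python's integers never overflow, so this sketch glosses over the bit-width sizing you'd have to do in real hardware (x² can reach about 9.87 at the ±π ends, so the integer portion needs room for that).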
Of course, we'll pay for this in terms of resource utilization and, likely, maximum clock speed (there will be several layers of combinational logic), but then... I dunno; maybe, for some applications, it's worth it? And, of course, we could also pipeline the whole thing, but for now I think I'll stick with one giant "combinational cloud."
Now, I realize this will make a humongous (and slow) cloud, but what I want to do (for now) is to just understand the math without other distractions from registering, pipelining, etc. So, for now, please ignore the white elephant standing on the man behind the green curtain, and let's not get distracted by how silly this particular implementation might be (ZOMG, 298 levels of combinational logic!), okay?