
Sounds a bit like Goldschmidt division: converting the factor 1/10 into X/(2^n). Multiply by X, shift N bits, done. Taking the upper 16 bits of a 32-bit word is equivalent to a 16-bit right shift.
I suspect 1/10 doesn't translate perfectly to binary, so, just like 1/3 becomes 0.33..., you get a hex factor of 0x1999... in your division.
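The repeating pattern the poster suspects is real: 1/10 has an infinitely repeating binary expansion, so its scaled hex form repeats the digits 9...A (the last digit differs because of rounding). A quick Python sketch, using integer arithmetic so even the 64-bit case stays exact:

```python
# Scale 1/10 by a power of two and round to the nearest integer.
# Integer arithmetic keeps the 64-bit case exact (floats would not).
for bits in (16, 32, 64):
    scaled = ((1 << bits) + 5) // 10   # round(2**bits / 10)
    print(f"round(2**{bits} / 10) = {scaled:#x}")
    # 16 bits -> 0x199a, 32 bits -> 0x1999999a, 64 bits -> 0x199999999999999a
```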
Re: Goldschmidt?
AZskibum
12/7/2013 11:52:18 AM
These are clever tricks, but why are you stuck on dealing with base 10, when ultimately you're implementing all the operations with shifts, adds & subtracts in base 2?
Re: Goldschmidt?
tomii
12/7/2013 5:43:27 PM
@AZskibum: These are clever tricks, but why are you stuck on dealing with base 10, when ultimately you're implementing all the operations with shifts, adds & subtracts in base 2?
I didn't intend to imply that I was stuck with base 10. Base 10 is just natural for us humans, and there are plenty of applications that use it a lot, especially when the end result is decimal math. In this specific application, I am sending the computation engine decimal numbers, and expecting them in return. It was worth my while to run down the rabbit hole to see if there was an immediately easy way to get it done this way.
math links
Brian_D
12/6/2013 7:56:58 PM
Re: math links
krisi
12/7/2013 10:01:00 AM
Pretty cool math...I wonder whether they teach something like that in vlsi classes...Kris
@Kris: Pretty cool math...I wonder whether they teach something like that in vlsi classes...
I'd be interested to know that myself, but I fear not. When I started out, everyone knew and "swapped" tips and tricks like this; it was key to making programs run as fast as possible when you were limited in terms of memory size, clock frequency, and the fact that CPUs took multiple cycles to do anything.
Magazines like BYTE were always publishing stuff like this... I love this stuff
Re: math links
DrFPGA
12/9/2013 11:02:27 AM
Remember when we had to do many of the FPGA tricks by hand? Register retiming, one-hot state machines, if-then-else to mux conversions? Now the synthesis tools do all this for us. Hopefully tricks like the multiply by a constant will get folded in too. It just depends on how common the need is for these tricks, I guess.
Maybe there should be a 'list' of tricks that users suggest would be good to put into synthesis tools. Perhaps via a discussion thread on Programmable Logic Design Line?
Know the limitations of your tools
betajet
12/9/2013 2:20:02 PM
I remember the first time I synthesized an FPGA design for a real product. The synthesizer automatically came up with a one-hot state machine encoding. Everything simulated fine. Unfortunately, this particular design did not have a well-formed clock when connecting or disconnecting the input signal, which would cause it to lose the one-hot code occasionally when reconnecting. Switching to a manual state encoding that automatically recovered from bad states fixed the problem, and saved logic cells in this case.
Lesson 1: Don't trust automatic state assignment. Some synthesizers make it hard to turn off automatic state assignment, but it's worth the effort.
Lesson 2: Don't trust simulation. It only simulates theoretical models, not the real world.
Lessons 1+2 combined and generalized: Know the limitations of your tools.
Re: Know the limitations of your tools
DrFPGA
12/9/2013 2:37:47 PM
Very important point, and one difficult to act on in practice. How many tools actually document (in detail) what the tool will or will not do? I find I need to do small example designs and look over the output of the place and route tool to figure out what is going on. Wish the vendors did this for me!
Anyone else have tools with detailed enough docs that they can tell ahead of time what is being done?
Re: math links
tomii
12/7/2013 5:49:11 PM
Brian,
Interesting stuff. To be honest, though, I'm a complete n00b, and am teaching myself Verilog at this point.
@Brian: There's some nice material about constant division on the companion website for "Hacker's Delight"
I was going to mention that book, Hacker's Delight. It's a great book, with lots of useful tricks in it. I just now found there's a second edition (the link above); I have the first. I'll have to add this second edition to my wish list :)
How computers used to do binary multiply and divide
KarlS01
12/7/2013 11:11:18 AM
Hi, Tom: As you said, multiply is a series of additions, but the count is not dependent on the magnitude of the multiplier, only on the number of 1 bits after making both operands positive by complementing if negative.
It is a shift-and-add sequence starting with the low-order bit of the multiplier: if it is a 1, add the multiplicand to the high-order half of the double-wide product and shift right 1 into the low order; if the multiplier bit is zero, just shift the product right one.
Repeat until the remaining multiplier bits are all zero or the shift count equals the multiplier width. If the high bits are all 0s, then just shift for the remaining word width.
Division was trial subtraction: subtract the divisor from the dividend; if the result was positive, shift 1 into the quotient high bit, else shift 0. The remainder is left in the register that held the high-order dividend, and the quotient in the low order.
If the signs of the dividend and divisor were different, complement at the end.
This is the best I remember; maybe a few details are missing. The shifts amounted to *2 and /2, and the add/subtract would wind up in the appropriate power-of-two positions.
I think the constant used in the compiler is 1/10, so they are multiplying by the reciprocal of 10 to effectively divide by 10.
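The shift-and-add multiply and trial-subtraction (restoring) divide described above can be modeled in a few lines. This is a hypothetical Python sketch of the hardware loops, unsigned operands only for brevity (the sign handling the poster mentions is omitted):

```python
def shift_add_multiply(multiplicand, multiplier, width=16):
    """Shift-and-add multiply: examine multiplier bits from the low end;
    each 1 bit adds a shifted copy of the multiplicand into a
    double-wide product."""
    product = 0  # double-wide accumulator
    for i in range(width):
        if (multiplier >> i) & 1:
            product += multiplicand << i
    return product

def restoring_divide(dividend, divisor, width=16):
    """Trial-subtraction (restoring) division: bring dividend bits down
    one at a time; a successful subtraction shifts a 1 into the quotient,
    otherwise the old remainder is restored and a 0 is shifted in."""
    remainder = 0
    quotient = 0
    for i in range(width - 1, -1, -1):
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        trial = remainder - divisor
        if trial >= 0:            # trial subtraction succeeded
            remainder = trial
            quotient = (quotient << 1) | 1
        else:                     # restore: keep the old remainder
            quotient = quotient << 1
    return quotient, remainder
```

For example, `shift_add_multiply(1234, 56)` gives 69104, and `restoring_divide(1234, 10)` gives (123, 4): quotient and remainder together, just as the hardware left them in the register pair.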
Fractional arithmetic
dtejada201
12/7/2013 1:13:51 PM
The way I look at this is as a modified form of Q15 arithmetic. For starters, I can represent a fractional number as a 16-bit fixed-point signed integer by the following relationship:
32768 <-> .5
Thus .5 is 0x7FFF (almost). To get .1, I divide by 5, and .1 is 0x199A. This is where the 0x199A factor comes from. When I multiply two of these numbers, I get a 32-bit Q31 result. This means .25 * .1 works out as follows:
.25 => 0x4000
.1 => 0x199A
0x4000 * 0x199A => 0x06668000
0x06668000 >> 16 is 0x0666, which corresponds to .025
Hope this helps
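The worked example above can be reproduced with a small helper. A minimal sketch, assuming unsigned fractions scaled by 2^16 as in the post (the function names `to_fixed` and `fixed_mul` are mine, not from the thread):

```python
SCALE_BITS = 16  # fractions scaled by 2**16, so 0x8000 <-> 0.5

def to_fixed(x):
    """Convert a fraction in [0, 1) to a 16-bit fixed-point integer."""
    return round(x * (1 << SCALE_BITS)) & 0xFFFF

def fixed_mul(a, b):
    """Multiply two fixed-point fractions: form the 32-bit product,
    then keep the top 16 bits (>> 16), as in the post."""
    return (a * b) >> SCALE_BITS

a = to_fixed(0.25)   # 0x4000
b = to_fixed(0.1)    # 0x199A
p = fixed_mul(a, b)  # 0x0666, i.e. roughly 0.025
print(hex(a), hex(b), hex(p), p / (1 << SCALE_BITS))
```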
Re: Fractional arithmetic
tomii
12/7/2013 5:45:42 PM
I figured as much, but just hadn't actually sat down and worked it out. Without knowing for a fact, I didn't want to spout complete nonsense.
...
I spout enough nonsense as it is...
Re: Fractional arithmetic
sa_penguin
12/7/2013 7:50:19 PM
I broke out Excel and did some quick fractions.
For a 12-bit number: 1/10 = 409.6/4096, and 410 = 19AH
16 bits: 1/10 = [6,553.6] / [65,536], and 6,554 = 199AH
24 bits: 1/10 = [1,677,721.6] / [16,777,216], and 1,677,722 = 19 999AH
Yes, I have a habit of putting commas in decimal numbers and spaces in hex numbers. I find it makes them easier to distinguish.
Fortran is lying to you
betajet
12/7/2013 2:49:25 PM
Floating-point numbers are not real numbers. Real numbers obey the associative law of addition; floating-point numbers do not. Try adding 1 to an accumulator 10^9 times with six-digit floating point. Once the accumulator has reached 10^6, adding more ones doesn't change the accumulator, so the sum of 10^9 ones is 10^6 instead of 10^9. If you add the 1's in groups of 10, and then add those sums in groups of 10, and so on, you'll get the correct value. However, since the result depends on the order of addition, floating-point numbers violate the associative law. Don't expect floating-point to behave like real numbers without considering these effects.
Non-negative integers, OTOH, do behave mathematically like modulo-2^n numbers, so you do get the correct result modulo 2^n.
I agree with the above poster regarding using a decimal radix. Why not use base 2 like IEEE floating point or base 16 like IBM/360?
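The absorption effect described above is easy to demonstrate in IEEE double precision, where integers are exact only up to 2^53; a minimal sketch:

```python
# Past 2**53, adding 1.0 no longer changes a double-precision accumulator.
big = 2.0 ** 53
print(big + 1.0 == big)        # True: the added 1 is absorbed

# Order of addition matters: grouping the small terms first preserves them.
naive = (big + 1.0) + 1.0      # both 1s are lost, one at a time
grouped = big + (1.0 + 1.0)    # the 1s combine to 2.0, which survives
print(naive == grouped)        # False: addition is not associative
```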
Re: Fortran is lying to you
AZskibum
12/7/2013 4:25:49 PM
Not only do floating-point numbers not obey associativity, they also lack precision. Sure, if 24 bits of precision doesn't meet your needs, you can go to double precision, but both formats are wasteful when you're doing arithmetic in hardware.
Fractional fixed point (often referred to as "Q" format) is efficient: you choose exactly the precision you need, no less and no more, and the bookkeeping exercise of keeping track of the radix point is not a big deal.
Re: Fortran is lying to you
tomii
12/7/2013 5:48:22 PM
@AZskibum:
Fractional fixed point (often referred to as "Q" format) is efficient: you choose exactly the precision you need, no less and no more, and the bookkeeping exercise of keeping track of the radix point is not a big deal.
This is actually what I used in my final implementation. We'll get to that in a few weeks, I suppose... (Max willing)
Re: How FPGA do math
tomii
12/8/2013 10:31:35 AM
@Adam:
Of course I don't mind!
The divider works because 0x199A, for a 16-bit number, represents the reciprocal of 10, which is 1/10 or 0.1.
This is then scaled: integer(2^16 * 0.1) = 6554, or 0x199A. Doing the multiply then works out the same as doing the divide. This trick can be used for any number when the divisor is fixed.
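One caveat worth noting: (x * 0x199A) >> 16 only approximates x/10, and the accumulated rounding error first produces a wrong quotient around x = 16389, so the 16-bit constant is safe only over a limited input range. A wider constant such as 0xCCCD with a 19-bit shift is exact for all 16-bit unsigned inputs (a Hacker's Delight-style result; the function names below are mine):

```python
def div10_approx(x):
    """Divide by 10 via the 16-bit reciprocal constant from the post.
    Exact only for small x; first mismatch is at x = 16389."""
    return (x * 0x199A) >> 16

def div10_exact(x):
    """Wider reciprocal constant (0xCCCD / 2**19); exact for all
    16-bit unsigned x."""
    return (x * 0xCCCD) >> 19

print(div10_approx(1234))   # 123
# Exhaustive check of the wider constant over the whole 16-bit range:
assert all(div10_exact(x) == x // 10 for x in range(1 << 16))
```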
Re: How FPGA do math
DrFPGA
12/8/2013 11:49:13 PM
Adam
Thanx for posting the link to your article. Very detailed and even had some code!
You are welcome. I tried to make it as simple as possible, to explain within the 2,000-word limit how to do maths in FPGAs.
The more complex things like filters and CORDICs etc. are covered in other articles.
I find a simple example often helps to demonstrate the principles
it's fixed point math
paragdighe
12/10/2013 4:45:31 AM
Here's why it works: x/10 = x * 0.1 = {2^16 * x * 0.1}/2^16 = {x * (2^16 * 0.1)}/2^16 ≈ (x * 199AH)/2^16, since 2^16 * 0.1 = 6553.6 rounds to 6554 = 199AH.
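Note that the choice of constant matters: 2^16 * 0.1 = 6553.6, which truncates to 1999H but rounds to 199AH, and truncation fails immediately, as a quick sketch shows:

```python
# Truncated vs. rounded reciprocal constant for dividing by 10.
TRUNC = 0x1999   # floor(2**16 / 10) = 6553
ROUND = 0x199A   # round(2**16 / 10) = 6554

x = 10
print((x * TRUNC) >> 16)   # 0 -- the truncated constant underestimates
print((x * ROUND) >> 16)   # 1 -- the rounded constant gets 10/10 right
```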