# Doing Math in FPGAs, Part 2 (BCD)

Why use BCD?
12/24/2013 11:24:58 AM
Hi Tom -- you've touched on a topic that's close to my heart. As you mention, there are some numbers that are easy to represent in decimal (like 0.1) that cannot be exactly represented in binary (the binary equivalent of 0.1 is an infinitely recurring pattern of 1s and 0s -- since we don't have infinitely large fields in which to store these numbers, we have to truncate, which leads to errors).

Of course, there are some numbers that are difficult to specify in decimal (base-10) -- take 1/3, for example, which equates to 0.333333... (recurring). This would be easy to specify in certain other bases. Similarly, there are some values that are easy to represent in base-2 that cannot easily be represented in base-10.

You mention one reason for using BCD is when you are accepting input from decimal-based devices or providing output to decimal-based devices. There is another big reason -- in many countries it is mandated by law that any financial calculations performed on a computer must return EXACTLY the same results as if they had been performed with pencil and paper. This is so that everyone knows exactly what will happen with regard to rounding values and so forth. The only way to achieve this is for the calculations to be performed using some sort of base-10 representation, like BCD. Check out some blogs I wrote on this topic a few years back:  Binary Coded Decimal (BCD) 101 and BCD coming to an FPGA near you
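As a quick illustration of why this matters (a Python sketch of my own, with the `decimal` module standing in for a BCD-style base-10 representation):

```
from decimal import Decimal

# Ten additions of 0.1 in binary floating point drift away from 1.0,
# because 0.1 has no finite base-2 representation.
binary_sum = sum([0.1] * 10)
print(binary_sum == 1.0)              # False on IEEE-754 doubles

# A base-10 representation reproduces pencil-and-paper arithmetic exactly.
decimal_sum = sum([Decimal("0.1")] * 10)
print(decimal_sum == Decimal("1.0"))  # True
```

The same ten additions that accumulate error in binary are exact in base-10, which is precisely what the financial-calculation mandate requires.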

Re: Why use BCD?
12/26/2013 3:18:44 PM
Nice presentation of BCD arithmetic, even if it is not often used anymore. I'm looking forward to your installments on fixed vs. floating point.

Re: Why use BCD?
12/27/2013 9:28:38 AM
Thanks, I appreciate it.

I primarily hope not to prove the old adage about keeping your mouth shut and letting people think you a fool, versus opening your mouth and proving you're one...  But then, I freely admit that I am, indeed, an idiot, so just keep that in mind, okay?

:)

What's real?
12/24/2013 11:34:04 AM
@Tom: You said "There are plenty of ways to represent 'real' numbers in a binary system."

The "real" numbers confused me for a moment, because in mathematics a real number is a value that represents a quantity along a continuous line -- including integers, fractions, and irrational numbers such as the square root of two. But I think that when you said "real" numbers you meant "numbers in the real world" ... is that right?

Re: What's real?
12/25/2013 12:14:02 PM
No, I actually meant the wealth of numbers that are on the real axis of the complex plane (ow, that makes my head hurt).  A decimal is a fraction (maybe), if you can just figure out what it is.  Now, I'm not trying to say any of those ways you can represent the real numbers is correct, because we're almost always going to lose something somewhere.

Like I keep saying, evaluate your alternatives, and choose the method(s) that best match your needs/requirements.  My way most assuredly is not the best way, and your mileage (like my own) may vary.

Re: What's real?
12/25/2013 1:39:28 PM
@Tomii: One interesting point I forgot to mention is that you can use BCD representations to represent anything you want -- so you could use BCD to represent integer, fixed-point, or floating-point values -- now my head hurts :-)

Ways of representing numbers.
12/24/2013 11:40:29 AM
@Tom: I would guess that the three most common are probably Binary Coded Decimal (BCD), Floating Point, and Fixed Point.

I guess I would have said: Integer (unsigned and signed), Fixed-point, Floating-point, and BCD. Also, integers may be considered to be a special case of fixed-point (i.e., fixed-point without the fractional part).

Traditionally, FPGA designers worked predominantly with integer or fixed-point (with some BCD for hardware acceleration targeted at financial applications / high-performance computing). More recently, FPGAs are being used to implement floating-point calculations for things like beam-forming and radar applications -- Altera has some really cool stuff in this regard that helps you map a floating-point pipeline onto their DSP slices and programmable fabric.

Nine's and ten's complements
12/24/2013 11:49:24 AM
Oooooh, I LOVE this stuff. A lot of folks never think about this. In the case of an 8-bit binary value, for example, we can consider it to represent different things, like:

UUUUUUUU -- An 8-bit unsigned value (0 to 255 in decimal)

SIIIIIII -- A sign-magnitude value (-127 to +127; includes -0 and +0)

SIIIIIII -- A two's complement value (-128 to +127; only one 0)

In the case of the sign-magnitude format, the sign bit just represents + (0) or - (1). In the case of the two's complement representation, the sign bit represents a quantity (0 = 0, 1 = -128).

The same thing applies to BCD. If we take an 8-bit field, for example, this can be used to represent:

UU -- An unsigned value from 00 to 99

SU -- A sign-magnitude value (-9 to +9; includes -0 and +0)

SU -- A ten's complement value (-50 to +49 in decimal)
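As a quick illustration (a Python sketch of my own; the function names are invented), the same 8-bit pattern yields three different values depending on which reading you apply:

```
def as_unsigned(b):
    """Read the byte as a plain 8-bit unsigned value (0 to 255)."""
    return b

def as_sign_magnitude(b):
    """Top bit is the sign; the lower 7 bits are the magnitude (-127 to +127)."""
    magnitude = b & 0x7F
    return -magnitude if b & 0x80 else magnitude

def as_twos_complement(b):
    """Top bit carries the quantity -128, giving a range of -128 to +127."""
    return b - 256 if b & 0x80 else b

b = 0b10000001  # one bit pattern, three different values
print(as_unsigned(b), as_sign_magnitude(b), as_twos_complement(b))  # 129 -1 -127
```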

Wrapping one's brain around ten's complements
12/24/2013 11:53:26 AM
A lot of folks find it hard to wrap their brains around ten's complements in the context of BCD arithmetic. I like the following graphic (well, I created it, so I would, wouldn't I?):

Actually, I wrote some articles about this a few years ago (yes, I know you find this difficult to believe :-)  Check out the following:

Mind-boggling math: BCD (binary coded decimal)

More mind-boggling math: Adding and subtracting unsigned BCD

Even more mind-boggling math: Working with signed BCD

A minus B=A+NOT(B)+1
12/24/2013 12:00:48 PM
Still on nine's and ten's complements -- as you noted, the way we perform a subtraction in binary is based on the fact that performing the operation (a - b) is the same as performing the operation (a + (-b)). In binary, the -b is the two's complement of +b, and we can easily generate this by inverting all of the bits in b (to generate the one's complement) and then adding 1. It's also easy to do this in hardware, because we already have an adder, so we just invert all of the bits in b and then set the carry-in to the adder to 1 (which equates to the "add 1").
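Here's a small Python sketch of that trick (my own illustration, not production code): invert the bits of b, supply the "+1" as the adder's carry-in, and mask to the field width:

```
def sub_via_complement(a, b, bits=8):
    """Compute a - b by adding the two's complement of b, as the hardware does."""
    mask = (1 << bits) - 1
    not_b = b ^ mask               # invert every bit: the one's complement
    return (a + not_b + 1) & mask  # the "+ 1" is the adder's carry-in

print(sub_via_complement(100, 58))  # 42
print(sub_via_complement(5, 7))     # 254, which reads as -2 in 8-bit two's complement
```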

The cool thing is that we can do much the same thing with nine's and ten's complements.
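For instance, a minimal Python sketch of a four-digit ten's complement subtraction (my own illustration) mirrors the binary version exactly -- nine's complement plus one instead of one's complement plus one:

```
def tens_complement_sub(a, b, digits=4):
    """Compute a - b via the ten's complement: nine's complement of b, plus 1."""
    nines_complement = int("9" * digits) - b   # each digit becomes 9 - digit, no borrows
    return (a + nines_complement + 1) % (10 ** digits)

print(tens_complement_sub(4337, 277))  # 4060, matching 4337 - 277
```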

For those who want to learn more, you can check out my book How Computers Do Math. Also, you might find the following two columns to be of interest:

A minus B=A+NOT(B)+1(Part 1)

A Minus B=A+NOT(B)+1 (Part 2)

Multiply is a series of additions? Divide?
12/26/2013 4:45:10 PM
Is multiply by 1000 done with 1000 additions and divide by 1000 done with 1000 subtractions?

Also, RISC processors do not have decimal/BCD instructions, so those computers need another way of doing decimal.

Here is a little thought going back to scientific notation, and a little C# evaluation run on a Pentium D.

```// this was done with 8.5, 5.25 and 8.5, 5.33 inputs
double scale = 8.5 * 5.33;           // 44.625     45.305
double scale1 = 8.5 * 5.33 * 1000;   // 44625.0    45305.0
int scale2 = 85 * 533;               // 44625      45305
int scale3 = scale2 / 1000;          // 44         45
int scale4 = scale2 % 1000;          // 625        305```

It is a matter of formatting the output to include the decimal point between the divide and modulo results when done on a PC.  An FPGA needs hardware to handle input and output formatting.
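That formatting step can be sketched like so (in Python, purely illustrative -- the x1000 scaling convention is the one from the snippet above):

```
# Keep every quantity as an integer scaled by 1000, then use divide and
# modulo only when formatting the output.
a = 8500   # 8.5  scaled by 1000
b = 5250   # 5.25 scaled by 1000
product = a * b // 1000             # rescale so the result is also x1000
whole, frac = divmod(product, 1000) # split at the implied decimal point
print(f"{whole}.{frac:03d}")        # 44.625
```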

Re: Multiply is a series of additions? Divide?
12/27/2013 9:26:43 AM
@KarlS01

Is multiply by 1000 done with 1000 additions and divide by 1000 done with 1000 subtractions?

Well, no.  I was kind of waving my hands and glossing over dirty details.  Certainly one way to do it is to do repeated additions or subtractions (you can do division via addition, as well).  When you get down into the guts of the computing machinery, though, we can go about this in different ways -- look at the way we do these operations by hand, and we can do things similarly in the computer.

In BCD it's a little harder, but we can do the following type of thing, where each digit has its own combinational cloud to do the multiplication:

From Wikipedia:
```    123
  x 456
  =====
    738    (this is 123 x 6)
   615     (this is 123 x 5, shifted one position to the left)
  492      (this is 123 x 4, shifted two positions to the left)
  =====
  56088```

So let's talk about pure binary as an example.  For multiplication, you can actually synthesize a 2N-bit combinational cloud to do your multiplications.  Lacking that, you can build a state machine that's pretty easy, as binary is a special case.  From the same Wikipedia page:
```      1011   (this is 11 in decimal)
    x 1110   (this is 14 in decimal)
    ======
      0000   (this is 1011 x 0)
     1011    (this is 1011 x 2, shifted one position to the left)
    1011     (this is 1011 x 4, shifted two positions to the left)
+  1011      (this is 1011 x 8, shifted three positions to the left)
==========
  10011010   (this is 154 in decimal)```

So, you can see we can do a multiply in (potentially) far fewer operations than just adding them all up.
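To make the shift-and-add approach concrete, here's a Python sketch (my own, not from the article) that adds one shifted copy of a per set bit of b -- at most 4 additions for the 4-bit example above, rather than 14 repeated additions:

```
def shift_add_multiply(a, b):
    """Multiply by summing shifted copies of a, one per set bit of b."""
    result = 0
    while b:
        if b & 1:      # this bit of b contributes the current shifted copy of a
            result += a
        a <<= 1        # shift the partial product one position to the left
        b >>= 1        # move on to the next bit of b
    return result

print(shift_add_multiply(0b1011, 0b1110))  # 154, i.e. 11 x 14
```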

I mentioned combinational clouds, so let's touch on that a bit.  For all possible N-bit multiplications, a truth table can be made, and a set of discrete logic implemented to perform that multiplication.  I will point you to this paper for a quick overview -- it shows how to implement a 4-bit combinational multiplier and its complexity.  Modern synthesis tools appear to be able to create a multiplier of at least 32 bits with no issue (I've not tried anything bigger), and modern FPGAs have dedicated multiplier circuits in them (or so it would seem), but I don't know how many bits wide they are.
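As a toy illustration of the truth-table idea (a 2-bit example in Python, my own sketch -- real synthesized multipliers are far wider), every input combination maps to a fixed product, which is exactly what gets flattened into gates:

```
# Enumerate the truth table a synthesizer would flatten into logic for a
# tiny 2-bit x 2-bit combinational multiplier (4 product bits).
rows = []
for a in range(4):
    for b in range(4):
        rows.append(f"{a:02b} x {b:02b} = {a * b:04b}")

print(len(rows))   # 16 rows cover every input combination
print(rows[-1])    # the last row: 11 x 11 = 1001  (3 x 3 = 9)
```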

Re: Multiply is a series of additions? Divide?
12/27/2013 11:33:22 AM
@Tomii:  Altera has 9-bit multipliers, and a 32-bit multiply takes 4 of them when synthesized.  hamster on APP also wrote about sign extension for negative numbers.

This is for 32-bit binary.  Recently they have done a lot with floating point, but I did not follow up.

I don't think decimal instructions are available in any RISC.  In fact, floating point often requires an optional coprocessor, and there are a sign, exponent, and mantissa to deal with.

The whole thing is such a mess that converting to hex and formatting the output seems easier.  Yeah, you need a computer, compiler, etc.

Re: Multiply is a series of additions? Divide?
1/21/2014 11:54:12 AM
Hi, Tom:  Things are too quiet here.  Altera has 9-bit multipliers, and it takes 4 of them to do 32 bits.

The concerns about precision seem exaggerated, even in financial calculations.  There are rounding methods such that insignificant amounts are discarded.

Given all the complexity of floating-point HW, why not convert the decimal to hex/binary, do the algorithm as if doing decimal with pencil and paper, and then output the whole-number part and fractional part converted from binary to decimal?
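A sketch of that flow in Python (the scale factor and helper name are my own, purely illustrative): parse the decimal into a scaled binary integer, compute in plain integers, and split the result back into whole and fractional parts only at output time:

```
SCALE = 100  # two decimal places of precision

def to_scaled(text):
    """Parse a decimal string like '19.95' into a scaled integer (1995)."""
    whole, _, frac = text.partition(".")
    return int(whole) * SCALE + int(frac.ljust(2, "0")[:2])

price = to_scaled("19.95")   # 1995 -- an ordinary binary integer from here on
total = price * 3            # all arithmetic stays in plain binary
print(f"{total // SCALE}.{total % SCALE:02d}")  # 59.85
```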

Too bad this site has no capability to attach anything technical.

Re: Multiply is a series of additions? Divide?
1/21/2014 12:42:41 PM
@KarlS01:

The concerns about precision seem exaggerated, even in financial calculations.  There are rounding methods such that insignificant amounts are discarded.

I've seen repeated additions and other mathematical operations rapidly create some ugly errors.  I once had a CS class that focused specifically on this, and on ways to work within the constraints of the hardware to overcome these errors.  It can take a fair amount of massaging!

Re: Multiply is a series of additions? Divide?
1/21/2014 2:35:31 PM
@tom-ii:

```double dz = 56.75 * 2.3;         // 130.5249999999998
int dx = 5675, dy = 23;
int dw = 5675 * 23 / 1000;       // 130     ... 3 decimal places
int ddy = dx * dy % 1000;        // 525     ... compiler rounded up
int ddyy = dx * dy % 1000000;    // 130525  (too many zeroes)```

Just as on paper, ignore the decimal point but count decimal places to choose the divisor; divide gets the integer quotient and modulo gets the numerator of the fraction.

10's Complement
12/27/2013 7:30:22 PM
I like the article and the subtraction algorithm.

I think there is a typo in the article on its 2nd page - instead of "Taking the nine's complement of 5,940 gives .." I think there should be "Taking the 109's complement of 5,940 gives..."

The nine's complement of 5940 is 4059; to get the ten's complement we need to add 1, which finally gives 4060. Also, the difference from the first computation, 4337 - 277, is that we need to add the +1 to the result (use the 109's complement) because the result is negative.
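In Python, a quick sanity check of that arithmetic for a four-digit field:

```
b = 5940
nines_complement = 9999 - b              # each digit becomes 9 - digit: 4059
tens_complement = nines_complement + 1   # nine's complement plus one: 4060
print(nines_complement, tens_complement)  # 4059 4060
```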

If everybody knows this, sorry to disturb :-).

Re: 10's Complement
12/27/2013 8:37:32 PM
I find the phrase "109's complement" to be disturbing :-)

But yes, you're right that the author wants to take the 10's complement of 5940 to express it as -4060.

Re: 10's Complement
1/21/2014 12:39:55 PM
@betajet:

But yes, you're right that the author wants to take the 10's complement of 5940 to express it as -4060.

Indeed, I'm sure this is what the author meant...  ;)

It could be worse
12/28/2013 3:44:39 AM
BCD uses 4 bits, and has 6 "bad" codes for 10 "good" codes: an overhead of 60%.  Have you heard of BCK?

Binary Coded Kilo: uses 10 bits, has 24 "bad" codes for 1000 "good" or 2.4% overhead. That equates to 2 bits saved in the code.

The version I saw was in a DDS [numerically controlled oscillator]. 32 bits make 4,000,000,000 counts instead of 4,294,967,296.  So a 200 MHz oscillator gave a frequency step of precisely 0.05 Hz instead of 0.046566128731 Hz [i.e., subject to oscillator drift, not rounding error].
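A quick Python check of those numbers (variable names are mine): the step size is the clock divided by the accumulator's rollover count, and the BCK-style rollover at 4e9 gives an exact decimal step:

```
f_clk = 200e6                      # 200 MHz reference clock
step_binary = f_clk / 2**32        # plain 32-bit binary accumulator
step_bck = f_clk / 4_000_000_000   # BCK-style accumulator rolling over at 4e9
print(step_binary)                 # 0.046566128730773926 -- an awkward step
print(step_bck)                    # 0.05 -- an exact decimal step
```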