Doing Math in FPGAs, Part 3 (Floating-Point)

tom-ii (Blogger)   1/8/2014 8:43:43 AM
Re: Erroneous example
@azassoko: "Sorry to disappoint you, but 101.011 = 5.375, not 5.15."

D'oh!  Would you believe I'm no good at math?

"Sic transit gloria mundi"

And if this is how the world progresses, then we're in trouble...  Oh, wait...

tom-ii (Blogger)   1/8/2014 8:37:33 AM
Re: Signed magnitude
@Betajet:

"I believe the main reason for S/M representation is that it simplifies multiplication and division.  2's complement multiplication is a pain and requires more logic (IIRC) and division is hard enough without dealing with signs.  With S/M, you just do unsigned multiplication and division and then XOR the sign bits."

Shhh!  Spoilers!

azassoko (Rookie)   1/8/2014 8:11:23 AM
Erroneous example
Tom,

Sorry to disappoint you, but 101.011 = 5.375, not 5.15. 
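
For anyone checking the place values: 101.011 = 1x4 + 0x2 + 1x1 + 0x0.5 + 1x0.25 + 1x0.125 = 4 + 1 + 0.25 + 0.125 = 5.375.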

Sic transit gloria mundi...

AZ

cpetras (Rookie)   1/7/2014 5:04:49 PM
Decimal floating point...
The IEEE 754-2008 revision of the standard introduced decimal floating point, which is really great if you need to do anything financial:

http://en.wikipedia.org/wiki/IEEE_754-2008
http://en.wikipedia.org/wiki/Decimal_floating_point
http://www.cl.cam.ac.uk/~jrh13/papers/decimal.pdf
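
A quick C sketch of why ordinary binary floats are awkward for money (this is plain binary double arithmetic, not the decimal formats the 2008 revision added):

    #include <stdio.h>

    int main(void)
    {
        double a = 0.10, b = 0.20;      /* neither value is exactly representable in binary */
        printf("%.17g\n", a + b);       /* prints 0.30000000000000004 on IEEE 754 doubles */
        printf("%d\n", a + b == 0.30);  /* prints 0 -- the "obvious" comparison fails */
        return 0;
    }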

betajet (CEO)   1/7/2014 4:57:00 PM
Re: When you have limited bits to play with...
As Tom points out, floating-point is similar to scientific notation.  Indeed, one of its main uses historically has been for scientific number crunching.  In science, you're always working with numbers you have measured with limited accuracy, so there's limited value in performing calculations more precisely than the figures going into them.  What's the point of calculating the resistance of a resistor network to 20 digits of precision when they're at best 1% resistors?

Max The Magnificent (Blogger)   1/7/2014 4:52:30 PM
Re: Signed magnitude
@betajet: "If you try comparing floating-point numbers using fixed-point compare instructions you deserve what you get :-)"

LOL

betajet (CEO)   1/7/2014 4:47:25 PM
Re: Signed magnitude
I believe the main reason for S/M representation is that it simplifies multiplication and division.  2's complement multiplication is a pain and requires more logic (IIRC) and division is hard enough without dealing with signs.  With S/M, you just do unsigned multiplication and division and then XOR the sign bits.
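
In C, the trick looks something like this (just a sketch, assuming a 16-bit sign/magnitude format with the sign in bit 15 and a 15-bit magnitude below it):

    #include <stdint.h>

    /* Sign/magnitude multiply: bit 15 is the sign, bits 14..0 the magnitude.
       The magnitudes are multiplied as plain unsigned integers, and the
       product's sign is simply the XOR of the two sign bits. */
    uint32_t sm_mul(uint16_t a, uint16_t b)
    {
        uint32_t sign = (uint32_t)((a ^ b) & 0x8000u) << 16;  /* XOR of signs, moved to bit 31 */
        uint32_t mag  = (uint32_t)(a & 0x7FFFu) * (uint32_t)(b & 0x7FFFu);  /* at most 30 bits */
        return sign | mag;
    }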

2's complement arithmetic is good if you're always adding signed numbers, but if your logic needs to both add and subtract, then you've got to complement one of the operands anyway.  I believe floating-point hardware does S/M add/sub using one's complement arithmetic -- I remember studying the IBM 360/91 paper that shows this in detail.  The fun part of one's complement arithmetic is the end-around carry.  To do this fast, you need carry look-ahead end-around carry logic, which turns out to be highly regular and beautiful.
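
In software the end-around carry is a one-liner (the hard part in hardware is making it fast); a 16-bit one's complement add might look like:

    #include <stdint.h>

    /* 16-bit one's complement addition: any carry out of bit 15 is
       wrapped around and added back in -- the "end-around carry".
       (This is the same folding trick the Internet checksum uses.) */
    uint16_t oc_add(uint16_t a, uint16_t b)
    {
        uint32_t sum = (uint32_t)a + (uint32_t)b;
        return (uint16_t)((sum & 0xFFFFu) + (sum >> 16));
    }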

Regarding +0 and -0: I should think the floating-point compare instructions take care of this.  If you try comparing floating-point numbers using fixed-point compare instructions you deserve what you get :-)
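
To make that concrete, a little C demo (assuming IEEE 754 floats): the floating compare says the two zeros are equal even though their bit patterns differ, which is exactly what an integer compare of the raw bits would get wrong:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        float pz = 0.0f, nz = -0.0f;
        uint32_t pbits, nbits;
        memcpy(&pbits, &pz, sizeof pbits);
        memcpy(&nbits, &nz, sizeof nbits);
        printf("float compare: %d\n", pz == nz);  /* prints 1 -- they are equal */
        printf("raw bits: %08X vs %08X\n", (unsigned)pbits, (unsigned)nbits);  /* 00000000 vs 80000000 */
        return 0;
    }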

Max The Magnificent (Blogger)   1/7/2014 4:41:04 PM
When you have limited bits to play with...
I think the thing that confuses a lot of beginners with regard to floating point is why we bother to do it in the first place.

Let's start with the fact that whatever format we decide to use to represent numbers inside a computer, they all end up being stored as a sequence of 1s and 0s.

The thing is, given a field of a fixed size, what sort of numbers can you store in it? Let's take the Arduino, because that's what I'm playing with at the moment. Consider a variable of type "long" -- this consumes 32 bits (4 bytes) and can be used to store integers in the range -2,147,483,648 to 2,147,483,647. These are big numbers, and they are stored with absolute precision (so long as we only want integers), but what if we want to represent some value outside this range?

This is where floating-point comes in. A floating-point value in an Arduino also consumes 32 bits (4 bytes), but using the format you discuss (sign, mantissa, exponent), it can be used to represent values as large as 3.4028235E+38 and as low as -3.4028235E+38. This gives a humongous dynamic range, but at the cost of precision (these values have only 6-7 decimal digits of precision).
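
That precision limit is easy to demonstrate: a 32-bit float has a 24-bit significand, so the first whole number it cannot hold exactly is 2^24 + 1:

    #include <stdio.h>

    int main(void)
    {
        long  exact  = 16777217;     /* 2^24 + 1 -- fits comfortably in a 32-bit long */
        float approx = 16777217.0f;  /* ...but not in a 32-bit float */
        printf("%ld\n", exact);      /* prints 16777217 */
        printf("%.1f\n", approx);    /* prints 16777216.0 -- rounded to the nearest representable value */
        return 0;
    }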


tom-ii (Blogger)   1/7/2014 4:33:16 PM
Re: Signed magnitude
@Max: Sorry for the typos - answering from my cell phone, as I'm travelling... Yes, I actually do know why. Positive and negative zero are important for completeness: they allow a divide by zero to give a correct answer, that is, positive or negative infinity.
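
In C (assuming IEEE 754 floats with the usual default of non-trapping division by zero), it works out like this:

    #include <stdio.h>

    int main(void)
    {
        float pz = 0.0f, nz = -0.0f;
        printf("%f\n", 1.0f / pz);  /* prints inf  */
        printf("%f\n", 1.0f / nz);  /* prints -inf */
        return 0;
    }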

Max The Magnificent (Blogger)   1/7/2014 4:30:56 PM
754 is great, but...
754 is a really powerful standard, and it includes lots of clever stuff to squeeze the most out of the format, but the result is that the C/C++ floating-point library code required to perform the math operations is pretty large.

One alternative, as you say, is to "roll your own". I've often toyed with the idea of creating my own 16-bit (Half Precision) library (providing reasonable dynamic range with limited precision, which would be applicable to certain applications) with stripped-down functionality so as to reduce the memory footprint ... but there never seems to be enough time ... I wonder if any of the other readers have done this?
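
For what it's worth, the heart of such a library is just field packing; a rough float-to-half conversion sketch (truncating rather than rounding, and punting on NaN, infinity, and subnormal outputs) might look like:

    #include <stdint.h>
    #include <string.h>

    /* Convert IEEE 754 single (1 sign / 8 exponent / 23 mantissa bits)
       to half (1 / 5 / 10). Simplified: truncates the mantissa and does
       not handle NaN inputs, overflow rounding, or subnormal results. */
    uint16_t float_to_half(float f)
    {
        uint32_t x;
        memcpy(&x, &f, sizeof x);                                 /* grab the raw bits */
        uint16_t sign = (uint16_t)((x >> 16) & 0x8000u);          /* sign moves to bit 15 */
        int32_t  exp  = (int32_t)((x >> 23) & 0xFFu) - 127 + 15;  /* re-bias 127 -> 15 */
        uint16_t mant = (uint16_t)((x >> 13) & 0x03FFu);          /* keep the top 10 mantissa bits */
        if (exp <= 0)  return sign;            /* too small for a normal half: flush to zero */
        if (exp >= 31) return sign | 0x7C00u;  /* too large: clamp to infinity */
        return sign | (uint16_t)(exp << 10) | mant;
    }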
