
Re: Erroneous example
tomii
1/8/2014 8:43:43 AM
@azassoko:
Sorry to disappoint you, but 101.011 = 5.375, not 5.15.
D'oh! Would you believe I'm no good at math?
Sic transit gloria mundi
And if this is how the world progresses, then we're in trouble... Oh, wait...
Re: Signed magnitude
tomii
1/8/2014 8:37:33 AM
@Betajet:
I believe the main reason for S/M representation is that it simplifies multiplication and division. 2's complement multiplication is a pain and requires more logic (IIRC) and division is hard enough without dealing with signs. With S/M, you just do unsigned multiplication and division and then XOR the sign bits.
Shhh! Spoilers!
Erroneous example
azassoko
1/8/2014 8:11:23 AM
Tom,
Sorry to disappoint you, but 101.011 = 5.375, not 5.15.
Sic transit gloria mundi...
AZ
Decimal floating point...
cpetras
1/7/2014 5:04:49 PM
The IEEE 754 standard (2008) introduced decimal floating point. http://en.wikipedia.org/wiki/IEEE_754-2008 http://en.wikipedia.org/wiki/Decimal_floating_point http://www.cl.cam.ac.uk/~jrh13/papers/decimal.pdf Which is really great if you need to do anything financial.
Re: When you have limited bits to play with...
betajet
1/7/2014 4:57:00 PM
As Tom points out, floating-point is similar to scientific notation. Indeed, one of its main uses historically has been for scientific number crunching. In science, you're always working with numbers you have measured with limited accuracy, so there's limited value in performing calculations more precisely than the figures going into them. What's the point of calculating the resistance of a resistor network to 20 digits of precision when they're at best 1% resistors?
@betajet: If you try comparing floating-point numbers using fixed-point compare instructions you deserve what you get :)
LOL
Re: Signed magnitude
betajet
1/7/2014 4:47:25 PM
I believe the main reason for S/M representation is that it simplifies multiplication and division. 2's complement multiplication is a pain and requires more logic (IIRC) and division is hard enough without dealing with signs. With S/M, you just do unsigned multiplication and division and then XOR the sign bits.
2's complement arithmetic is good if you're always adding signed numbers, but if your logic needs to both add and subtract, then you've got to complement one of the operands anyway. I believe floating-point hardware does S/M add/sub using one's complement arithmetic; I remember studying the IBM 360/91 paper that shows this in detail. The fun part of one's complement arithmetic is the end-around carry. To do this fast, you need carry-lookahead end-around-carry logic, which turns out to be highly regular and beautiful.
Regarding +0 and -0: I should think the floating-point compare instructions take care of this. If you try comparing floating-point numbers using fixed-point compare instructions you deserve what you get :)
I think the thing that confuses a lot of beginners with regard to floating point is why we bother to do it in the first place.
Let's start with the fact that whatever format we decide to use to represent numbers inside a computer, they all end up being stored as a sequence of 1s and 0s.
The thing is that, given a field of a fixed size, what sort of numbers can you store in it? Let's take the Arduino, because that's what I'm playing with at the moment. Consider a variable of type "long": this consumes 32 bits (4 bytes) and can be used to store integers in the range -2,147,483,648 to 2,147,483,647. These are big numbers and they are stored with absolute precision (so long as we only want integers), but what if we want to represent some value outside this range?
This is where floating-point comes in. A floating-point value in an Arduino also consumes 32 bits (4 bytes), but using the format you discuss (sign, mantissa, exponent), it can be used to represent values as large as 3.4028235E+38 and as small as -3.4028235E+38. This gives a humongous dynamic range, but at the loss of precision (these values have only 6-7 decimal digits of precision).
Re: Signed magnitude
tomii
1/7/2014 4:33:16 PM
@Max:
Sorry for typos; answering from cell phone, as I'm travelling...
Yes, I actually do know why. Pos & neg zero are important for completeness. This allows a divide by zero to give a correct answer; that is, pos or neg infinity.
754 is a really powerful standard, and it includes lots of really clever stuff to squeeze the most out of the format, but the result is that the floating-point library C/C++ code required to perform the math operations is pretty large.
One alternative, as you say, is to "roll your own". I've often toyed with the idea of creating my own 16-bit (Half Precision) library (providing reasonable dynamic range with limited precision, which would be applicable to certain applications) with stripped-down functionality so as to reduce the memory footprint ... but there never seems to be enough time ... I wonder if any of the other readers have done this?