I believe the main reason for S/M representation is that it simplifies multiplication and division. 2's complement multiplication is a pain and requires more logic (IIRC) and division is hard enough without dealing with signs. With S/M, you just do unsigned multiplication and division and then XOR the sign bits.
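To make that concrete, here's a quick C sketch of S/M multiplication. The 8-bit sign/magnitude format (bit 7 = sign, bits 6..0 = magnitude) is just made up for illustration, not any particular hardware:

```c
#include <stdint.h>

/* Illustrative sketch: multiply two 8-bit sign/magnitude values
 * (bit 7 = sign, bits 6..0 = magnitude) into a 16-bit
 * sign/magnitude product (bit 15 = sign). */
uint16_t sm_mul8(uint8_t a, uint8_t b)
{
    uint8_t  sign = (a ^ b) & 0x80;                    /* XOR the sign bits */
    uint16_t mag  = (uint16_t)(a & 0x7F) * (b & 0x7F); /* unsigned multiply */
    return ((uint16_t)sign << 8) | mag;                /* sign lands in bit 15 */
}
```

The magnitude path never has to know about signs at all, which is exactly the simplification being described.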

The IEEE 754 standard (2008) introduced decimal floating point: http://en.wikipedia.org/wiki/IEEE_754-2008 http://en.wikipedia.org/wiki/Decimal_floating_point http://www.cl.cam.ac.uk/~jrh13/papers/decimal.pdf It's really great if you need to do anything financial.
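A quick demonstration of why binary floating point is awkward for money: 0.10 has no exact binary representation, so repeated cents drift.

```c
/* Binary floating point can't represent 0.10 exactly, so summing
 * ten "dimes" doesn't give exactly 1.0 -- the classic argument
 * for decimal floating point in financial code. */
double sum_ten_dimes(void)
{
    double total = 0.0;
    for (int i = 0; i < 10; i++)
        total += 0.10;
    return total;   /* very close to, but not exactly, 1.0 */
}
```

Decimal floating point stores the significand in base 10, so amounts like 0.10 are exact and this drift disappears.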

As Tom points out, floating-point is similar to scientific notation. Indeed, one of its main uses historically has been for scientific number crunching. In science, you're always working with numbers you have measured with limited accuracy, so there's limited value in performing calculations more precisely than the figures going into them. What's the point of calculating the resistance of a resistor network to 20 digits of precision when they're at best 1% resistors?

2's complement arithmetic is good if you're always adding signed numbers, but if your logic needs to both add and subtract, then you've got to complement one of the operands anyway. I believe floating-point hardware does S/M add/sub using one's complement arithmetic -- I remember studying the IBM 360/91 paper that shows this in detail. The fun part of one's complement arithmetic is the end-around carry. To do this fast, you need carry look-ahead end-around carry logic, which turns out to be highly regular and beautiful.
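The end-around carry is easy to show in software, even though the hardware does it with look-ahead logic. A minimal sketch, assuming 16-bit one's complement operands:

```c
#include <stdint.h>

/* Sketch of 16-bit one's complement addition with end-around carry,
 * done sequentially in software rather than with the carry
 * look-ahead hardware described in the IBM 360/91 paper. */
uint16_t ones_comp_add16(uint16_t a, uint16_t b)
{
    uint32_t sum = (uint32_t)a + b;
    sum = (sum & 0xFFFF) + (sum >> 16);  /* fold the carry-out back into bit 0 */
    return (uint16_t)sum;
}
```

In one's complement, -1 is 0xFFFE, so adding -1 and +3 produces a carry out that must wrap around to give the correct +2.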

Regarding +0 and -0: I should think the floating-point compare instructions take care of this. If you try comparing floating-point numbers using fixed-point compare instructions you deserve what you get :-)
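The point is easy to verify in C: the two zeros have different bit patterns but compare equal under floating-point comparison.

```c
#include <string.h>
#include <stdint.h>

/* +0.0f and -0.0f compare equal as floats, even though their
 * 32-bit patterns differ in the sign bit. A fixed-point (bitwise)
 * compare would wrongly call them unequal. */
int zeros_compare_equal(void)
{
    float pz = 0.0f, nz = -0.0f;
    uint32_t pbits, nbits;
    memcpy(&pbits, &pz, sizeof pbits);
    memcpy(&nbits, &nz, sizeof nbits);
    return (pz == nz) && (pbits != nbits);
}
```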

I think the thing that confuses a lot of beginners with regard to floating point is why we bother to do it in the first place.

Let's start with the fact that whatever format we decide to use to represent numbers inside a computer, they all end up being stored as a sequence of 1s and 0s.

The question is: given a field of a fixed size, what sort of numbers can you store in it? Let's take the Arduino, because that's what I'm playing with at the moment. Consider a variable of type "long" -- this consumes 32 bits (4 bytes) and can store integers in the range -2,147,483,648 to 2,147,483,647. These are big numbers, and they are stored with absolute precision (so long as we only want integers), but what if we want to represent some value outside this range?
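That limit is a hard wall. A small sketch of what happens one past the top of the 32-bit signed range (the arithmetic is done in unsigned to sidestep signed-overflow undefined behavior; the reinterpretation assumes a two's complement target, which covers essentially every modern machine):

```c
#include <stdint.h>
#include <limits.h>

/* One more than INT32_MAX wraps all the way around to INT32_MIN
 * on a two's complement machine -- there's simply no room in
 * 32 bits for 2,147,483,648. */
int32_t one_past_max(void)
{
    uint32_t u = (uint32_t)INT32_MAX + 1u;  /* 0x80000000 */
    return (int32_t)u;                      /* -2,147,483,648 */
}
```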

This is where floating-point comes in. A floating-point value on an Arduino also consumes 32 bits (4 bytes), but using the format you discuss (sign, mantissa, exponent), it can represent values from -3.4028235E+38 to +3.4028235E+38. This gives a humongous dynamic range, but at the cost of precision (these values carry only 6-7 decimal digits of precision).
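That 6-7 digit limit is easy to demonstrate: at 2^24 (16,777,216), consecutive floats are 2 apart, so adding 1 does nothing.

```c
/* A float carries a 24-bit significand, so at 2^24 the next
 * representable value is 2 away -- adding 1.0f rounds straight
 * back to the original value. */
float add_one_at_2_to_24(void)
{
    float f = 16777216.0f;  /* 2^24 */
    return f + 1.0f;        /* still 16777216.0f */
}
```

That's the trade the format makes: you get the enormous range in exchange for gaps between representable values that widen as the magnitude grows.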

@Max:
Sorry for typos - answering from cell phone, as I'm travelling...
Yes, I actually do know why. Pos & neg zero are important for completeness: they allow a divide by zero to give a correct answer -- that is, pos or neg infinity.

754 is a really powerful standard, and it includes lots of really clever stuff to squeeze the most out of the format, but the result is that the floating-point library C/C++ code required to perform the math operations is pretty large.

One alternative, as you say, is to "roll your own". I've often toyed with the idea of creating my own 16-bit (Half Precision) library (providing reasonable dynamic range with limited precision, which would be applicable to certain applications) with stripped-down functionality so as to reduce the memory footprint ... but there never seems to be enough time ... I wonder if any of the other readers have done this?
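In that stripped-down spirit, here's a rough sketch of what the core conversions of such a library might look like. It truncates the mantissa and ignores subnormals, NaN payloads, and rounding -- exactly the corners one might cut to shrink the footprint -- so it's an illustration of the idea, not production code:

```c
#include <stdint.h>
#include <string.h>

/* Stripped-down float -> IEEE half (binary16) conversion sketch:
 * no rounding, no subnormals, no NaN handling. */
uint16_t float_to_half(float f)
{
    uint32_t x;
    memcpy(&x, &f, sizeof x);
    uint16_t sign = (x >> 16) & 0x8000;
    int32_t  exp  = (int32_t)((x >> 23) & 0xFF) - 127 + 15; /* rebias 8 -> 5 bits */
    uint16_t mant = (x >> 13) & 0x03FF;                     /* keep top 10 bits   */
    if (exp <= 0)  return sign;            /* underflow: flush to signed zero */
    if (exp >= 31) return sign | 0x7C00;   /* overflow: signed infinity       */
    return sign | ((uint16_t)exp << 10) | mant;
}

/* The matching half -> float expansion (again, normals only). */
float half_to_float(uint16_t h)
{
    uint32_t sign = (uint32_t)(h & 0x8000) << 16;
    int32_t  exp  = (h >> 10) & 0x1F;
    uint32_t mant = (uint32_t)(h & 0x03FF) << 13;
    float f;
    if (exp == 0) {                        /* signed zero */
        memcpy(&f, &sign, sizeof f);
        return f;
    }
    uint32_t x = sign | ((uint32_t)(exp - 15 + 127) << 23) | mant;
    memcpy(&f, &x, sizeof f);
    return f;
}
```

Half precision gives roughly 3 decimal digits and a range of about ±65,504 -- plenty for, say, sensor readings, at half the storage of a float.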
