Comments
Max The Magnificent
User Rank
Blogger
Signed magnitude
Max The Magnificent   1/7/2014 4:24:53 PM
Hi Tomii -- as you point out, the sign bit in IEEE 754 is just 0 = positive, 1 = negative; that is, the combination of the sign bit and the mantissa is a sign-magnitude value as opposed to being a two's complement value. One problem with this is that you can have both +0 and -0 values. Have you any idea why they chose to do things this way?

tom-ii
User Rank
Blogger
Re: Signed magnitude
tom-ii   1/7/2014 4:33:16 PM
@Max: Sorry for the typos - answering from my cell phone, as I'm travelling... Yes, I actually do know why. Positive and negative zero are important for completeness. They allow a divide by zero to give a correct answer - that is, positive or negative infinity.
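In code form (a quick C sketch; IEEE 754 defines float division by zero, so this is well behaved even though it looks alarming):

#include <stdio.h>

int main(void)
{
    double pz = 0.0, nz = -0.0;

    printf("%g %g\n", 1.0 / pz, 1.0 / nz);  /* prints: inf -inf */
    printf("%d\n", pz == nz);               /* prints: 1 -- the two zeros compare equal */
    return 0;
}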

betajet
User Rank
CEO
Re: Signed magnitude
betajet   1/7/2014 4:47:25 PM
I believe the main reason for S/M representation is that it simplifies multiplication and division.  2's complement multiplication is a pain and requires more logic (IIRC) and division is hard enough without dealing with signs.  With S/M, you just do unsigned multiplication and division and then XOR the sign bits.
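Something like this toy C sketch (the struct layout is invented for illustration; real hardware packs the sign and magnitude into one word):

#include <stdint.h>

typedef struct {
    uint8_t  sign;  /* 0 = positive, 1 = negative */
    uint32_t mag;   /* unsigned magnitude */
} sm_t;

sm_t sm_mul(sm_t a, sm_t b)
{
    sm_t r;
    r.sign = a.sign ^ b.sign;  /* negative iff the signs differ */
    r.mag  = a.mag * b.mag;    /* plain unsigned multiply (overflow ignored here) */
    return r;
}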

2's complement arithmetic is good if you're always adding signed numbers, but if your logic needs to both add and subtract, then you've got to complement one of the operands anyway.  I believe floating-point hardware does S/M add/sub using one's complement arithmetic -- I remember studying the IBM 360/91 paper that shows this in detail.  The fun part of one's complement arithmetic is the end-around carry.  To do this fast, you need carry look-ahead end-around carry logic, which turns out to be highly regular and beautiful.
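The end-around carry itself is easy to express in software, even if the fast hardware version is the interesting part -- a 16-bit sketch:

#include <stdint.h>

uint16_t ones_comp_add(uint16_t a, uint16_t b)
{
    uint32_t sum = (uint32_t)a + b;                   /* 17-bit intermediate sum */
    return (uint16_t)((sum & 0xFFFF) + (sum >> 16));  /* add the carry-out back in at the bottom */
}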

Regarding +0 and -0: I should think the floating-point compare instructions take care of this.  If you try comparing floating-point numbers using fixed-point compare instructions you deserve what you get :-)

Max The Magnificent
User Rank
Blogger
Re: Signed magnitude
Max The Magnificent   1/7/2014 4:52:30 PM
@betajet: If you try comparing floating-point numbers using fixed-point compare instructions you deserve what you get :-)

LOL

tom-ii
User Rank
Blogger
Re: Signed magnitude
tom-ii   1/8/2014 8:37:33 AM
@Betajet: I believe the main reason for S/M representation is that it simplifies multiplication and division.  2's complement multiplication is a pain and requires more logic (IIRC) and division is hard enough without dealing with signs.  With S/M, you just do unsigned multiplication and division and then XOR the sign bits.

Shhh!  Spoilers!

Max The Magnificent
User Rank
Blogger
754 is great, but...
Max The Magnificent   1/7/2014 4:30:56 PM
754 is a really powerful standard, and it includes lots of really clever stuff to squeeze the most out of the format, but the result is that the C/C++ floating-point library code required to perform the math operations is pretty large.

One alternative, as you say, is to "roll your own". I've often toyed with the idea of creating my own 16-bit (Half Precision) library (providing reasonable dynamic range with limited precision, which would be applicable to certain applications) with stripped-down functionality so as to reduce the memory footprint ... but there never seems to be enough time ... I wonder if any of the other readers have done this?
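For anyone tempted to try, a minimal sketch of the packing side in C (rounding by truncation, with denormals and NaNs ignored -- exactly the kind of stripped-down functionality I mean):

#include <stdint.h>
#include <string.h>

/* Pack a 32-bit float into IEEE 754 half precision:
   1 sign bit, 5 exponent bits (bias 15), 10 mantissa bits. */
uint16_t float_to_half(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* grab the float's raw bits */

    uint16_t sign = (uint16_t)((bits >> 16) & 0x8000);          /* bit 31 -> bit 15 */
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFF) - 127 + 15;  /* re-bias the exponent */
    uint16_t mant = (uint16_t)((bits >> 13) & 0x03FF);          /* keep the top 10 mantissa bits */

    if (exp <= 0)  return sign;           /* underflow: flush to signed zero */
    if (exp >= 31) return sign | 0x7C00;  /* overflow: signed infinity */
    return sign | (uint16_t)(exp << 10) | mant;
}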

anon5532556
User Rank
Rookie
Re: 754 is great, but...
anon5532556   1/8/2014 3:22:44 PM
Speaking of home built libs, I made some rough 16 bit floating point stuff a long time ago during the dinosaur micros. Also a couple of crude 12 and 8 bit versions. Don't laugh, it was sometimes kinda useful for working with curves like audio gains etc. And less bits made certain lookup tables possible, giving mathless math and fast calcs to a slow bit banger if you had a bodacious eprom etc.
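The lookup-table trick still works; something along these lines (the square-law audio taper is just a made-up example):

#include <math.h>
#include <stdint.h>

static uint8_t gain_table[256];  /* on a dinosaur micro this would live in EPROM */

void build_gain_table(void)      /* run once, offline or at init */
{
    for (int i = 0; i < 256; i++)
        gain_table[i] = (uint8_t)(255.0 * pow(i / 255.0, 2.0) + 0.5);
}

uint8_t apply_gain(uint8_t x)    /* per sample: no math at all, just one load */
{
    return gain_table[x];
}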

Nice write up btw :)

PS: Anybody else use the AMD 9511 or 9512?

TanjB
User Rank
Rookie
Re: 754 is great, but...
TanjB   1/11/2014 2:39:00 AM
Those were AMD bit-slice micros?  I used the Intel 300x and the AMD 291x, which were 2- and 4-bit slices.

16-bit FP is making a comeback.  You can find it supported in some current GPUs.  I believe it is used mostly to represent high-dynamic-range graphical data, but there are probably other uses.

Of course, 8-bit FP was actually hugely important.  The A-law and mu-law codecs used by all phone networks in the ISDN days, still used in some landlines and voice exchanges, were essentially FP with a sign, a 3-bit exponent, and a 4-bit fraction (with an implied leftmost 1, just like the IEEE formats).
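A rough C sketch of that 8-bit shape (the general idea only -- a real G.711 codec also biases the sample and inverts the output bits; fp8_encode is a made-up name, and the magnitude is assumed to fit in 12 bits):

#include <stdint.h>

/* Encode a sample as sign + 3-bit exponent + 4-bit mantissa,
   with the leading 1 implied (dropped), mu-law style. */
uint8_t fp8_encode(int16_t sample)
{
    uint8_t sign = 0;
    if (sample < 0) { sign = 0x80; sample = (int16_t)-sample; }

    uint8_t exp = 0;                               /* find the segment: smallest exp */
    while ((sample >> (exp + 5)) != 0 && exp < 7)  /* such that sample < 2^(exp+5)   */
        exp++;

    uint8_t mant = (uint8_t)((sample >> exp) & 0x0F);  /* the 4 bits below the leading 1 */
    return sign | (uint8_t)(exp << 4) | mant;
}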

anon5532556
User Rank
Rookie
Re: 754 is great, but...
anon5532556   1/13/2014 3:11:34 PM
Not bit slice.  I did use the AMD 2901 and 29116 for a while, though.

The 9511 was a 32 bit floating point chip with 8 bit bus. Easy to hook to a Z80 etc. It directly handled a bunch of curve type functions and the like. Mostly used it for sin/cos/tan things doing earth curvature work. The 9512 was much simpler but wider inside. Both ran hot, and cost a lot.

Oddly enough it looks like MicroMega currently sells an FPU for microcontroller projects. That has to be going away though, as the ARM 32F4 part I'm using today does floating point so fast I regularly use it in interrupt routines.

Max The Magnificent
User Rank
Blogger
When you have limited bits to play with...
Max The Magnificent   1/7/2014 4:41:04 PM
I think the thing that confuses a lot of beginners with regard to floating point is why we bother to do it in the first place.

Let's start with the fact that whatever format we decide to use to represent numbers inside a computer, they all end up being stored as a sequence of 1s and 0s.

The thing is that, given a field of a fixed size, what sort of numbers can you store in it? Let's take the Arduino, because that's what I'm playing with at the moment. Consider a variable of type "long" -- this consumes 32 bits (4 bytes) and can be used to store integers in the range -2,147,483,648 to 2,147,483,647. These are big numbers, and they are stored with absolute precision (so long as we only want integers), but what if we want to represent some value outside this range?

This is where floating-point comes in. A floating-point value on an Arduino also consumes 32 bits (4 bytes), but using the format you discuss (sign, mantissa, exponent), it can be used to represent values as large as 3.4028235E+38 and as low as -3.4028235E+38. This gives a humongous dynamic range, but at the cost of precision (these values have only 6-7 decimal digits of precision).
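A quick C illustration of the trade-off (assuming a 32-bit long and a 32-bit float, as on the Arduino):

#include <stdio.h>

int main(void)
{
    long  big = 2147483647L;     /* largest 32-bit long: stored exactly */
    float f   = 16777216.0f;     /* 2^24: the last integer a float stores exactly */

    printf("%ld\n", big);        /* prints 2147483647 -- every digit correct */
    printf("%.0f\n", f + 1.0f);  /* prints 16777216 -- the +1 falls off the 24-bit mantissa */
    return 0;
}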


betajet
User Rank
CEO
Re: When you have limited bits to play with...
betajet   1/7/2014 4:57:00 PM
As Tom points out, floating-point is similar to scientific notation.  Indeed, one of its main uses historically has been for scientific number crunching.  In science, you're always working with numbers you have measured with limited accuracy, so there's limited value in performing calculations more precisely than the figures going into them.  What's the point of calculating the resistance of a resistor network to 20 digits of precision when they're at best 1% resistors?

cpetras
User Rank
Rookie
Decimal floating point...
cpetras   1/7/2014 5:04:49 PM
The IEEE 754 standard (2008) has introduced decimal floating point.  See http://en.wikipedia.org/wiki/IEEE_754-2008, http://en.wikipedia.org/wiki/Decimal_floating_point, and http://www.cl.cam.ac.uk/~jrh13/papers/decimal.pdf.  This is really great if you need to do anything financial.
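The financial motivation in a few lines of C: binary floating point cannot represent 0.1 exactly, so the cents drift:

#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum += 0.1;              /* ten dimes "should" make a dollar */

    printf("%.17f\n", sum);      /* prints 0.99999999999999989 */
    printf("%d\n", sum == 1.0);  /* prints 0 -- not a dollar */
    return 0;
}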

Max The Magnificent
User Rank
Blogger
Re: Decimal floating point...
Max The Magnificent   1/8/2014 9:52:35 AM
@cpetras: The IEEE 754 standard (2008) has introduced decimal floating point.

Didn't it actually introduce multi-radix floating-point, of which decimal is one incarnation, or was decimal singled out?

TanjB
User Rank
Rookie
Re: Decimal floating point...
TanjB   1/8/2014 11:51:31 AM
Yep.  And in practice decimal FP is not ideal for financial calculations.

FP calculations (in any radix) are common in engineering, science, and anything approximate.  Even in finance they are perfectly fine to use in situations like estimating future or present value, or allocating budgets.

When it comes to accounting for the cents, however, fixed point is more likely what you want.  Most of those operations are multiplies, adds and subtracts, which are exact in fixed point, with the occasional fraction like taxes which have rounding rules built in.
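Something like this toy C sketch (the 8.25% rate and the round-half-up rule are just examples of the built-in rounding rules mentioned above):

#include <stdint.h>

typedef int64_t cents_t;  /* money as an integer count of cents: add/subtract/multiply are exact */

/* Apply a tax given in basis points (825 = 8.25%), rounding half up. */
cents_t add_tax(cents_t amount, int32_t tax_bp)
{
    int64_t tax = (amount * tax_bp + 5000) / 10000;
    return amount + tax;
}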

And as for the need for precision in a world where resistors might be accurate only to a percent, it is amazing how easy it is to get yourself into trouble with the math once you start doing simulations and (much, much trickier) optimizations.  Simple components like transformers are nearly singularities.  Numerical optimization packages are black arts mostly because of the clever tweaks needed to efficiently detect and work around problems with the limited (!) precision of 64 bit doubles.

Max The Magnificent
User Rank
Blogger
Re: Decimal floating point...
Max The Magnificent   1/8/2014 12:01:49 PM
@TanjB: ...as for the need for precision in a world where resistors might be accurate only to a percent, it is amazing how easy it is to get yourself into trouble with the math once you start doing simulations and...

VERY good point!!!

betajet
User Rank
CEO
Re: Decimal floating point...
betajet   1/8/2014 2:07:22 PM
If your calculations are becoming unstable even with double precision, it's time to step back and do a proper numerical analysis of your problem.  Here's what too many people forget:  floating-point numbers are not real numbers, so the normal laws of real numbers -- like associativity of addition -- do not apply.  When you add a tiny floating-point number X to a big floating-point number Y, all the bits of X fall into the bit bucket and you end up with Y, not X+Y.  Sometimes you need to use algebraic tricks to re-write your formulas into expressions that are stable for your problem and hope the compiler doesn't "optimize" them.
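Both effects, runnable in C (single precision; at 1.0e8 the spacing between adjacent floats is 8, so 1.0 is far below half a ULP):

#include <stdio.h>

int main(void)
{
    float big = 1.0e8f, tiny = 1.0f;

    printf("%d\n", big + tiny == big);  /* prints 1: tiny fell into the bit bucket */

    /* Associativity fails: the same three values, grouped differently. */
    float a = (big - big) + tiny;       /* 0 + 1 -> 1 */
    float b = big + (tiny - big);       /* tiny - big rounds to -big, so -> 0 */
    printf("%g %g\n", a, b);            /* prints: 1 0 */
    return 0;
}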

I've read that John von Neumann greatly disliked floating-point because (1) he'd rather use those exponent bits for more precision, and (2) once you've done your numerical analysis you've already completed most of the work needed to represent your problem using fixed-point arithmetic.

Max The Magnificent
User Rank
Blogger
Re: Decimal floating point...
Max The Magnificent   1/8/2014 3:39:43 PM
@betajet: ...once you've done your numerical analysis you've already completed most of the work needed to represent your problem using fixed-point arithmetic.

LOL  I think the main thing is to understand what one is trying to do and take the expected data and application into account. As you note, if you perform Y + X where Y is a very big value and X is a very small one, you will end up with just Y... but if X and Y are both in the same ballpark size-wise, then the problem is much reduced.

TanjB
User Rank
Rookie
Re: Decimal floating point...
TanjB   1/11/2014 2:32:48 AM
BetaJet, well, yeah, it would be nice to have a "proper analysis," but modern optimization software handles problems so huge (matrices with millions of rows and columns) that no one really has a proper theory of what happens.  As you say, FP is actually a set of fractional approximations, and there are situations where severe loss of precision can occur.  Observation suggests there are real-world reasons why actual optimization problems routinely come close to singularity.  In practice, all commercial packages have black-art tweaks to detect and recover.

One of my colleagues wrote an infinite-precision (unlimited rationals) arithmetic package, and we used it to get some insights and to check what the true optimal solutions were for some test cases.  It was educational, but too slow for real-world use.

The field has changed enormously since JvN's time.  Heck, I think he died in that car crash before Simplex even became widespread.  Numerical optimization theory blossomed in the 1980s with real insights into non-linear problems, and then the implementations accelerated enormously in the 1990s and 2000s.  Only the square root of the improvement is due to hardware; the rest is due to clever algorithms.  I'm sure that John would love the kinds of optimization we do today for monster problems like deep neural networks, but it is a hugely different field than the one he helped start.

azassoko
User Rank
Rookie
Erroneous example
azassoko   1/8/2014 8:11:23 AM
Tom,

Sorry to disappoint you, but 101.011 = 5.375, not 5.15. 
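For the record, expanding the place values: 101.011 = 1x2^2 + 0x2^1 + 1x2^0 + 0x2^-1 + 1x2^-2 + 1x2^-3 = 4 + 1 + 0.25 + 0.125 = 5.375.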

Sic transit gloria mundi...

AZ

tom-ii
User Rank
Blogger
Re: Erroneous example
tom-ii   1/8/2014 8:43:43 AM
@azassoko: Sorry to disappoint you, but 101.011 = 5.375, not 5.15.

D'oh!  Would you believe I'm no good at math?

@azassoko: Sic transit gloria mundi

And if this is how the world progresses, then we're in trouble...  Oh, wait...

Max The Magnificent
User Rank
Blogger
Re: Erroneous example
Max The Magnificent   1/8/2014 9:58:37 AM
@Azassoko: Sic transit gloria mundi

"Obesa cantavit" (The fat lady has sung :-)


