I don’t know about you, but I typically have a number of "back-burner" projects on the go. Currently I'm playing with creating my own simple binary floating-point format as part of an educational tool I'm working on (on the off-chance you're interested, this will be a 3-byte format comprising a 1-bit sign, 7-bit exponent, and 16-bit mantissa).
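To make the layout concrete, here's a minimal sketch of packing and unpacking those three fields into 3 bytes. Note that details like field order (sign in the top bit) and big-endian byte order are my assumptions for illustration, not part of the format as described:

```python
# Pack/unpack a 24-bit float: 1-bit sign, 7-bit exponent, 16-bit mantissa.
# Field ordering (sign | exponent | mantissa, MSB first) is assumed.

def pack24(sign: int, exponent: int, mantissa: int) -> bytes:
    """Pack the three fields into 3 bytes, big-endian."""
    assert sign in (0, 1)
    assert 0 <= exponent < 128       # 7 bits
    assert 0 <= mantissa < 65536     # 16 bits
    word = (sign << 23) | (exponent << 16) | mantissa
    return word.to_bytes(3, "big")

def unpack24(b: bytes) -> tuple[int, int, int]:
    """Recover (sign, exponent, mantissa) from 3 bytes."""
    word = int.from_bytes(b, "big")
    return (word >> 23) & 1, (word >> 16) & 0x7F, word & 0xFFFF
```

Round-tripping any field combination through `pack24`/`unpack24` returns it unchanged.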
The tricky thing is that I'm doing all of this at the assembly language level (for use on a cunning virtual microprocessor I've developed with a friend), and I've run into a bit of a problem. They always say that you don’t realize what you don’t know until you try to explain it to someone else, and they (whoever "they" are) are right.
So, here's the deal: I understand how to implement assembly-level versions of math operations like addition, subtraction, multiplication, and division in binary floating-point. The problem I've run into is how to take decimal numbers like 3,142,000,000 or 0.000003142 and convert them into their binary floating-point counterparts. Similarly, how does one go about taking a binary floating-point representation and converting it back into its decimal counterpart for display?
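Just to show the shape of the problem (not the assembly-level solution), here's a high-level sketch of one way to do the conversion: normalize the value to a mantissa in [0.5, 1) times a power of two, then keep 16 fraction bits. The excess-63 exponent bias and the decision to store the leading bit explicitly (no hidden bit) are my assumptions, not the format's actual spec:

```python
# Hedged sketch: decimal value <-> (sign, 7-bit biased exponent, 16-bit mantissa).
# Assumes excess-63 bias and an explicit leading mantissa bit; special values
# (infinities, exponent overflow/underflow) are ignored for brevity.
import math

BIAS = 63  # assumed bias for the 7-bit exponent field

def encode(value: float) -> tuple[int, int, int]:
    """Return (sign, biased_exponent, mantissa) approximating value."""
    sign = 0 if value >= 0 else 1
    frac, exp = math.frexp(abs(value))   # value = frac * 2**exp, frac in [0.5, 1)
    mantissa = round(frac * (1 << 16))   # keep 16 fraction bits
    if mantissa == (1 << 16):            # rounding carried out; renormalize
        mantissa >>= 1
        exp += 1
    return sign, exp + BIAS, mantissa

def decode(sign: int, biased_exp: int, mantissa: int) -> float:
    """Reconstruct the decimal value from the three fields."""
    value = math.ldexp(mantissa / (1 << 16), biased_exp - BIAS)
    return -value if sign else value
```

With 16 mantissa bits the round trip is accurate to roughly 1 part in 65,536, so both example values above survive encode/decode to about four or five significant digits. The assembly-level version has to do the same normalization with shifts and the decimal scaling with multiprecision multiply/divide by powers of ten, which is where it gets interesting.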
Be warned: this isn’t as easy as you might think. Any ideas? (If so, don't hesitate to email me at firstname.lastname@example.org.)