# Humans still thrive on decimals

When it comes to representing, storing and manipulating numbers in computers, most engineers use some form of binary (base-2) representation. This is especially true when performing engineering or scientific calculations, in which case binary floating-point representations adhering to IEEE 754 are the norm. Binary floating-point representations have the advantage of being very efficient in terms of storage requirements. Furthermore, calculations performed on these representations are usually assisted by special binary floating-point coprocessing units, which do the job quickly and efficiently.

But binary floating-point representations can only approximate many common decimal values. For example, one-tenth, represented as 0.1 in decimal, requires an infinitely recurring series of binary digits, which can cause problems to surface when you least expect them. On the extremely useful Web site at *www2.hursley.ibm.com/decimal*, IBM fellow Mike Cowlishaw notes a variety of such artifacts. For instance, when using the C or Java "double" data type, performing the operation 0.1 × 8 returns the result 0.8000000000000000444089209850062616169452667236328125. By comparison, 0.1 added to itself eight times yields 0.79999999999999993338661852249060757458209991455078125. If these values are multiplied by 10 and rounded using the "floor" rounding function, the result is 8 in one case and 7 in the other.
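This artifact is easy to reproduce in any language whose floating-point type is an IEEE 754 double; a quick sketch in Python (whose `float` is a 64-bit double):

```python
import math

# 0.1 has no exact binary representation, so the two routes to
# "0.8" accumulate rounding error differently.
product = 0.1 * 8       # a single rounding step (the scaling by 8 is exact)
total = 0.0
for _ in range(8):      # eight rounding steps, one per addition
    total += 0.1

print(product)                    # 0.8
print(total)                      # 0.7999999999999999

# Scaling by 10 and flooring exposes the difference.
print(math.floor(product * 10))   # 8
print(math.floor(total * 10))     # 7
```

The two values differ only in the last few bits, but the floor function turns that tiny discrepancy into a whole-unit difference.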

Even a relatively simple (and single) calculation can return unexpected results. For example, evaluating a 5 percent sales tax on an item such as a 70-cent phone call in decimal returns $0.70 × 1.05 = $0.735, which rounds up to $0.74. In binary floating point using the "double" data type, the result is 0.73499999999999998667732370449812151491641998291015625, which rounds down to $0.73. Such differences can become significant over the course of multiple operations, which is why the law requires that many financial calculations be done using decimal representations.
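Python's standard-library `decimal` module (a software decimal implementation, not anything from the article itself) shows both outcomes side by side; a minimal sketch:

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary double: the value actually stored for 0.70 * 1.05 is
# slightly below 0.735, so rounding to cents goes down.
binary_result = 0.70 * 1.05
print(round(binary_result, 2))    # 0.73

# Decimal arithmetic represents 0.735 exactly, so conventional
# half-up rounding goes up, as the tax rules expect.
exact = Decimal("0.70") * Decimal("1.05")                 # Decimal('0.7350')
cents = exact.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(cents)                      # 0.74
```

Note that the decimal values are constructed from strings; writing `Decimal(0.70)` would capture the already-approximated binary double and defeat the purpose.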

**BCD form**

When working in decimal, values are traditionally stored in binary-coded decimal (BCD) form, in which six out of 16 binary patterns remain unused for each 4-bit field. As a result, BCD values require roughly 20 percent more storage than a pure binary equivalent. In exchange, BCD avoids convoluted conversions from the decimal to the binary realm and vice versa. If you think such conversions are easy, consider the value 1.75 in decimal, or 1.11 in binary. How would you convert the former into the latter? (Hint: Starting by multiplying 1.75 by 100 to give 175--which equates to 10101111 in binary--is not the way to go.)
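One textbook answer to the conversion question is repeated doubling of the fractional part: each carry out of the integer position becomes the next binary digit after the point. The helper names below are hypothetical illustrations, not anything from the article; the second function also sketches the BCD packing described above.

```python
def frac_to_binary(value, max_bits=16):
    """Convert a non-negative number such as 1.75 to a binary string."""
    integer, frac = divmod(value, 1.0)
    bits = format(int(integer), "b") + "."
    for _ in range(max_bits):
        frac *= 2                 # double the fraction...
        if frac >= 1.0:
            bits += "1"           # ...a carry means the next bit is 1
            frac -= 1.0
        else:
            bits += "0"
        if frac == 0.0:
            break
    return bits

def to_bcd(n):
    """Pack a decimal integer into BCD, one 4-bit nibble per digit."""
    bcd = 0
    for shift, digit in enumerate(reversed(str(n))):
        bcd |= int(digit) << (4 * shift)
    return bcd

print(frac_to_binary(1.75))   # 1.11
print(hex(to_bcd(175)))       # 0x175 -- each nibble holds one decimal digit
```

For 1.75: the fraction 0.75 doubled gives 1.5 (emit 1), then 0.5 doubled gives 1.0 (emit 1), and the algorithm terminates with "1.11". An infinitely recurring fraction such as 0.1 would simply hit the `max_bits` cutoff instead.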

The disadvantage of performing calculations in decimal (in software) is that they can take 100 to 1,000 times longer than the equivalent calculations performed in binary floating point (in hardware). However, if hardware implementations of decimal-based math operations were realized--in an FPGA, for example--then decimal calculations could run almost as fast as binary ones at the cost of only approximately 15 percent more gates.

Will decimal-based computer arithmetic replace binary floating point? Probably not in my lifetime. But there is increasing interest in the acceleration of decimal calculations, and it will be interesting to see how things progress.

*Clive (Max) Maxfield, Editor of Programmable Logic DesignLine and author of "How Computers Do Math" (ISBN: 0471732788, www.DIYCalculator.com).*