If you ask engineers how numbers are represented, stored and manipulated in computers and calculators, most will reply "as signed or unsigned binary integers" or "as fixed-point or floating-point binary values." Inquire about binary coded decimal (BCD) representations, and the response is almost invariably, "That went out of style 25 to 30 years ago; no one uses it now."
But that is not entirely true. For example, every pocket calculator in the world is based on some form of decimal arithmetic, not binary. Binary floating-point values can only approximate common decimal numbers: a value of 0.1, for example, requires an infinitely recurring binary pattern of zeros and ones. When the average user performs a calculation such as 0.1 + 0.9, he or she expects a result of 1.0, and would find an answer of 0.999999 very disconcerting.
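The recurring-pattern problem is easy to demonstrate in software. The short Python snippet below (purely illustrative; the article's concern is hardware) prints the actual binary value stored for 0.1 and shows the approximation error accumulating:

```python
# 0.1 cannot be stored exactly in binary; converting the stored double
# to Decimal reveals the value the hardware actually works with.
from decimal import Decimal

print(Decimal(0.1))  # 0.1000000000000000055511151231257827...

# The tiny error surfaces as soon as the approximations accumulate:
total = 0.0
for _ in range(10):
    total += 0.1
print(total)  # 0.9999999999999999, not 1.0
```

Ten additions of "0.1" land just short of 1.0 -- exactly the kind of result a calculator user must never see.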
Further, the majority of the world's commercial and financial data is stored and manipulated in decimal form. In many cases, the law requires that results generated from financial calculations performed on a computer exactly match those carried out using pencil and paper. That can only be achieved if the calculations are executed in decimal.
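The pencil-and-paper requirement can be made concrete with a textbook currency example, sketched here using Python's standard decimal module (a software stand-in for the hardware arithmetic under discussion):

```python
from decimal import Decimal, ROUND_HALF_UP

# Pencil-and-paper: a 5% markup on $0.70 gives 0.70 * 1.05 = 0.7350,
# which rounds up to $0.74 under the usual half-up rule.
exact = Decimal("0.70") * Decimal("1.05")
rounded = exact.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(exact, rounded)         # 0.7350 0.74

# Binary floating point loses the exact tie before rounding even begins,
# and settles on the wrong cent:
print(round(0.70 * 1.05, 2))  # 0.73
```

Decimal arithmetic reproduces the manual result exactly; the binary version is off by a cent -- a discrepancy a financial regulator will not accept.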
Hardware support for decimal operations, however, has been limited, to say the least. But all that is set to change. The new IEEE 754r standard defines a single data type that can be used for integer, fixed-point and floating-point decimal arithmetic.
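As a rough illustration of how one format can span all three kinds of arithmetic, Python's decimal module can be configured to mimic the numeric envelope of the standard's 64-bit decimal format -- 16 significant digits, decimal exponents from -383 to +384. (This models the arithmetic only, not the standard's bit-level encoding.)

```python
from decimal import Context, Decimal

# A context approximating the decimal64 parameters:
# 16-digit precision, exponent range -383 to +384.
d64 = Context(prec=16, Emin=-383, Emax=384)

# The same format handles integer, fixed-point and floating-point values:
print(d64.add(Decimal("12345"), Decimal("1")))           # integer: 12346
print(d64.add(Decimal("0.10"), Decimal("0.90")))         # fixed-point: 1.00
print(d64.multiply(Decimal("1E+300"), Decimal("1E-5")))  # floating: 1E+295
```

Note how the fixed-point sum preserves its two decimal places -- the format carries scale information that binary floating point simply does not have.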
As a starting point, the decimal-encoded formats and arithmetic described in the standard will be shipped in IBM's Power6 processor. But the most interesting arena may well be the creation of FPGA-based decimal arithmetic coprocessors to serve the financial and commercial markets.
FPGAs go Wall Street
To speed engineering and scientific calculations, today's computers include high-performance, floating-point coprocessors. By comparison, there is little in the way of hardware assist for financial applications that perform operations on data stored in decimal form. General-purpose processors, such as those from AMD and Intel, provide the ability to add and subtract values stored in decimal format, but that's as far as they go. More-complex operations like multiplication and division must be constructed from the ground up using shifts, addition and subtraction.
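To make that gap concrete, here is a sketch of how a decimal multiply must be assembled from the only primitives such processors offer -- digit-wise addition, and shifting. The Python below is illustrative only: the least-significant-digit-first list layout and both function names are inventions for the example, not any processor's actual instruction set.

```python
def bcd_add(a, b):
    """Add two decimal numbers held as digit lists (least-significant
    digit first), propagating carries a decimal digit at a time."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return result

def bcd_mul(a, b):
    """Multiply by repeated addition of shifted partial products --
    the 'ground up' construction software must use when the hardware
    provides only decimal add and subtract."""
    product = [0]
    for i, digit in enumerate(b):
        # Shift 'a' left by i decimal places (prepend i low-order zeros),
        # then add that partial product 'digit' times.
        partial = [0] * i + a
        for _ in range(digit):
            product = bcd_add(product, partial)
    return product
```

Every multiplier digit costs up to nine full-width additions -- a vivid reminder of why software-only decimal arithmetic runs so much slower than hardwired binary multiply.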
What's more, these decimal-related instructions are typically available only at the assembly language level. Financial application developers must construct (or purchase) a library of math functions that make these assembly-level instruction calls. Not surprisingly, the resulting applications do not perform their calculations as fast as one might hope.
Over the past few years, the term "computational" has been applied to a wide range of engineering and scientific disciplines. Computational chemistry, for example, refers to the use of computer-based modeling to calculate the structures and properties of molecules and solids. Similarly, there are disciplines such as computational biology, computational physics, computational mechanics, computational geoscience--the list goes on.
All of these fields involve creating sophisticated algorithms to model and predict the behavior of complex systems. And they all involve a tremendous expenditure of computational power.
One solution is to use a supercomputer; another is to create "computer farms" involving tens, hundreds or even thousands of general-purpose computing engines. But both of those options can be phenomenally expensive.