# Binary Coded Decimal (BCD) 101 - Part 1

**Editor's Note:** This is the first article in a two-part series on decimal representations and decimal arithmetic in general, and on *Binary Coded Decimal (BCD)* in particular. In this first installment, we consider why decimal representations and arithmetic are of interest to today's FPGA designers. In Part 2 (coming next week) we will plunge headfirst into the nitty-gritty details of using BCD (and related decimal codes) to represent – and perform calculations on – unsigned integers, signed (ten's complement) integers, and fixed- and floating-point values.

If you ask engineers how numbers are represented, stored, and manipulated in computers and calculators, most will reply: *"as signed or unsigned binary integers or as fixed-point or floating-point binary values"* (before you send me an email, I know that binary integers are a special case of a fixed-point representation). And even if one should happen to enquire about Binary Coded Decimal (BCD) representations, the response is almost invariably: *"Oh, that went out of style 25 to 30 years ago; no one uses it now."*

However, this turns out not to be strictly true. Did you know, for example, that every pocket calculator in the world is based on some form of decimal arithmetic – not binary? This is because binary floating-point values can only approximate common decimal numbers. A value of 0.1 (one tenth), for example, requires an infinitely recurring binary pattern of 0s and 1s. When the average user performs a calculation such as 0.1 + 0.9, they expect to be presented with a result like 1.0, and they would find it very disconcerting to be presented with an answer like 0.999999.
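A quick sketch in Python makes the point (the `decimal` module is used here purely as a convenient stand-in for decimal arithmetic hardware):

```python
from decimal import Decimal

# 0.1 has no finite binary expansion, so each binary addition
# compounds the representation error.
total = 0.0
for _ in range(10):
    total += 0.1
print(total)                  # 0.9999999999999999, not 1.0

# The same sum performed in decimal arithmetic is exact.
dtotal = Decimal("0")
for _ in range(10):
    dtotal += Decimal("0.1")
print(dtotal)                 # 1.0
```

Ten additions of one tenth land exactly on 1.0 in decimal, but just short of it in binary, which is precisely the "0.999999" surprise a calculator user would never tolerate.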

Furthermore, the majority of the world's commercial and financial data is stored and manipulated in decimal form. In many cases, there are legal requirements that the results generated from financial calculations performed on a computer exactly match those carried out using pencil and paper, and this can only be achieved if the calculations are executed in decimal.
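As a concrete illustration of the "match the pencil-and-paper result" requirement, consider a 5% tax on a 70-cent charge, rounded to the nearest cent (a minimal sketch, again using Python's `decimal` module as a proxy for decimal hardware):

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary: 0.70 and 1.05 are both approximated, and their product
# lands just below 0.735, so it rounds DOWN to 0.73.
print(round(0.70 * 1.05, 2))   # 0.73

# Decimal: the product is exactly 0.7350, which rounds UP to 0.74 --
# the answer a pencil-and-paper check (or an auditor) expects.
tax = Decimal("0.70") * Decimal("1.05")
print(tax.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 0.74
```

A one-cent discrepancy per transaction, multiplied across millions of transactions, is exactly the sort of thing that regulations on financial computation exist to prevent.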

**Editor's Note:** Before you start criticizing my use of the phrase "data is stored" in the preceding paragraph, may I be so bold as to quote myself from *How Computers Do Math featuring the pedagogical and phantasmagorical Virtual DIY Calculator* (ISBN: 0471732788):

The term "data" is the plural of the Latin *datum*, meaning "something given". The plural usage is still common, especially among scientists, so it's not unusual to see expressions like: "These data are..."

However, it is becoming increasingly common to use "data" to refer to a single group entity such as information. Thus, an expression like "This data is..." would also be acceptable to a modern audience.

Until recently, however, hardware support for decimal representations and mathematical operations has been limited, to say the least. But all that is set to change. The new IEEE 754r Standard defines a single data type that can be used for integer, fixed-point, and floating-point decimal arithmetic. As a starting point, the decimal-encoded formats and arithmetic described in this standard will be shipped in IBM's Power6 Processor. But the most interesting arena may well be the creation of FPGA-based decimal arithmetic co-processors to serve the financial and commercial markets.

**FPGAs go Wall Street**

In order to speed engineering and scientific calculations, today's computers include high-performance binary floating-point coprocessors. By comparison, there is little in the way of hardware assist for financial applications that perform operations on data stored in decimal form. General-purpose processors – such as those from AMD and Intel – do provide the capability to add and subtract values stored in decimal format, but that's pretty much as far as it goes. More complex operations like multiplication and division have to be constructed from the ground up using shifts, addition, and subtraction.
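To see what "constructed from the ground up using shifts, addition, and subtraction" looks like in practice, here is a minimal Python sketch of decimal multiplication. The little-endian digit-list representation is my own illustrative choice, not any processor's format; the only primitives used are decimal left-shift (append a zero digit, i.e. multiply by ten) and repeated decimal addition:

```python
def dec_add(a, b):
    """Add two decimal numbers held as little-endian digit lists."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        d = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(d % 10)        # keep one decimal digit
        carry = d // 10           # propagate the decimal carry
    if carry:
        out.append(carry)
    return out

def dec_mul(a, b):
    """Multiply using only decimal shifts (x10) and repeated addition."""
    product = [0]
    for pos, digit in enumerate(b):
        partial = [0] * pos + a   # shift left by pos = multiply by 10**pos
        for _ in range(digit):    # digit * partial by repeated addition
            product = dec_add(product, partial)
    return product

# 123 * 12 = 1476 (digit lists are least-significant-digit first)
print(dec_mul([3, 2, 1], [2, 1]))
```

Every trip around the inner loop is one full multi-digit addition, which is why software-built decimal multiply and divide are so much slower than their hardwired binary counterparts.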

Furthermore, these decimal-related instructions are typically available only at the assembly language level. Developers of financial applications have to construct (or purchase) a library of math functions that – ultimately – make these assembly-level instruction calls. Not surprisingly, the resulting applications do not perform their calculations as fast as one might hope.

**Editor's Note:** Actually, the description above is something of a simplification. In fact, the Intel x86 architecture doesn't support decimal add/subtract per se; rather, it has adjust instructions: one adds a pair of BCD digits (held in a byte) using a binary ADD, then "adjusts" the byte to re-encode the result in BCD (with a possible carry). Furthermore, the above was the case with regard to the early 8-bit processors. As Intel went to wider words (16-bit, 32-bit, 64-bit) they didn't add corresponding decimal adjust instructions; these instructions still work only on bytes. This means that – in the case of a 32-bit processor, for example – it's faster to process eight packed BCD digits at a time in binary and then use other tricks to perform the decimal adjustments. As a result, it's doubtful that anyone actually uses those adjustment instructions any more, except perhaps in legacy code dating from the 1980s.
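To make the add-then-adjust trick concrete, here is a short Python model of the behavior (a sketch of the technique, not actual x86 code): two packed-BCD digits per byte are added with an ordinary binary ADD, and then any nibble that overflowed past 9 is fixed up by adding 6 to it.

```python
def bcd_add_byte(a, b, carry_in=0):
    """Add two packed-BCD bytes (two decimal digits per byte).

    Models the classic sequence: a plain binary ADD followed by a
    decimal adjust that adds 6 to any nibble that overflowed past 9,
    skipping the six unused codes (0xA-0xF) in each BCD digit.
    """
    s = a + b + carry_in                                  # ordinary binary ADD
    if (s & 0x0F) > 9 or (a & 0x0F) + (b & 0x0F) + carry_in > 0x0F:
        s += 0x06                                         # adjust the low digit
    if ((s >> 4) & 0x0F) > 9 or s > 0xFF:
        s += 0x60                                         # adjust the high digit
    return s & 0xFF, int(s > 0xFF)                        # result byte, carry out

# 0x45 + 0x55 encodes 45 + 55 = 100: result byte 0x00 with carry-out 1
print(bcd_add_byte(0x45, 0x55))
```

The binary ADD of 0x45 and 0x55 gives 0x9A; adding 6 fixes the illegal low nibble (0xA) to give 0xA0, and adding 0x60 fixes the high nibble, producing 0x00 plus a decimal carry – the correct BCD answer for 45 + 55 = 100.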

Over the last few years, the term "computational" has started to be applied to a wide range of engineering and scientific disciplines. *Computational Chemistry*, for example, refers to the use of computer-based modeling to calculate the structures and properties of molecules and solids. Similarly, there are disciplines such as *Computational Biology*, *Computational Physics*, *Computational Mechanics*, *Computational Geoscience*, and ... the list goes on. Moving a little closer to home, in the case of silicon chip design, for example, the folks at Brion Technologies talk about their solutions for *Computational Lithography*.

All of these fields involve creating sophisticated algorithms to model and predict the behavior of complex systems. Furthermore, they all involve the expenditure of tremendous quantities of computational power. One solution is to use a supercomputer; another is to create "computer farms" involving tens, hundreds, or even thousands of general-purpose computing engines; but both of these options can be phenomenally expensive.

An alternative is to use a combination of a general-purpose computing engine, which is used to execute control-type tasks, and an FPGA-based computing engine, which is used to perform algorithmic data-processing tasks at extreme speed. There are already a variety of intellectual property (IP) cores available for implementing fixed-point and floating-point hardware accelerators in FPGAs. Thus far, however, little work has been performed (at least, not publicly) with regard to implementing equivalent cores that work with decimal data.

At the time of this writing, *Computational Finance* is a humongous, largely untapped market, but this situation is poised to change... In order for this change to occur, however, several things are required:

- A standard for decimal arithmetic
- An appropriate hardware platform
- A tool chain for application developers
- A decimal arithmetic IP library

On the bright side, most of these elements are now in place...