Every now and then, a real game-changer comes along. One arrived today, when the folks at Altera announced that they have introduced hardened floating-point DSP blocks in their FPGAs and SoCs (in this context, SoCs refers to FPGAs that also contain hard ARM Cortex processor subsystems).
Until now, designers working with FPGAs have been forced to realize their DSP algorithms using fixed-point arithmetic. There are some advantages to working with fixed-point values, but there are also significant disadvantages, chief among them being that fixed-point formats can represent only a limited range of values, which makes fixed-point arithmetic susceptible to overflow, underflow, and quantization errors.
Conversely, there are some disadvantages when it comes to working with floating-point values, but there are also a lot of advantages, including the fact that they have a much larger dynamic range than their fixed-point cousins.
When it comes to implementing DSP algorithms in FPGAs, designers typically start working at a high level of abstraction -- perhaps using MATLAB or Simulink from MathWorks -- and they also typically start with floating-point values. Translating these floating-point representations into fixed-point equivalents is a non-trivial task that can bring the strongest amongst us to our knees. It can take a huge amount of time to ensure that the fixed-point signal path can handle the algorithms without overflowing values or introducing artifacts into the data stream.
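To see why this translation is so fraught, consider a minimal sketch of a 16-bit Q1.15 fixed-point format (a common DSP convention: one sign bit, 15 fractional bits, representable range [-1, +1)). All function names here are illustrative, not part of any Altera tool flow -- the point is simply that a multiply that is exact in floating point can silently saturate in fixed point:

```python
# Sketch of why float-to-fixed translation is non-trivial: a value that is
# trivial to represent in float32 can overflow a Q1.15 fixed-point format.
# All names here are illustrative.

Q15_SCALE = 1 << 15          # Q1.15: 1 sign bit, 15 fractional bits
Q15_MAX = Q15_SCALE - 1      # largest representable value, ~+0.99997
Q15_MIN = -Q15_SCALE         # smallest representable value, -1.0

def to_q15(x):
    """Quantize a real value to Q1.15, saturating at the format limits."""
    raw = int(round(x * Q15_SCALE))
    return max(Q15_MIN, min(Q15_MAX, raw))

def q15_mul(a, b):
    """Fixed-point multiply: full-precision product, rescale, saturate."""
    raw = (a * b) >> 15
    return max(Q15_MIN, min(Q15_MAX, raw))

def from_q15(x):
    """Convert a Q1.15 integer back to a real value."""
    return x / Q15_SCALE

# A gain of 1.5 simply cannot be represented in Q1.15 (range is [-1, +1)),
# so the coefficient saturates and the product is silently wrong:
gain = to_q15(1.5)                      # saturates to ~0.99997
signal = to_q15(0.5)
result = from_q15(q15_mul(gain, signal))
print(result)                           # ~0.5, not the correct 0.75
```

In a real design, the fix would be to rescale the signal path or widen the format at that stage -- and it is exactly this per-stage range analysis, repeated across an entire datapath, that consumes so much engineering time.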
In order to get around this, FPGA designers sometimes implement floating-point data paths using a combination of hardened fixed-point multipliers and soft programmable fabric. Altera has a very nice implementation called Fused Datapath that uses extra bits in the mantissa to reduce the amount of normalization and de-normalization operations that have to be performed. Like any other "soft" floating-point implementation, however, these do consume large amounts of programmable fabric resources, burn a lot of power (relatively speaking), and are limited in performance.
With regard to today's announcement, what the folks at Altera have done is really rather clever. They already have a hardened variable-precision fixed-point DSP block that can support standard-precision (18-bit) or high-precision (27-bit) modes. They've now added a third mode that supports IEEE 754-compliant single-precision floating-point calculations.
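For readers who haven't worked with it directly, IEEE 754 single precision packs a number into 32 bits as one sign bit, an 8-bit biased exponent, and a 23-bit mantissa (fraction). A quick sketch using Python's standard `struct` module shows the layout -- this is just the standard bit format, not anything specific to Altera's implementation:

```python
# Decompose a float into its IEEE 754 single-precision bit fields:
# 1 sign bit, 8 exponent bits (biased by 127), 23 mantissa bits.
import struct

def float32_fields(x):
    """Unpack a value into IEEE 754 single-precision (sign, exponent, mantissa)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF      # biased exponent
    mantissa = bits & 0x7FFFFF          # implicit leading 1 for normal values
    return sign, exponent, mantissa

# -6.5 = -1.625 * 2**2  →  sign=1, exponent=127+2=129, mantissa=0.625 * 2**23
print(float32_fields(-6.5))  # (1, 129, 5242880)
```

The 8-bit exponent is what gives floating point its enormous dynamic range (roughly 10^-38 to 10^38 for normal values) compared with any fixed-point format of similar width.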
It turns out that this capability is already in Altera's high-performance mid-range 20nm Arria 10 FPGAs and SoCs, which are currently shipping (the little scamps at Altera held this nugget of information back until they were ready to announce it). This means Arria 10 FPGAs and SoCs will be able to offer DSP datapaths operating at 400 to 450 MHz, providing up to 1.5 TFLOPS of single-precision floating-point performance.
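As a back-of-envelope sanity check on that figure: a common convention is to count a floating-point multiply plus an add (i.e., a multiply-accumulate) as two FLOPs per block per cycle. That counting convention is our assumption here, not something Altera has stated, but under it the quoted figure implies on the order of 1,700 DSP blocks all running flat out:

```python
# Back-of-envelope check on the quoted 1.5 TFLOPS figure, assuming each
# DSP block in floating-point mode retires one multiply and one add
# (2 FLOPs) per clock cycle -- a common counting convention, assumed here.
FLOPS_PER_BLOCK_PER_CYCLE = 2      # multiply-accumulate convention (assumption)
clock_hz = 450e6                   # top of the quoted 400-450 MHz range
target_flops = 1.5e12              # 1.5 TFLOPS

blocks_needed = target_flops / (FLOPS_PER_BLOCK_PER_CYCLE * clock_hz)
print(round(blocks_needed))        # ≈ 1667 DSP blocks at full utilization
```

The real takeaway is that peak throughput scales linearly with block count and clock rate, so larger family members with more DSP blocks will sit at the top of that range.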
These hardened floating-point DSP blocks are also going to be available in Altera's 14nm Stratix 10 FPGAs and SoCs when they become available in 2015. In this case, Stratix 10 FPGAs and SoCs will be able to offer DSP datapaths providing up to 10 TFLOPS of single-precision floating-point performance.
There are two really important things to note here. First, this capability is not going to be limited to a subset of devices. These hardened floating-point DSP blocks are going to be in every member of the 20nm Arria 10 and 14nm Stratix 10 families. Second, the floating-point DSP blocks are backwards-compatible with existing designs. Users can configure each block to run in any of its three modes (18-bit fixed-point, 27-bit fixed-point, or IEEE-compliant single-precision floating-point).
As I mentioned earlier, this is a real game-changer. The higher performance and lower power consumption provided by hardened floating-point functions targets floating-point applications in all five military domains (air, land, sea, space, and cyber). Similarly, this capability is of tremendous interest in the commercial world for compute and storage applications, including oil and gas (seismic calculations), datacenters (search and analytics), security (facial recognition, artificial intelligence), finance (risk analysis, best price algorithms, real-time hedging valuation), research (bioinformatics, quantum chemistry, life sciences), manufacturing and industrial control (mold flow, fluid dynamics, structural mechanics), and... the list goes on.
Another huge consideration is the time-to-market advantages that ensue from using hardened floating-point DSP blocks.
The original design flow that involved creating the design and verifying the algorithms in floating-point and then translating them to fixed-point was laborious and time-consuming. Implementing "soft" floating point as a mix of hardened multipliers and programmable fabric improved the situation to some extent, but the results were less than ideal in terms of performance and power consumption.
Now, the ability to create the design using floating-point and then directly implement that design in hardened floating-point DSP blocks inside the FPGA promises to dramatically reduce development time and cost (Altera is saying this new flow can save six to 12 months on a complex design).
Of course, we've only scratched the surface of this topic here. For more information, please bounce on over to Altera's website at Altera.com.
— Max Maxfield, Editor of All Things Fun & Interesting