# Dynamically-reconfigurable ECAs - Part 5 (Student Project #3)

See also:

– Part 1 (Architecture)

– Part 2 (Programming Model)

– Part 3 (Student Project #1 – FIR Filter)

– Part 4 (Student Project #2 – ZigBee Receiver)

**Editor's Note:** *ASICs, FPGAs, CPUs/DSPs, and SoCs have been joined by a new kid on the block – the Elemental Computing Array (ECA) from Element CXI. In Part 1 of this mini-series we introduced the ECA architecture; in Part 2 we considered the programming model for these devices.*

*At that time we said that we would be presenting a number of real-world ECA-based projects, implemented by students of Dr. Peter Athanas of the Department of Electrical and Computer Engineering at Virginia Tech in his Masters/PhD class on Configurable Computing. The interesting thing is that – within a couple of weeks of receiving the ECA design software and development boards, and with minimal training – these students managed to get a variety of projects up and running.*

*Thus, in Part 3 we presented a simple ECA-based FIR filter design implemented by Abhranil Maiti, and in Part 4 we offered an ECA-based ZigBee receiver design by Chen Zhang. Now, in Part 5, we present a simple image processor implemented by Alexander R. Marschner.*

**Introduction**

Element CXI has been introduced as the newest offering in the field of configurable computing, claiming the capacity for both parallel and distributed sequential computations. Using a highly scalable array of "Elements" that perform different classes of "elemental" operations, Element CXI's Elemental Computing Array (ECA) promises to deliver faster reconfiguration times than an FPGA, lower power and area use than an ASIC, and a library-based graphical design tool that facilitates fast production of complex designs.

However, the as-yet-unanswered question is whether, from an application designer's perspective, the Element CXI platform can measure up to its marketing. This elemental architecture may have advantages over other reconfigurable platforms, but is it simple for a designer to take an application formerly implemented in an FPGA, for example, and transfer that application over to the Element CXI platform?

In this article, the first stage of an image processing algorithm is presented in two functionally equivalent implementations. The first is a gate-level implementation that has been proven on an FPGA platform. The second uses the Element CXI tool set, from design to simulation, in order to explore this new design paradigm.

**Object identification algorithm**

The object identification algorithm is the first stage in an object tracking algorithm developed for graduate research performed in Virginia Tech's Configurable Computing Machines (CCM) Lab. It is a simple video processing algorithm that takes a real-time video stream, compares each constituent pixel against the color of a target object, and determines which pixels lie sufficiently close to that color – that is, whose color distance falls below a target threshold – to be considered part of the target object.

*Fig 1. Gate-level implementation of the object identification algorithm.*


*Fig 1* shows the diagram of a gate-level implementation of the object identification algorithm that has been realized in an FPGA. There are four stages in this algorithm, and operations that share the same row are considered to be executing in the same stage. As this is a pipelined operation, all operations are performed each cycle, with the incoming video pixel data moving through the complete algorithm. Preceding the stages in *Fig 1* are blocks representing the required inputs.

**Stage 1** requires six 8-bit inputs. These six inputs are really two groups of three: the 24-bit (RGB) video pixel and a 24-bit desired color value. Stage 1 performs a subtraction on each color channel, the absolute value of which is the distance between each video pixel component color value and the desired value for that color.

**Stage 2** squares the result of each subtraction from Stage 1, thereby eliminating the sign of the value.

**Stage 3** sums the squares from Stage 2. The resulting value from these first three stages is equivalent to the distance-squared term on the left of *Equation 1* (see below).

**Stage 4** compares the distance-squared term from Stage 3 to a threshold value specified by the user. If the distance-squared term is less than the threshold, the output of the object identification algorithm is logic '1', indicating that the current pixel is part of the target object. Otherwise the output of the algorithm is logic '0', indicating that the current pixel is part of the background.
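The four pipeline stages can be sketched in software as a single per-pixel function. This is an illustrative model only – the function and parameter names are mine, not from the original FPGA or ECA designs:

```python
def identify_pixel(r, g, b, target_r, target_g, target_b, threshold):
    """Model of the four-stage object identification pipeline (names hypothetical).

    Stage 1: per-channel difference between the pixel and the target color.
    Stage 2: square each difference, eliminating the sign.
    Stage 3: sum the squares, giving the squared color distance.
    Stage 4: compare against the user-specified threshold.
    Returns 1 if the pixel belongs to the target object, else 0.
    """
    dr = r - target_r                        # Stage 1
    dg = g - target_g
    db = b - target_b
    dist_sq = dr * dr + dg * dg + db * db    # Stages 2-3
    return 1 if dist_sq < threshold else 0   # Stage 4
```

In the hardware version all four stages operate concurrently on successive pixels; this sequential model only captures the arithmetic, not the pipelining.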

*Equation 1. Distance equation realized by algorithm stages 1-3.*
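Since the equation image does not reproduce in this text, the following is a reconstruction from the Stage 1–3 descriptions (the symbols are mine: $R, G, B$ are the pixel's color channels and $R_t, G_t, B_t$ the desired target color):

$$d^2 = (R - R_t)^2 + (G - G_t)^2 + (B - B_t)^2$$

Stage 4 then tests whether $d^2$ is less than the user-specified threshold.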


Ordinarily the output of the object identification algorithm is fed into the next stage of the object tracking algorithm; however, since only this first piece of the algorithm is implemented on the Element CXI platform, the results of a hardware simulation were captured to image files. *Fig 2* shows the result of applying this algorithm to a single video frame. The image on the top is the original frame, and the image on the bottom is the result. Although there is a small amount of noise in the resulting image, the algorithm clearly detected the green insect.
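Applying the per-pixel test to an entire frame, as in the captured simulation results, can be modeled concisely with NumPy. This is a sketch under my own naming assumptions, not the article's actual simulation flow:

```python
import numpy as np

def identify_frame(frame, target, threshold):
    """Apply the object-identification test to a whole RGB frame.

    frame:     H x W x 3 array of 8-bit color values.
    target:    length-3 RGB target color.
    threshold: user-specified squared-distance threshold.
    Returns an H x W boolean mask, True where the pixel is close
    enough in color to be considered part of the target object.
    """
    # Widen to signed integers so per-channel differences don't wrap.
    diff = frame.astype(np.int32) - np.asarray(target, dtype=np.int32)
    dist_sq = np.sum(diff * diff, axis=-1)   # squared color distance per pixel
    return dist_sq < threshold
```

The residual noise visible in the result image corresponds to background pixels whose colors happen to fall just inside the threshold sphere around the target color.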

*Fig 2. Results from the gate-level implementation of the object identification algorithm.*