As you note, to use a constant incremental step (delta), the ideal choice would be "float". The problem is that floating point is not very handy on an 8-bit machine.

But using integers for this purpose brings its own problems, since we cannot deal with fractional values. One of them is that the LED intensity increase/decrease will not be linear across the cycles -- it may reach the final value well before the for loop ends.

I propose a somewhat more computationally intensive solution, but I believe the LED regulation will be nicer:

void lightLEDs (int howMany, int howLong) {
  int tmpColor = oldColor;

  for (int i = 1; i <= howMany; i++) {
    if (newColor != oldColor) {
      tmpColor = 255 * i;
      tmpColor = tmpColor / howMany;

I use the delta addition approach in Servo systems as well as LED drivers.

I call it "phase accumulation"; the idea has been around for years, embodied in DDS (direct digital synthesis) chips.

In your case I would set up an integer "phase accumulator" and overlay the "LED brightness" byte on its high 8 bits. You then add the "ramping speed" to the phase accumulator on every update.

All of the accumulating is done in an interrupt routine, so you can handle multiple LEDs and other system timers at once, and you can have long fade times. You can also interrupt a fade and fade smoothly off to something else.
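A minimal sketch of the phase-accumulator idea in plain C (the names fade_tick and brightness are mine; in a real system fade_tick would be called from a timer ISR, and the code would stop at the target value rather than letting the accumulator wrap):

```c
#include <stdint.h>

#define NUM_LEDS 3

/* One 16-bit phase accumulator per LED; the LED brightness is
   simply the high byte of the accumulator. */
static uint16_t phase[NUM_LEDS];

/* Called once per tick (from a timer interrupt in a real system).
   'speed' is the amount added per tick: larger = faster fade. */
void fade_tick(const uint16_t speed[]) {
    for (int i = 0; i < NUM_LEDS; i++) {
        phase[i] += speed[i];   /* wraps at 65536; real code would clamp */
    }
}

/* Brightness seen by the PWM driver: the high 8 bits. */
uint8_t brightness(int i) {
    return (uint8_t)(phase[i] >> 8);
}
```

With a speed of 1, the brightness byte only changes once every 256 ticks, which is how the scheme gets very slow fades out of pure integer addition.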

--------------------------

Re the "missing bit": yeah, you get this issue with fixed-point multiply (and divide). From memory, if you do it in assembler you can keep track of the missing bit, something like this (plucked from code I wrote years ago, with the details forgotten):

I can't remember exactly how this works; this piece is transferring the partial results from a multiply into the result. Note the SBC r19, _zero: it seems to be pointlessly subtracting zero from a register, but since SBC also subtracts the carry flag, it's actually propagating the missing bit. Note that the Atmel AVR chips have six flavors of multiply:

MUL    Rd, Rr   Multiply Unsigned                         R1:R0 ← Rd × Rr
MULS   Rd, Rr   Multiply Signed                           R1:R0 ← Rd × Rr
MULSU  Rd, Rr   Multiply Signed with Unsigned             R1:R0 ← Rd × Rr
FMUL   Rd, Rr   Fractional Multiply Unsigned              R1:R0 ← (Rd × Rr) << 1
FMULS  Rd, Rr   Fractional Multiply Signed                R1:R0 ← (Rd × Rr) << 1
FMULSU Rd, Rr   Fractional Multiply Signed with Unsigned  R1:R0 ← (Rd × Rr) << 1

So if you go to the trouble of writing in assembler, and pick and choose the type of MUL you use, you can make 255/256 = 1. It's also useful to check whether "negative zero" can appear in your algorithm and whether it is handled correctly: typically -255/-256 = 1 while 255/256 = 0, but what about 0/-256?

A possible fix to your problem is to use a condition test like <= in one direction and > in the other.

@salbayeng: It's also useful to see whether "negative zero" can appear in your algorithm

Now you are making my brain ache -- surely one of the main points of the two's complement representation is that you cannot get negative zero...

Isn't the first part of this redundant? Couldn't we just say:

if (newColor < oldColor)
  tmpColor = 255 - tmpColor;

Or are we doing it your way to keep the time pretty much consistent for fade-ups and fade-downs, by performing roughly the same number/type of calculations each way?

@Max: Yes, you are completely right -- I was using a modColor variable too, and then I realized that I didn't need it, so I quickly changed the code.

In addition, regarding the timing coherence, I was assuming that delay(howLong) dominates the fading, so I wasn't trying to exactly match the number of CPU cycles -- I just love symmetry ;-)

The code could then be:

void lightLEDs (int howMany, int howLong) {
  int tmpColor = oldColor;

  for (int i = 1; i <= howMany; i++) {
    if (newColor != oldColor) {
      tmpColor = 255 * i;
      tmpColor = tmpColor / howMany;

      if (newColor < oldColor)
        tmpColor = 255 - tmpColor;
    }

    // Use current tmpColor to drive the LED
    delay(howLong);
  }
}

@Garcia: The more I look at your solution, the more I realize just how elegant it is. The part I especially like is the fact that the color value automatically ends up at 0 or 255 without any checking or tweaking (capping). I cannot wait to go home and try your solution this evening...

Of course, I'm handling all 64 LEDs each with three color channels, but the way I'm doing it is by means of a plan so cunning we could pin a tail on it and call it a weasel :-)
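The "no capping needed" property is easy to check: at i == howMany, 255*i/howMany collapses to exactly 255 whatever the step count. A small sketch of the per-step computation (the helper name fade_value is mine; it assumes howMany <= 128 so that 255*i fits comfortably in a 16-bit int):

```c
/* Compute the color value used by lightLEDs() at step i of howMany.
   Fading up: 255*i/howMany; fading down: the same curve mirrored. */
int fade_value(int i, int howMany, int fadingDown) {
    int tmpColor = 255 * i;       /* fits a 16-bit int for howMany <= 128 */
    tmpColor = tmpColor / howMany;
    if (fadingDown)
        tmpColor = 255 - tmpColor;
    return tmpColor;
}
```

The intermediate steps are truncated (e.g. step 8 of 16 gives 127, not 127.5), but the endpoints are exact in both directions, which is what matters for the fade.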

else if (modColor < 0)
  modColor = (modColor - 1) / howMany;

If modColor is either 0, 255, or -255, and howMany is always even, then you end up with a couple of compares, an increment, a decrement, and some bit shifts. If I recall correctly, that switch statement will create a jump table, so you're better off with if statements if you're looking for compact assembly code.

@Wnderer: I might be missing something but why not just add or subtract 1?

I've not tried this yet, but I think that if I try to fade the LEDs up over 256 increments / levels, this will consume a lot of processing cycles and may be overkill (the changes won't be discernible to the human eye). Even if I transition the color using, say, 16 steps with 100 milliseconds (0.1 seconds) between each step, this still equates to 1.6 seconds.

My idea of being able to specify the "howMany" increments value and the delay between increments will let me experiment a bit. Also, setting the "howMany" parameter to 1 should make the LED immediately turn hard on or hard off. The idea is that I can vary the pace of the display either randomly or in a controlled manner...

Sometimes using Fixed-Point arithmetic can help with these issues.

The basics behind fixed point are quite simple, and easy to implement on most architectures. Let's have a look at your concrete problem (255/256) and see how we could get a reasonably accurate output from it.

The easiest way to implement this would be to use a 10.6 fixed-point format, where 10 bits are used for the integer part (a signed range between +511 and -512) and 6 bits for the fractional part. Since 6 bits can represent 64 different values, each step represents 1/64 = 0.015625. This is your resolution in the fractional part.

The beauty of fixed point is that most of the arithmetic is done with the integer representation. This includes addition/subtraction and multiplication.

To add two 10.6 fixed-point values, each represented by a 16-bit integer, just... add them. The result is also a 10.6 fixed-point value. The same goes for subtraction. For multiplication, the result will not be 10.6 but 20.12 -- 32 bits. To convert the product back to 10.6, shift it right by 6 bits and truncate the result (note: if you're dealing with signed numbers, you must do this in a slightly different way in order to preserve the sign bit).
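A hedged sketch of those operations in C, with int16_t holding the 10.6 values and a 32-bit intermediate for the product (the names fix_add/fix_sub/fix_mul are mine; the right shift of a signed intermediate assumes the compiler implements it as an arithmetic shift, which is the common case):

```c
#include <stdint.h>

/* 10.6 fixed point: value = raw / 64.0 */
typedef int16_t fix10_6;

#define FRAC_BITS 6
#define TO_FIX(x) ((fix10_6)((x) << FRAC_BITS))   /* integer -> 10.6 */

fix10_6 fix_add(fix10_6 a, fix10_6 b) { return a + b; }
fix10_6 fix_sub(fix10_6 a, fix10_6 b) { return a - b; }

/* Multiply: the 32-bit intermediate is a 20.12 value; shift right
   by 6 to come back to 10.6, truncating the extra fraction bits. */
fix10_6 fix_mul(fix10_6 a, fix10_6 b) {
    int32_t wide = (int32_t)a * (int32_t)b;   /* 20.12 */
    return (fix10_6)(wide >> FRAC_BITS);      /* back to 10.6 */
}
```

For example, multiplying f1.5 (raw 96) by f2 (raw 128) gives raw 12288, which shifted right by 6 is 192 -- exactly f3.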

So, let's take a look at your case: 255/256.

First, let's represent them as fixed-point, 10.6 numbers, in binary :

255 => 0011111111.000000

256 => 0100000000.000000

and:

1 => 0000000001.000000

In the following explanations, I'll prefix numbers with "f" when they denote fixed-point values, and use no prefix for their integer representations.

Now, let's invert f256. This is done by computing (f1/f256), also in fixed-point format. To perform this division, we use the integer representations and do an integer division. But, similarly to multiplication (which doubles the number of fractional bits), division also changes the fractional bits -- in fact, it removes them entirely.

Let's take a look at a simple (f1/f1) division. This ought to result in f1, of course. If we look at the f1 representation above and compute its integer value, we get 2^6 == 64. If we divide the values using their integer representations, we get 64/64, which gives us 1. But this integer "1" does not represent "f1"; it represents "0000000000.000001". So we just lost a lot of precision (6 bits, actually) by performing the division. To fix this, we shift the result 6 bits to the left and get the correct value: "0000000001.000000".

Now, another division. Let's divide f3 by f2; the result should be f1.5. The representation of f3 is "0000000011.000000", and f2 is "0000000010.000000". The integer representations are 192 and 128, respectively. If we perform the integer division, we get 1. If we shift this left by 6 bits as in the previous example, we get "0000000001.000000", which is f1 -- not f1.5. So we just lost all the fractional precision!

There are a few ways to overcome this. One of them is to gain extra resolution by extending the numbers to the next wider representation, which in this case is 32-bit. Basically, we transform the equation:

x = a / b;

into

x = ( (a * N) / b ) / N

which is equivalent. Here "N" is 2^6 = 64, matching the 6-bit left shift we would otherwise have to perform to correct the division result: by multiplying the dividend by that amount beforehand, we no longer need the final shift, and no precision is lost in the divide. The multiplication by N can itself be done with a left shift.

So, for the case of f3/f2, we will get:

a*N (f3 times N): 0000000011.000000[000000] (integer representation: 12288)

And the result is 96 (12288 / 128). In binary form (and fixed point form) it's : [000000]0000000001.100000.

The integer part is 1, and the fractional part is '100000' in binary, 32 in decimal. If we multiply 32 by the fractional resolution we computed earlier (0.015625), we get exactly the "0.5" we need to add to the integer part.
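The pre-shifted division can be written compactly in C (fix_div is my name for it; the dividend is widened to 32 bits before the shift so the shifted value cannot overflow):

```c
#include <stdint.h>

#define FRAC_BITS 6

/* 10.6 fixed-point division: pre-shift the dividend into a 32-bit
   temporary so the 6 fraction bits survive the integer divide. */
int16_t fix_div(int16_t a, int16_t b) {
    return (int16_t)(((int32_t)a << FRAC_BITS) / b);
}
```

Dividing f3 (raw 192) by f2 (raw 128) this way yields raw 96, i.e. f1.5, instead of losing the fraction as in the naive division.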

Doing the same for your 255/256 values, we get a nice-looking value (the result of (255<<6)/256):

0000000000.111111

We are very close to f1, and we could simply truncate the result. But that would not be wise: the result is far closer to f1 than to f0.

So, we want to round before converting to an integer (which means shifting right by 6). To do so, we add a '1' in the MSB of the to-be-discarded part. We are discarding 6 bits, so we add '100000' to the result:

0000000000.111111

+0000000000.100000

= 0000000001.011111.

Shifting the sum right by 6 bits... gives "1".
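That round-then-truncate step looks like this in C (fix_round is a name I made up; adding one half in the 10.6 representation before the right shift is exactly the '100000' addition above):

```c
#include <stdint.h>

#define FRAC_BITS 6
#define HALF (1 << (FRAC_BITS - 1))   /* '100000' in binary, i.e. f0.5 */

/* Round a 10.6 fixed-point value to the nearest integer by adding
   one half before discarding the fraction bits. */
int16_t fix_round(int16_t f) {
    return (int16_t)((f + HALF) >> FRAC_BITS);
}
```

Applied to the 255/256 result above (raw 63, i.e. 0.984375), this rounds up to 1 instead of truncating to 0.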

So, here's your answer, and a quick, dirty, and confusing explanation of fixed-point arithmetic :)

@Alvie: Sometimes using Fixed-Point arithmetic can help with these issues...

It all sounds so simple the way you say it LOL

Actually, you are correct in that I had completely neglected to consider using something like a 10.6 fixed-point representation. For some tasks this would be very appropriate -- but for my current application I think I'll stick with the solution from Javi (Garcia).

If you think of doing this in the time domain, does it become easier? Every so many milliseconds you generate an interrupt to process the colors. Check where you are and where you want to be, and take a fixed-size step in the direction you need. Would that work OK?
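A sketch of that fixed-step approach (step_toward is a hypothetical helper; in a real system it would be called from the periodic timer interrupt, once per color channel):

```c
/* Move the current value one fixed step toward the target;
   called once per tick. Clamps so it never overshoots. */
int step_toward(int current, int target, int step) {
    if (current < target) {
        current += step;
        if (current > target) current = target;
    } else if (current > target) {
        current -= step;
        if (current < target) current = target;
    }
    return current;
}
```

The truncation issue disappears entirely here, at the cost of the fade duration depending on the distance to the target rather than being fixed.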

@DrFPGA: If I were using the controller to do something else at the same time, then I would look at using a timer and interrupts as you suggest ... but in this case all I'm doing is controlling the LEDs, so I'm just using the basic Arduino delay() function.

You're in effect drawing a straight line in a discrete space, the coordinates being time and color value. The standard way to do this efficiently is Bresenham's algorithm. The idea is to incrementally track the error between the current integer color value (y coordinate) and the "ideal" color value, and when this error grows too large, to step the current color value accordingly. Here is the code for the inner loop (excerpted from the Wikipedia page):

for x from x0 to x1
plot(x,y)
error := error + deltaerr
if error ≥ 0.5 then
y := y + 1
error := error - 1.0

Note that there is no integer division in the loop, hence no room for rounding errors. And it's efficient: never more than a comparison and two additions at each iteration.
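An all-integer version of that loop, with the 0.5 comparison removed by accumulating the raw slope instead (fade_bresenham is my own name; it assumes n > 0 and writes the n intermediate color values into out):

```c
/* Fade from c0 to c1 in n steps using Bresenham-style error
   accumulation: only additions and comparisons in the loop. */
void fade_bresenham(int c0, int c1, int n, int out[]) {
    int dy = (c1 > c0) ? (c1 - c0) : (c0 - c1);   /* total color change */
    int sy = (c1 > c0) ? 1 : -1;                  /* direction of change */
    int y = c0;
    int err = 0;
    for (int x = 0; x < n; x++) {
        err += dy;              /* accumulate the ideal slope */
        while (err >= n) {      /* error reached one step: move the color */
            y += sy;
            err -= n;
        }
        out[x] = y;
    }
}
```

Because the accumulated error sums to exactly dy*n over the run, the last output lands exactly on c1 -- no capping and no division in the loop.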

@pviry750: The standard way to do this efficiently is the Bresenham algorithm...

Cool beans -- I will bounce over to peruse and ponder the Wikipedia page on this -- I'm sure I will be doing a lot more of this in the future -- for example, with regard to my current robot project.

Though the Bresenham algorithm might be appropriate, the initial question was how to cope with the truncating element of the integer division.

The solution is really simple:

result = (dividend + (divisor/2)) / divisor

or

result = (dividend + (divisor>>1)) / divisor

This might lead us to the next level of discussion: what if the divisor is odd?

You simply have to decide whether you want to round up (x.5 => x+1) or down (x.5 => x). In one of those cases the above equations will read ((divisor+1)/2) and ((divisor+1)>>1), respectively.

There's always a last philosophical question or, in other words, the freedom of choice :)
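The rounded division from above, spelled out in C (round_div is a hypothetical helper; it assumes positive operands, and with an even divisor the x.5 ties round up):

```c
/* Integer division rounded to nearest instead of truncated:
   add half the divisor to the dividend before dividing. */
int round_div(int dividend, int divisor) {
    return (dividend + (divisor >> 1)) / divisor;
}
```

This one-line fix is exactly what turns the troublesome 255/256 into 1.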

Your truncation problem would not be there if you stick with fractions.

Just keep two integers for each number (numerator and denominator), rewrite your algorithm to use the pair instead of a single value for all calculations, and do multiplications first and divisions last. It's pretty much the same as fixed-point arithmetic.
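A sketch of the fraction-pair idea (the frac type and helpers are mine; the denominators stay small here, but in general you would need to watch for overflow before the single final division):

```c
/* Keep a value as numerator/denominator and only divide at the
   very end, after all multiplications, so nothing is truncated early. */
typedef struct { long num; long den; } frac;

frac frac_mul(frac a, frac b) {
    frac r = { a.num * b.num, a.den * b.den };
    return r;
}

/* Final conversion to an integer: the one and only division. */
long frac_to_int(frac f) {
    return f.num / f.den;
}
```

For the fading example, keeping 255/16 as a pair and multiplying by the step number before dividing gives exactly 255 at the last step.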

It's probably not the most efficient option computationally, but perhaps the simplest solution would be to use the Arduino language's map() function. It seems to work going up and going down in color value, and ends on exactly the right value, in the right number of steps.
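For reference, map() boils down to a rescaling formula along these lines (map_range is my own name for a plain-C equivalent; the real implementation lives in the Arduino core and uses long integer math the same way):

```c
/* Re-scale x from [in_min, in_max] to [out_min, out_max] with long
   integer math. The division truncates for intermediate values,
   but the two endpoints always map exactly. */
long map_range(long x, long in_min, long in_max,
               long out_min, long out_max) {
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}
```

Because the multiplication happens before the division, step 16 of 16 maps to exactly 255 going up (and exactly 0 going down), which is why it lands on the right final value.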

@jpessin: It's probably not the most efficient option computationally, but perhaps the simplest solution would be to use the Arduino language's map() function.

This just goes to show that I really need to spend some time on the Arduino website looking to see what's there. I'm currently very happy with the solution proposed by Javi (Garcia) earlier in these comments, but it's great to know that this (and other) functions are available.
