Garcia-Lasheras
User Rank
Blogger
Alternative implementation
Garcia-Lasheras   10/24/2013 8:07:50 PM
As you note, in order to use a constant incremental step or delta, the ideal choice would be to use "float". The problem is that this is not very handy on an 8-bit machine.

But there are also problems with using integers for this purpose, as we cannot deal with fractional values. One of them is that the LED intensity increase/decrease is not going to be linear across the cycles -- it may reach the final value well before the for loop ends.

I propose a slightly more computationally intensive solution, but I believe the LED regulation will be nicer:


// newColor and oldColor are assumed to be globals set elsewhere
void lightLEDs (int howMany,
                int howLong)
{
    int tmpColor = oldColor;

    for (int i = 1; i <= howMany; i++)
    {
        if (newColor != oldColor)
        {
            // Scale linearly so tmpColor lands on exactly 255 in the final pass
            // (note: 255 * i can overflow a 16-bit int once i exceeds 128)
            tmpColor = 255 * i;
            tmpColor = tmpColor / howMany;

            if (newColor > oldColor)
                tmpColor = 0 + tmpColor;    // fading up: 0 -> 255
            else
                tmpColor = 255 - tmpColor;  // fading down: 255 -> 0
        }

        // Use current tmpColor to drive LED
        delay(howLong);
    }
}

 

salbayeng
User Rank
Rookie
Re: Alternative implementation
salbayeng   10/25/2013 1:10:56 AM
I'm thinking along similar lines to Garcia.

I use the delta addition approach in Servo systems as well as LED drivers.

I call it "phase accumulation"; the idea has been around for years, embodied in DDS chips.

In your case I would set up an integer "phase accumulator" and then overlay the "LED brightness" byte on the high 8 bits. You then add the "ramping speed" to the "phase accumulator" on every tick.

All of the accumulating is done in an interrupt routine, so you can handle multiple LEDs and other system timers at once, and can have long fade times. You can also interrupt a fade and fade smoothly off to something else.
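
A minimal sketch of this idea in C, assuming a periodic timer interrupt and made-up names (phaseAcc, rampSpeed, fadeTick, LED_PIN are for illustration only):

#include <stdint.h>

static uint16_t phaseAcc  = 0;    // 16-bit phase accumulator
static uint16_t rampSpeed = 64;   // added on every tick; a larger value gives a faster fade

// Called from a periodic timer interrupt
void fadeTick(void)
{
    phaseAcc += rampSpeed;                // accumulate phase
    uint8_t brightness = phaseAcc >> 8;   // the LED brightness byte sits in the high 8 bits
    // analogWrite(LED_PIN, brightness);  // e.g. drive the LED with PWM
    (void)brightness;                     // placeholder so the sketch stands alone
}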

--------------------------

Re the "missing bit": yeah, you get this issue with fixed-point multiply (and divide). From memory, if you do it in assembler you can keep track of the missing bit, something like this (plucked from code I wrote years ago, with the detail forgotten):

    mulsu r23, r20
    SBC   r19, _zero
    ADD   r17, r0
    ADC   r18, r1
    ADC   r19, _zero

I can't remember exactly how this works; this piece is transferring the partial results from a multiply into the result. Note the SBC r19, _zero: this seems to be pointlessly subtracting zero from a register, but it's actually propagating the missing bit via the carry flag. Note that the Atmel chips have six flavors of multiply:

MUL    Rd, Rr   Multiply Unsigned                          R1:R0 ← Rd x Rr
MULS   Rd, Rr   Multiply Signed                            R1:R0 ← Rd x Rr
MULSU  Rd, Rr   Multiply Signed with Unsigned              R1:R0 ← Rd x Rr
FMUL   Rd, Rr   Fractional Multiply Unsigned               R1:R0 ← (Rd x Rr) << 1
FMULS  Rd, Rr   Fractional Multiply Signed                 R1:R0 ← (Rd x Rr) << 1
FMULSU Rd, Rr   Fractional Multiply Signed with Unsigned   R1:R0 ← (Rd x Rr) << 1

So if you go to the trouble of writing in assembler, and pick and choose the type of MUL you use, you can make 255/256 = 1. It's also useful to see whether "negative zero" can appear in your algorithm and whether it is handled correctly: typically -255/-256 = 1 while 255/256 = 0, but what about 0/-256?

A possible fix to your problem is to use a condition test like <= in one direction and > in the other.

Max The Magnificent
User Rank
Blogger
Re: Alternative implementation
Max The Magnificent   10/25/2013 9:58:34 AM
@Garcia: Interesting ... I will try both techniques (mine and yours) and report back on the results.

Max The Magnificent
User Rank
Blogger
Re: Alternative implementation
Max The Magnificent   10/25/2013 10:02:01 AM
@salbayeng: It's also useful to see whether  "negative zero" can appear in your algorithm

Now you are making my brain ache -- surely one of the main points about the two's complement form of representation is that you cannot get negative zero...


Max The Magnificent
User Rank
Blogger
Re: Alternative implementation
Max The Magnificent   10/25/2013 10:10:12 AM
Re the part of your code that says:

 

            if (newColor > oldColor)
                tmpColor = 0 + tmpColor;
            else
                tmpColor = 255 - tmpColor;

 

Isn't the first part of this redundant? Couldn't we just say:

 

            if (newColor < oldColor)
                tmpColor = 255 - tmpColor;

 

Or are we doing it your way to keep the time pretty much consistent for fade-ups and fade-downs by performing roughly the same number/type of calculations each way?

 

Garcia-Lasheras
User Rank
Blogger
Re: Alternative implementation
Garcia-Lasheras   10/25/2013 10:19:34 AM
@Max: Yes, you are completely right -- I was using a modColor variable too, and then I realized that I didn't need it, so I quickly changed the code.

In addition, regarding the timing coherence, I was assuming that delay(howLong) dominates the fading, so I was not thinking about exactly matching the number of CPU cycles -- I just love symmetry ;-)

The code could then be:


void lightLEDs (int howMany,
                int howLong)
{
    int tmpColor = oldColor;

    for (int i = 1; i <= howMany; i++)
    {
        if (newColor != oldColor)
        {
            tmpColor = 255 * i;
            tmpColor = tmpColor / howMany;
            
            if (newColor < oldColor)
                tmpColor = 255 - tmpColor;
        }

        // Use current tmpColor to drive LED
        delay(howLong);
    }
}

Wnderer
User Rank
CEO
Maybe just add 1
Wnderer   10/25/2013 10:54:12 AM
I might be missing something but why not just add or subtract 1?

 

if (modColor > 0) modColor = (modColor + 1) / howMany;

else if (modColor < 0) modColor = (modColor - 1) / howMany;

 

If modColor is either 0, 255, or -255 and howMany is always even, then you end up with a couple of compares, an increment, a decrement, and some bit shifts. If I recall correctly, that switch statement will create a jump table, so you're better off with if statements if you're looking for compact assembly code.

 

 

Max The Magnificent
User Rank
Blogger
Re: Maybe just add 1
Max The Magnificent   10/25/2013 2:52:19 PM
@Wnderer: I might be missing something but why not just add or subtract 1?

I've not tried this yet, but I think that if I try to fade the LEDs up over 256 increments / levels then this will consume a lot of processing cycles and may be overkill (changes not discernible to the human eye). Even if I transition the color using, say, 16 steps with 100 milliseconds (0.1 seconds) between each step, this still equates to 1.6 seconds.

My idea of being able to specify the "howMany" increments value and the delay between increments will let me experiment a bit. Also, setting the "howMany" parameter to 1 should make the LED immediately turn hard on or hard off. The idea is that I can vary the pace of the display either randomly or in a controlled manner...

I'm having a lot of fun with this.

Max The Magnificent
User Rank
Blogger
Re: Alternative implementation
Max The Magnificent   10/25/2013 2:55:31 PM
@Garcia: The more I look at your solution the more I realize just how elegant it is. The part I especially like is the fact that the color value automatically ends up at 0 or 255 without any checking or tweaking (capping). I cannot wait to go home and try your solution this evening...


Of course, I'm handling all 64 LEDs, each with three color channels, but the way I'm doing it is by means of a plan so cunning we could pin a tail on it and call it a weasel :-)

Alvie
User Rank
Blogger
Fixed-Point arithmetic
Alvie   10/25/2013 3:21:18 PM
@Max,


Sometimes using Fixed-Point arithmetic can help with these issues.


The basics behind fixed-point are quite simple, and easy to implement on most architectures. Let's have a look at your concrete problem (255/256) and see how we could get a somewhat accurate output from it.


The easiest way to implement this would be to use a 10.6 fixed-point format, where 10 bits are used for the integer part (the signed number range would be between +511 and -512), and 6 bits for the fractional part -- since 6 bits can represent 64 different values, each bit represents 1/64, or 0.015625. This is your resolution in terms of the fractional part.

The beauty about fixed-point is that most of the arithmetic is done with the integer representation. This includes addition/subtraction and multiplication.

In order to add two 10.6 fixed-point values, each represented by a 16-bit integer, just... add them. The result will also be a 10.6 fixed-point value. Same for subtraction. For multiplication, the result will not be 10.6, but rather 20.12 -- 32 bits. In order to convert this multiplication back to 10.6, just shift it right by 6 bits and truncate the result (note: if you're dealing with signed numbers, you must do this in a slightly different way, in order to preserve the sign bit).
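
A minimal sketch of those two operations in C, assuming 16-bit values in 10.6 format (FRAC_BITS, fix_add, and fix_mul are made-up names for illustration):

#include <stdint.h>

#define FRAC_BITS 6   // 10.6 format: 10 integer bits, 6 fractional bits

int16_t fix_add(int16_t a, int16_t b)
{
    return a + b;                          // addition works directly on the raw values
}

int16_t fix_mul(int16_t a, int16_t b)
{
    int32_t wide = (int32_t)a * b;         // intermediate result is 20.12, i.e. 32 bits
    return (int16_t)(wide >> FRAC_BITS);   // shift right by 6 to get back to 10.6
}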

So, let's take a look at your case: 255/256.

First, let's represent them as fixed-point 10.6 numbers, in binary:

   255 => 0011111111.000000

   256 => 0100000000.000000

and:

   1    => 0000000001.000000

In the following explanations, I'll prefix the numbers with "f" if they are representations of fixed-point values, and no prefix for their integer representations.


Now, let's invert f256. This is done by computing (f1/f256), also in fixed-point format. In order to perform this division, we use the integer representations to perform an integer division. But, similarly to the multiplication (which doubles the number of fractional bits), the division will also change the fractional bits -- not only that, it will indeed remove them.

Let's take a look at a simple (f1/f1) division. This ought to result in f1, of course. If we take a close look at the f1 representation above and compute its integer value, we get 2^6 == 64. If we divide the values using their integer representations, we get 64/64, which gives us 1. But this integer "1" does not represent "f1"; it represents "0000000000.000001". So, we just lost a lot of precision (6 bits, actually) by performing the division. To fix this, we shift the result by 6 bits to the left and get the correct value: "0000000001.000000".

Now, another division. Let's divide f3 by f2; the result should be f1.5. The representation of f3 is "0000000011.000000", and f2 is "0000000010.000000". The integer representations are 192 and 128, respectively. If we perform the integer division, we get 1. If we shift this by 6 bits as in the previous example, we get "0000000001.000000", which is f1 -- not f1.5. So, we just lost all the fractional precision!

There are a few solutions to overcome this. One of them is to gain some more resolution by extending the numbers to the next representation, which in this case would be 32 bits. Basically, what we will be doing is to transform the equation:

   x  = a / b;

into

   x =  ( (a * N) / b ) / N

which is equivalent. The "N" value would be 2^6 in this case -- which matches the 6-bit shift we have to perform to correct the division result. If we multiply the first argument by the same "shift amount" we have to apply at the end, we don't need to perform the final shift. We can also use a shift to compute the first operand.
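
As a hedged sketch in C (fix_div is a made-up name; it assumes the same 10.6 format and FRAC_BITS definition as the earlier sketch):

int16_t fix_div(int16_t a, int16_t b)
{
    int32_t wide = (int32_t)a << FRAC_BITS;   // pre-multiply a by 2^6, widened to 32 bits
    return (int16_t)(wide / b);               // the integer division now lands back in 10.6
}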

So, for the case of f3/f2, we will get:

a*N (f3 times N): 0000000011.000000[000000] (integer representation: 12288)

b: [000000]0000000010.000000 (integer representation: 128)

And the result is 96 (12288 / 128). In binary (and fixed-point) form it's: [000000]0000000001.100000.

The integer part is 1, and the fractional part is '100000' in binary, 32 in decimal. If we multiply 32 by the fractional resolution we computed earlier (0.015625), we get... 0.5, which is exactly the "0.5" we need to add to the integer part.

Doing the same for your 255/256 values, we get a nice-looking value (the result of (255<<6)/256):

 0000000000.111111

We are very close to f1, and we could simply truncate the result. But that would not be wise: the result is much closer to f1 than to f0.

So, we want to do rounding before converting that to an integer (which means shifting by 6 to the right). In order to do so, we add a '1' in the MSB of the to-be-discarded part. We are discarding 6 bits, so we have to add '100000' to the result:

  0000000000.111111
+ 0000000000.100000
= 0000000001.011111

Shifting by 6 to the right then gives "1".
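
A minimal sketch of that round-then-shift step in C, valid for non-negative values (fix_round_to_int is a made-up name; same FRAC_BITS as before):

int16_t fix_round_to_int(int16_t a)
{
    // add 0.5 in 10.6 format ('100000'), then truncate by shifting right 6 bits
    return (int16_t)((a + (1 << (FRAC_BITS - 1))) >> FRAC_BITS);
}

With a = 63 (the 0000000000.111111 value above), this returns 1 instead of the truncated 0.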


So, here's your answer, and a quick, dirty, and confusing explanation of fixed-point arithmetic :)

 

 
