Max could use an existing protocol like I2C to communicate between two devices, so why would he want to create his own?
SPI, which dates back as far as 1979, is a single-master protocol, meaning that one central (master) device is in charge of initiating all communication with one or more slaves. A clock signal called SCLK is sent from the master to all of the slaves. A common data signal called MOSI (master-out, slave-in) carries data from the master to all of the slaves, while a second common data signal called MISO (master-in, slave-out) carries data from the slaves back to the master. Finally, there is a separate SS (slave-select) signal for each slave, thereby allowing the master to specify with which slave it wishes to communicate.
I2C, which was developed in 1982, is a multi-master protocol that requires only two signal lines -- SCL (serial clock) and SDA (serial data) -- both of which are bidirectional. In this case, each device connected to the I2C bus has its own unique address. Whichever device initiates a data transfer on the bus is considered to be the master at that time, while all of the others become slaves. The master signals that a communication is about to begin, then it transmits the address of the slave with which it wishes to communicate. Following an acknowledgement from the slave, the master starts to transmit or receive data. The underlying mechanism is quite "interesting," but -- as users -- we don’t have to worry about it, because everything is handled in hardware and/or software.
I've been using I2C quite a lot recently in my Inamorata Prognostication Engine project. I use it to communicate among my Arduino and my RTC (real time clock), two motor controller shields, and my RGB LCD shield, all of which came from those little scamps at Adafruit.com.
Having said this, all of my usage thus far has involved an intelligent master (my Arduino) communicating with relatively dumb slaves. Things are a little different in the case of my BADASS display, in which both of the devices are intelligent. On the one hand, we have the Arduino working on its cunning display effects and driving the NeoPixel strips; on the other hand, we have the other device sampling the audio stream and extracting the spectrum data. Both of these activities can be implemented asynchronously to each other.
One thing I could do is to make the other device the master. Every time it completes a cycle of taking a sample and performing its DSP magic, it could transmit this data to the Arduino. The problem is that this communication might interrupt the Arduino while it's in the middle of writing to the NeoPixel strips. The timing of these strips is a tad temperamental, so any interruption could result in undesirable artifacts on the display.
Alternatively, I could make the Arduino the master. Every time it completes an update of the display, it could send a request for new data to the other device, but then I run the risk of interrupting that little rascal in the middle of its cogitations and calculations.
Yet another option is to create a custom interface as illustrated below. The downside to this is that it consumes 13 pins, but -- as I mentioned earlier -- in the case of this project I have "pins to burn." The upsides to creating my own protocol are that it's incredibly simple, it's computationally lightweight, and it does exactly what I want it to do.
Let's walk through this step-by-step as illustrated in the waveform diagram below. When my Arduino finishes updating the display from the current cycle, it will place its "yo" output signal in its active (low) state (1), at which point it will sit there waiting for something to happen.
Meanwhile, let's assume that the other device has taken a sample from the audio stream and is currently performing its DSP magic. The result will be to store the spectrum data into 16 "buckets" (numbered from 0000 to 1111 in binary), each of which will contain a value representing the current peak amplitude for that "bucket." These amplitude values map onto the columns in the display, each of which contains 16 pixels. Thus, the amplitude values will range from 00000 in binary, meaning no frequency component, to 10000 in binary, representing the maximum amplitude or top-most LED.
As soon as the other device has completed its current calculations, it takes a look at the "yo" signal coming from the Arduino. If this signal is in its inactive (high) state, then the other device will simply take a new sample from the audio stream and start a new round of DSP calculations. Alternatively, if the "yo" signal is in its active state, the other device will respond by placing its "what" signal in its active (low) state (2).
Next, the Arduino sets up the address (0000 to 1111) of the bucket in which it is interested on its 4-bit "that[3:0]" bus (3), after which it places its "gimmie" signal in its active (low) state (4). When the other device sees the "gimmie" signal go active, it responds by taking the data from the specified "bucket" and presenting it on its 5-bit "this[4:0]" bus (5), after which it places its "take" signal in its active (low) state (6).
When the Arduino sees the "take" signal go active, it knows it can read the data value from the "this[4:0]" bus. Once it's read this data, it places its "gimmie" signal in its inactive state (7). When the other device sees the "gimmie" signal go inactive, it returns its "take" signal to its inactive state (8), after which we don't care what's on the "this[4:0]" bus (9).
The Arduino then sets the address of the next "bucket" of interest on its "that[3:0]" bus (10), returns its "gimmie" signal to its active state (11), and off we go again. Once the Arduino has gained access to the spectrum data associated with all 16 "buckets," it places its "yo" signal in its inactive state (12), after which it starts to update the main display. Meanwhile, as soon as the other device sees the "yo" signal go inactive, it responds by setting its "what" signal to its inactive state (13), after which it goes off to take a new sample from the audio stream and start a new round of DSP calculations.
After chatting with a number of other engineers, my impression is that creating one's own custom interface for this sort of thing is a lot more common than one might think. The advantage of something as simple as my interface is that it's easy to understand, it has a small memory footprint, it can run at a really high speed, and I absolutely know what's happening.
So, that's what I'm thinking at the moment. What do you think? Is a custom interface the way to go -- (have you created one yourself?) -- or would you always try to stick with a standard protocol?
— Max Maxfield, Editor of All Things Fun & Interesting