As an aside, while I was writing this column, my inventor friend Brian LaGrave dropped by my office with his two sons (Sam and Daniel) bearing gifts -- two large plastic coffee containers -- one loaded with cow manure and the other loaded with horse manure.
Glad to see the progress being made on this project. Might we hope that the final product, on the DSP and interface side, might become a Kickstarter project? A visually appealing, low-cost audio-spectrum analyzer of this sort would certainly come in handy.
However, on the purely cosmetic side of the thing, please be careful. The reason I'm replying to your first comment is to issue some cautionary advice. I only hope I'm not too late...
In my former occupation as a lab technician, I once worked at an agricultural laboratory where, in addition to analyzing soils and irrigation waters, we also had occasion to work with the same delightful stuff you are working with here. So, whether or not you dilute these substances for ease of application, do NOT place them into any tightly sealed containers -- especially any glass jars or bottles.
On one occasion, our lab had received the waste-water run-off from a local dairy facility. It was sampled into a large sealed glass jug in which it sat waiting for several hours before processing. Luckily, I was nowhere nearby at the time... but a co-worker standing next to the jug suddenly began to hear the sound of cracking glass...
Fortunately... he had been in this line of work long enough to have developed some reflexes... immediately hitting the floor mostly saved him from being drenched by the awful stuff. All the rest of us only had to put up with the smell.
As I said, this was a "former" occupation... as, for one thing, the pay-scale just didn't compensate for some of the risks involved.
Ancient reminiscences aside, as an alternative to working with such hazardous materials in order to achieve the desired metallic finishes on your future projects, you might want to give these folks a call:
@Douglas: So, whether or not you dilute these substances for ease of application, do NOT place them into any tightly sealed containers -- especially any glass jars or bottles.
Sound advice, old friend. In fact, I have the two brass test pieces embedded in cow and horse manure in separate plastic coffee containers with plastic lids, each with the tiniest of tiny holes in the top to relieve the pressure, the whole kit and caboodle then placed in a big black plastic bag (and hidden from my wife in the corner of the garage, LOL).
Recently I've been researching the Arduino, and then recalled your project. I think the Arduino-compatible Teensy 3 might be a good fit for your project:
1. A speedy Cortex-M4 with 64K of RAM -- great for audio processing. And yes, you can do an FFT on an 8-bitter, but you need to sample at around 5-8 kHz for that, while heavily optimizing the code.
2. An audio library.
3. The Arduino has a simple scheduler, so you might not need the complexity of two MCUs, inter-MCU communication, etc.
4. Not relevant to the project, but it might interest your readers: the Teensy 3 comes under a commercially friendly license -- licensing being one of the major things that has prevented the use of the Arduino in commercial projects.
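To put rough numbers on point 1 (the sample rate and FFT size below are illustrative, not taken from Max's project), the usable bandwidth and frequency resolution fall straight out of two divisions:

```cpp
// Illustrative numbers only: an 8-bitter sampling at 8 kHz running a
// 128-point FFT. Nyquist gives the highest representable frequency,
// and the bin width is the frequency resolution of each FFT output.
double nyquist_hz(double sample_rate_hz) {
    return sample_rate_hz / 2.0;  // e.g. 8 kHz sampling -> 4 kHz bandwidth
}

double bin_width_hz(double sample_rate_hz, int fft_points) {
    return sample_rate_hz / fft_points;  // e.g. 8000 / 128 = 62.5 Hz per bin
}
```

So at 8 kHz sampling you only see up to 4 kHz of the audio band, which is part of why the Cortex-M4's headroom matters for a display covering the full audible range.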
@alex_m1: Recently I've been researching the Arduino, and then recalled your project. I think the Arduino-compatible Teensy 3 might be a good fit for your project...
Good call -- in fact I've recently been chatting with Paul Stoffregen who is one of the creators of the Teensy -- he has some amazing libraries that address both the display side and the audio side of things.
I will be blogging about this today or tomorrow -- if you are at the Maker Faire in San Francisco this coming weekend, you should look him up and say "Max says Hi"
The obvious first question is "What's your data rate?" I2C is pretty slow: it was originally designed for passing control information between chips in early digital TVs. OTOH, I2C is very forgiving electrically: I2C components include input filtering, so it doesn't matter if your SDA and SCL are ringing. This is nice if you're going between boards.
SPI can run much faster, but you have to ensure good signal integrity on your clock.
If you're running very slowly, you might consider a UART. Then you can debug each board separately using a terminal emulator on a PC. You need to get a USB-to-UART dongle with the correct UART signal voltage for your board. Here's one from Adafruit: http://www.adafruit.com/products/954.
You can also get dongles that talk I2C or SPI and talk to your boards individually from a PC, or monitor what's happening on the line. However, then you have to write I2C/SPI code on your PC which could be a steeper learning curve than using your embedded boards. Still, here are some cables from FTDI: http://www.ftdichip.com/Products/Cables/USBMPSSE.htm.
I noticed that in this blog and your Golden Ratio blog you mention an alternative to the Bodacious Acoustic Diagnostic Astoundingly Superior Spectromatic (BADASS) display -- the BIGASS display. What do these letters represent?
What is the distance? If it is over a few meters, use something like RS-485/Modbus. I see others have mentioned this.
I2C was designed for use inside TVs. It is probably the most common bus on the planet. (I think there are more TVs than cars.) NXP does make bus drivers for some distance, although there are latency issues involved in using them.
I have not been following some of this as closely as I would like. I did look at the NeoPixels to see if they might work as diagnostics on my pipe organ shift registers. (I got the player working on the second instrument yesterday, and I should be working on that code at this instant...)
In skimming through the above post, I am not quite sure why you are so locked into so many Arduino outputs. I can see that the NeoPixels have timing requirements, which do not lend themselves to preemptive operating systems.
Such jitter effects are why I like bit-banging AVRs too, although I favor assembly over library management. Were I implementing this, I would probably use 74*595 shift registers, which are serial-to-parallel converters. I test my pipe organ drivers using small pea bulbs or LEDs before dealing with the power through the coils.
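For anyone who hasn't bit-banged a '595 before, the core of it is just eight data/clock cycles followed by a latch pulse. Here's a host-testable sketch of that idea -- the pin-toggling callbacks are stand-ins for real GPIO writes, not calls into any particular library:

```cpp
#include <cstdint>
#include <functional>

// Clock one byte MSB-first into a 74HC595 serial-to-parallel shift
// register. On real hardware the three callbacks would toggle GPIO
// pins; here they are injected so the logic can be checked off-target.
void shift_out_msb_first(uint8_t value,
                         const std::function<void(bool)>& set_data,
                         const std::function<void()>& pulse_clock,
                         const std::function<void()>& latch) {
    for (int bit = 7; bit >= 0; --bit) {
        set_data((value >> bit) & 1);  // present the bit on the data pin
        pulse_clock();                 // rising edge shifts it in
    }
    latch();  // copy the shift register to the output pins in one go
}
```

Because the pins are passed in as callbacks, the same routine can drive test LEDs on the bench and the real coil drivers later.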
On the other hand, it looks like the NeoPixels already are shift registers, so an I/O expander might be too much extra guff to add. My cursory overview is that this is basically just really low-resolution video, with similar timing constraints.
I keep thinking I want to look at my copy of "The Art of Digital Audio" to see if there is a better way to do this. Most compression is based on the DCT, which is a close cousin of the FFT. Since the compression is already finding the spectrum of frequencies to mask, would it not make sense to tap into this? Of course, one would need an ARM processor with custom drivers for the NeoPixel shift registers. Still, if one can blink an LED at high frequency, then one can bit-bang the pixels the same way.
You could also utilize SPI, but use it in your own custom way.
First, wire it all up as though it were an SPI, but don't put the Arduino Mega or ChipKIT pins into SPI mode. You can use the pins as GPIO for your handshaking.
The ChipKIT can set one of the lines to indicate data is ready. The Mega can poll that line when it has spare cycles. Then after the handshaking indicates everything is ready, switch the pins on both sides to be SPI and transfer the data. Switch back after the transfer.
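To make that concrete, here is a minimal host-side model of the handshake being described. The names and the 16-byte buffer are invented for illustration; on real hardware the ready line would be a GPIO pin and the buffer copy would be the actual SPI burst:

```cpp
#include <array>
#include <cstdint>

// Host-side sketch of the handshake: the ChipKIT raises a "data ready"
// line as GPIO, the Mega polls it when it has spare cycles, both sides
// "switch to SPI" for the burst transfer, then switch back to GPIO.
struct Link {
    bool ready_line = false;               // the GPIO handshake line
    std::array<uint8_t, 16> spi_buffer{};  // stands in for the SPI transfer
};

// ChipKIT side: stage a result set and raise the ready line.
void producer_publish(Link& link, const std::array<uint8_t, 16>& results) {
    link.spi_buffer = results;  // data staged for the SPI burst
    link.ready_line = true;     // GPIO: "come and get it"
}

// Mega side: poll the line; if data is waiting, clock it out and
// drop the line again so both sides return to GPIO mode.
bool consumer_poll(Link& link, std::array<uint8_t, 16>& out) {
    if (!link.ready_line) return false;  // nothing yet, keep driving LEDs
    out = link.spi_buffer;               // the SPI transfer itself
    link.ready_line = false;             // handshake complete
    return true;
}
```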
So, that's what I'm thinking at the moment. What do you think? Is a custom interface the way to go -- (have you created one yourself?) -- or would you always try to stick with a standard protocol?
As betamax pointed out earlier, there is the UART option. I have created several protocols of my own (sometimes you just have to -- I did a blog on that on MCC; maybe it's time to resuscitate it) as well as implemented other protocols like Modbus. (Look for my articles in Circuit Cellar on creating a Modbus master (CC issues 200/201, March & April 2007) and creating a Modbus slave (CC issue 216, July 2008).)
My vote is for a UART, for the following reasons, although I2C may meet some of these as well (I am not that familiar with it):
1. You can have duplex communication.
2. You can detect framing, parity, and overrun errors in hardware.
3. Each byte is self-synchronising.
4. If you use the 9-bit Intel protocol, you can establish an easy way to synchronise the message as a whole. (Some micros do not have the 9-bit mode, but it can be added with a little software manipulation -- see my Cypress app note, AN2269.)
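A quick host-side model of point 4 (the encoding below is my own illustration of the scheme, not code from the app note): the 9th bit flags the address byte that starts a message, so a receiver can join the bus mid-stream and still resynchronise on the next message boundary.

```cpp
#include <cstdint>
#include <vector>

// Frames are modelled as uint16_t with bit 8 as the address flag:
// set on the first (address) byte of a message, clear on data bytes.
constexpr uint16_t kAddressFlag = 0x100;

uint16_t make_address_frame(uint8_t addr) { return kAddressFlag | addr; }
uint16_t make_data_frame(uint8_t data)    { return data; }

// Receiver: ignore everything until an address frame for us appears,
// then collect the data frames that follow it into the message.
std::vector<uint8_t> receive(const std::vector<uint16_t>& frames,
                             uint8_t my_addr) {
    std::vector<uint8_t> message;
    bool selected = false;
    for (uint16_t f : frames) {
        if (f & kAddressFlag) {
            selected = ((f & 0xFF) == my_addr);  // a new message starts here
            message.clear();
        } else if (selected) {
            message.push_back(static_cast<uint8_t>(f));
        }
    }
    return message;
}
```

Note how a stray data byte before the address frame is simply ignored -- that is the whole-message synchronisation the 9th bit buys you.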
@antedelivian: My vote is for a UART for the following reasons...
A UART would certainly use fewer pins, but that's not a major concern in this case. Re the framing, parity, and overrun errors: my two MCUs are going to be sitting just a couple of inches from each other, so I don't think there's going to be much of a problem -- in the worst case, if there were an error, the end result would just be a glitch on the main display. On the other hand, it wouldn't hurt me to add parity just for the heck of it :-)
Since the Arduino display-driving task is so finicky, I think that you need to have the Arduino as the master no matter how you implement the interface.
With either I2C or SPI, the way to do it would be to have a data buffer in the other device already populated with the last set of results, so it wouldn't have to take time to gather the results -- it would just send them off and go happily back to its calculations. I don't know if the Duo has hardware support for I2C or SPI slave mode, but a lot of processors have that. This would keep any interruption to the calculations to a minimum.
Of course, it would be useful for the Arduino to know if new data is ready before going to the bother of getting it. You can do that with a "ready" output on the other device that the Arduino can read at its leisure.
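The "already populated buffer" idea is classic double buffering: the DSP side writes each new result set into a back buffer and then swaps, so a complete, consistent front buffer is always ready to ship over I2C/SPI at a moment's notice. A minimal sketch (the class and its names are illustrative):

```cpp
#include <array>
#include <cstddef>

// The back buffer is where new FFT results are written; the front
// buffer always holds the last complete result set. swap() is instant
// (it just flips an index), so the transfer never waits on a calculation.
template <typename T, std::size_t N>
class DoubleBuffer {
public:
    std::array<T, N>& back() { return buffers_[1 - front_]; }
    const std::array<T, N>& front() const { return buffers_[front_]; }
    void swap() { front_ = 1 - front_; }  // publish the new results
private:
    std::array<std::array<T, N>, 2> buffers_{};
    int front_ = 0;
};
```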
The custom interface you describe should work as well, but it might be more work if there are already libraries available for the standard protocols.
"Have you ever noticed how each design decision has a ripple-on-effect that influences other decisions downstream?"
- Yes, that is how life is for engineers! :) Currently I am going through the consequences of such a decision that was made 15 years back. Fifteen years back, a design was upgraded to add an Ethernet port, keeping the rest of the hardware the same (for minimum effort) by adding a co-processor (let's call it P2) with an Ethernet port (MII interface). The main processor on the board (let's call it P1) did not have an MII interface, but it was kept rather than replaced to save effort. The communication between the old main processor P1 and the new co-processor P2 was implemented using a unique interface on P2 that allowed P1 to access shared memory through P2. Now that Ethernet processor P2, which was added to save time back then, is obsolete. The main processor P1 is not yet obsolete, but the original developers have left or retired from the company; there is no straightforward replacement for P2, and none of the modern processors support that unique interface P2 had.
The lesson I have learned from this: whenever a non-standard, not-so-popular interface is used to get something done quicker in the short term, there remains a high risk of having to redo things in the long term when that special/unique interface becomes obsolete.
@Sanjib: The lesson I have learned from this: whenever a non-standard, not-so-popular interface is used to get something done quicker in the short term, there remains a high risk of having to redo things in the long term when that special/unique interface becomes obsolete.
That's a good point -- but in this case I will probably only ever build this one display, and the great thing about my "bare metal home-grown interface" is that it's so incredibly simple.
Also, if the truth be known, it's fun making your own interface LOL
In the electronics domain, 15 years are like 50 years. @Sanjib, it is just amazing that your board with that P1 processor is still functional.
It may be easier to scrap the complete board altogether instead of reverse-engineering that unique interface with P2.
Protocols are sliced into little threads, which are then executed on any of the available CPUs as needed. Libraries and Virtual Peripherals can then be defined in a completely hardware-independent way and don't need a specific set of peripherals anymore.
So just reuse -- or, better, design the Virtual Peripheral all by yourself -- and use it in your project. Use an extended version of the Arduino IDE or Atmel Studio. Use the Arduino language or C++. Or add your custom interface (in VHDL or Verilog); there is still space for it.
I'm not sure if you've ever heard about System Hyper Pipelining, have you? ;-)
So stay tuned and check out my Indiegogo crowdfunding campaign, starting in three days. You will hear about it, for sure -- or check out Arduissimo on Indiegogo.
PS: Programming your application in a multicore environment -- which could be as easy as programming an Arduino -- is a lot more fun.
PPS: After almost 20 years of single-CPU programming, I'm an old man tired of worrying about interrupts breaking my timing-critical single-core protocol, or of reading specifications for yet another implementation of a standard peripheral ;-)
Many small processors, even the PSoC, have USB interfaces on them. The protocol is easy to use, you can set the polling rate, and the electrical interface should be agnostic to 3.3/5.0 volts. The drivers are usually included in a BSP, and there are many websites to assist: http://janaxelson.com.
For your particular application (an ATmega as the display driver and a 32-bit micro/FPGA for audio sampling and possibly the FFT), I'd use a UART. The ATmega should have at least one UART, and the FPGA/32-bit micro will very likely have multiple UARTs.
All handshaking can happen in software over the TX/RX pins, and transfer speeds of 1 Mbps should be achievable on both the ATmega and the FPGA/32-bit micro.
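As an example of what "handshaking in software" could look like, here is one possible (entirely made-up) frame format -- a sync byte, a length, the payload, and an additive checksum -- so the receiver can spot and drop frames corrupted on the wire:

```cpp
#include <cstdint>
#include <optional>
#include <vector>

constexpr uint8_t kSync = 0xA5;  // marks the start of every frame

// Sender side: wrap a payload as [sync][length][payload...][checksum].
std::vector<uint8_t> encode(const std::vector<uint8_t>& payload) {
    std::vector<uint8_t> frame{kSync, static_cast<uint8_t>(payload.size())};
    uint8_t sum = 0;
    for (uint8_t b : payload) { frame.push_back(b); sum += b; }
    frame.push_back(sum);  // checksum lets the receiver drop bad frames
    return frame;
}

// Receiver side: validate sync, length, and checksum; return the
// payload on success, or nothing if the frame is malformed/corrupt.
std::optional<std::vector<uint8_t>> decode(const std::vector<uint8_t>& frame) {
    if (frame.size() < 3 || frame[0] != kSync) return std::nullopt;
    uint8_t len = frame[1];
    if (frame.size() != static_cast<size_t>(len) + 3) return std::nullopt;
    uint8_t sum = 0;
    for (size_t i = 0; i < len; ++i) sum += frame[2 + i];
    if (sum != frame.back()) return std::nullopt;  // corrupted in transit
    return std::vector<uint8_t>(frame.begin() + 2, frame.end() - 1);
}
```

Over a couple of inches of wire a single additive checksum is plenty; for longer runs a CRC would be the more robust choice.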
The other possibility, especially if higher transfer speeds are required, is to use SPI. Again, SPI is an established protocol and works well for this sort of thing.
I2C was designed for a scenario where a master device needs to communicate with many slave devices over only two wires (the same is true of SPI, but SPI is OK for maybe up to 5-10 devices, whereas I2C can handle hundreds of slave devices). I2C bus speeds are typically slow, with a maximum bus frequency of 400 kHz, though SMBus can reach 1 Mbps as well -- but the ATmega's TWI (I2C) interface is not capable of doing SMBus transfers.
I think that UART and SPI are ideal for this sort of thing. Coming up with a custom protocol can be fun, but it is unnecessary and will require a larger time investment to get right.
on Indiegogo. I've recommended they see if they can do a VAT-less version for those of us outside the EU.
BTW, virtual peripherals have a long history: Scenix did it (Scenix made a 50 MHz 8-bit PIC clone, then became Ubicom and created a 32-bit multithreaded CPU with virtual peripherals, then went bankrupt, and Qualcomm bought the assets), the Parallax Propeller does it (and Parallax did a Scenix-based BASIC Stamp), and the XMOS chips do it.
Hey Max, you should check out a couple of Arduino libraries I've written (one now in beta testing) and the hardware they run on. Full disclosure: my company makes this Arduino-compatible board.
The display library is OctoWS2811; a quick Google search will quickly turn it up. OctoWS2811 works with Adafruit NeoPixel LEDs and all other WS2811/WS2812-based addressable LEDs. OctoWS2811 drives 8 strips in parallel, so it can update 8X faster (these LEDs always use 800 kHz data rates). But the really important feature of OctoWS2811, for your project, is that it accomplishes this feat using efficient DMA transfers that leave the CPU almost completely unused while the LEDs update.
All that free CPU could really help you communicate with another processor. With a traditional library, like Adafruit's NeoPixel code, interrupts are fully blocked while the LEDs update. On an AVR-based Arduino board, where the peripherals have only one byte of buffering, that means you can't receive data while transmitting to the LEDs.
However, another new library I've been developing for nearly a year may completely change your two-chip paradigm!
This new audio library allows processing CD-quality audio with the ease and simplicity you'd expect from Arduino. That probably sounds too incredible to be true, but the catch is that it only works on the Teensy 3.1, running at 96 MHz with an ARM Cortex-M4 chip. That's a similar chip to the M3 on the Arduino Due, but the M4 includes special DSP-accelerating instructions, which of course my audio library uses to good effect.
The audio library already has a 256-point FFT object, and a larger FFT object is planned. The library can use the on-chip 12-bit ADC to receive audio, or an inexpensive shield provides 16-bit audio in and out. The library provides a toolkit of audio-processing objects and connection objects, so you can link them together for automatic CD-quality data flow as your Arduino sketch runs. You never have to deal with moving fast data around (the library does it very efficiently); you can just use FFT.available() in your loop() function to detect when the library has completed another FFT calculation.
With these two libraries, rather than dealing with two chips and the complexity of communicating lots of data, you could write a very simple Arduino sketch that checks FFT.available(), reads the spectral data with FFT.read(), calls leds.setPixel() many times to compose your output, and then calls leds.show() to update all those NeoPixels. OctoWS2811 has double buffering for awesome flicker-free animation, and the audio library automatically keeps processing incoming audio, making this a very simple project.
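The only glue code left to write is folding FFT bins down to one bar height per display column. Here's a host-testable sketch of that step -- the bin, column, and row counts and the 0-255 magnitude scale are assumptions for illustration, and the actual library calls (FFT.read(), leds.setPixel()) are deliberately left out:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

constexpr int kBins = 128;    // magnitude bins from a 256-point FFT
constexpr int kColumns = 16;  // display columns (illustrative)
constexpr int kRows = 8;      // LEDs per column (illustrative)

// Average each group of adjacent bins into one column, then scale the
// 0-255 magnitude to the number of lit LEDs (0..kRows) in that column.
std::array<int, kColumns> bar_heights(const std::array<uint8_t, kBins>& mags) {
    std::array<int, kColumns> bars{};
    constexpr int group = kBins / kColumns;
    for (int c = 0; c < kColumns; ++c) {
        int sum = 0;
        for (int i = 0; i < group; ++i) sum += mags[c * group + i];
        int avg = sum / group;
        bars[c] = std::min(kRows, (avg * kRows) / 255);
    }
    return bars;
}
```

A real spectrum display would more likely group the bins logarithmically (more bins per column at the high end), but the linear grouping keeps the example short.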
I hope you'll take a look. These libraries are open source, so you can get all the code to inspect. Not only will they let you do this all on a single inexpensive board, but they'll make it far easier than any of the approaches you mentioned in this article.
If it works out well for you, I hope you'll help spread the word. :)