
Building a Mandolin-to-MIDI bridge with a PSoC

The better way
Mike@FH   8/6/2015 4:38:55 PM
Professional applications usually perform this task using wavelets.  This should also give a result faster than autocorrelation.
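[Editor's note: for reference, the autocorrelation approach being compared against can be sketched in a few lines. This is a deliberately naive version; the frame length, sample rate, and frequency limits are illustrative assumptions, not values from the project.]

```python
import math

def detect_pitch_autocorr(frame, sample_rate=44100, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental (Hz) of one frame via time-domain autocorrelation."""
    n = len(frame)
    mean = sum(frame) / n
    frame = [s - mean for s in frame]            # remove DC offset
    lag_min = int(sample_rate / fmax)            # shortest period of interest
    lag_max = min(int(sample_rate / fmin), n - 1)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max):
        corr = sum(frame[i] * frame[i + lag] for i in range(n - lag))
        if corr > best_corr:                     # peak lag = one period
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# A pure 440 Hz tone should come out within a few Hz of 440.
tone = [math.sin(2 * math.pi * 440.0 * i / 44100) for i in range(2048)]
print(round(detect_pitch_autocorr(tone), 1))
```

The cost is O(frame length x number of candidate lags) per frame, which is part of why faster transforms (FFT-based methods, wavelets) are attractive on small embedded targets.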

Re: The better way
GSKrasle   8/6/2015 6:02:23 PM
I like the project; all in all, this is an interesting way to get some advanced practice in DSP, and to prove (and IMprove) your chops. It's fundamentally 'JTFA'-appropriate (joint time-frequency analysis — wavelets!); you could do worse than use this as an excuse to delve there.

But you are implicitly assuming the instrument is in tune and the player is competent (think fretless instruments). What happens if a 'note' is 'off'? Will the project find the closest one, or will it miss it entirely?

Conceptually, if you could identify the strongest frequency component (presumably the fundamental), and then determine both the closest valid note and the distance from that note, you would have information that could be used to tune an instrument and/or school the player. (Or autotune, like that Peavey guitar.) You could then use a tunable comb filter to remove the note and its harmonics (as long as it was of significant amplitude), with the residual signal being re-processed to catch other notes, even if they were much lower in volume. If the tunable filter were locked on with a PLL, you could even catch glides and bends (or whatever they call that effect).
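[Editor's note: the "closest valid note plus distance" idea maps directly onto the MIDI note scale. A minimal sketch, assuming an equal-tempered A4 = 440 Hz reference; the function name is illustrative.]

```python
import math

A4_HZ = 440.0  # assumed reference pitch (equal temperament)

def nearest_note(freq_hz):
    """Return (midi_note, cents_off): the nearest equal-tempered MIDI note
    and the signed deviation from it in cents (at most +/-50)."""
    midi_exact = 69 + 12 * math.log2(freq_hz / A4_HZ)  # 69 = A4 in MIDI
    midi_note = round(midi_exact)
    cents_off = 100.0 * (midi_exact - midi_note)
    return midi_note, cents_off

note, cents = nearest_note(445.0)   # a slightly sharp A4
print(note, round(cents, 1))        # MIDI note 69, roughly +20 cents
```

The integer goes into the MIDI note-on message; the cents value is exactly the "distance" that a tuner display or an autotune correction would consume.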

This is similar to some work I've done in linguistics, identifying and tracking the fundamental of voiced speech. That, along with the shape of the transfer function and the presence of sibilance, all developed as functions of time, is sufficient to reproduce quality expressive speech, or, by fiddling with one parameter or another, to alter/disguise it. Or autotune it.



Re: The better way
mithrandir   8/6/2015 8:49:06 PM

Sadly, the extent of my DSP knowledge ends at FFTs and correlations. I know that wavelets are used, but I never got around to implementing them, partly because I have a feeling they are pretty math-heavy to compute on smaller embedded systems. I could be wrong on this one, so if you could point me to a good starting point I'll give it a shot.


You're absolutely right: I assume the instrument is tuned and fretted. Part of the reason this is required is that MIDI, in essence, can only take 'correct' note values (they are simple integers). Even if I could identify a mistuned note, I cannot really make an educated decision on what the MIDI number should be. Differently tuned instruments (say, from different schools) are still fine, as the notes may be different but still in tune.

The tunable comb filter to reject specific frequencies is especially interesting. The code currently can recognize one note; chords are a bit tricky. If I can mix in your approach, it might just be possible (processing-overhead-wise) to catch a couple more notes and identify chord patterns. Thanks for the advice, I'll see if I can get around to doing this sometime in the coming weekends :)




