PORTLAND, Ore. – Music researchers report making strides toward a modern-day "brain cap" that can detect and recognize musical ideas in the minds of composers with up to 99 percent accuracy.
Eduardo Reck Miranda, head of computer music research and leader of the neuroscience-of-music group at the University of Plymouth, England, recently reported up to 99 percent accuracy in recognizing specific electroencephalogram patterns for musical ideas using a 128-electrode EEG brain cap with signal-processing algorithms including three neural networks.
"When the technology is more mature we will test it with musician patients at the Royal Hospital of Neuro-Disability" in London, Miranda said. "The idea is to let these patients have the opportunity to continue making music, provided that the brain damage did not impair their musical cognition. It may be that by stimulating the musical part of the brain that was involved in motor control in other words, playing instruments we can contribute to improving the damaged motor part of their brain."
EEGs are readouts of brain waves and are used medically to enable the early diagnosis of brain maladies. But "reading minds" by recognizing ideas in those brain waves is the stuff of science fiction (for instance, the "brain cap" in Arthur C. Clarke's 3001: The Final Odyssey). Although the musical ideas tested in Miranda's research were extremely simple compared with the enormous complexity of musical composition, the team's success nevertheless cracks open the door to those science-fiction domains.
"I don't want to overstate what we have achieved so far, but we have shown that the idea of interfacing the brain with computers, at least for musical applications, is no longer a science fiction fantasy," said Miranda. He performed the work with Ken Sharman, a senior researcher at the Polytechnic University of Valencia (Spain); University of Glasgow psychologist Kerry Kilborn; and musician Alexander Duncan at The Sun Centre (Prades, France).
Miranda's experiments measured the spectral density of EEGs while the experimental subjects were performing various mental musical activities. By learning the pattern of amplitudes for the various frequency components derived from a Fourier transform, a neural network was subsequently able to automatically detect and distinguish between musical mental states.
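The feature-extraction step described can be sketched in a few lines of Python. The window length, sample rate, and frequency band below are illustrative assumptions, not figures from the study:

```python
import numpy as np

def eeg_spectral_features(window, sample_rate=256):
    """Return the power spectral density of one single-electrode EEG window.

    The 256 Hz sample rate and 1-40 Hz band are illustrative
    assumptions; the study did not publish these parameters.
    """
    # Taper the window to reduce spectral leakage, then take a real FFT.
    spectrum = np.fft.rfft(window * np.hanning(len(window)))
    power = np.abs(spectrum) ** 2 / len(window)
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
    # Keep only components in the usual EEG range (roughly 1-40 Hz).
    band = (freqs >= 1.0) & (freqs <= 40.0)
    return freqs[band], power[band]

# One second of synthetic data: a 10 Hz alpha-like rhythm plus noise.
rng = np.random.default_rng(0)
t = np.arange(256) / 256.0
window = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(256)
freqs, power = eeg_spectral_features(window)
print(freqs[np.argmax(power)])  # strongest component near 10 Hz
```

The vector of band powers produced this way is the kind of amplitude pattern a neural network can then learn to associate with a mental state.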
"We consulted with a few psychologists to come up with the musical mental states to target. We performed three different experiments, which we refer to as auditory stimulus, active listening and musical focusing," said Miranda.
In each case spectral EEG information was recorded while the subject performed the specific mental task. Half the data was set aside for training the neural network and the other half for testing the neural net for accuracy after it had been trained. A perceptron neural network, with one hidden layer, then trained on its half of the data using a conjugate gradient algorithm. Subsequently, the accuracy of the neural network was verified by testing it on the reserved half of the data set.
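That train-on-half, test-on-half protocol can be sketched with scikit-learn. The data here is synthetic and the dimensions are invented; scikit-learn also does not ship a conjugate gradient solver, so the sketch substitutes the L-BFGS optimizer for the conjugate gradient algorithm the team used:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's data: spectral feature vectors,
# each labelled with one of two mental states (dimensions invented).
n_trials, n_features = 200, 40
features = rng.normal(size=(n_trials, n_features))
labels = (features[:, :5].sum(axis=1) > 0).astype(int)  # separable rule

# Half the trials train the network; the held-out half tests it.
half = n_trials // 2
net = MLPClassifier(hidden_layer_sizes=(10,),  # one hidden layer
                    solver="lbfgs",            # stand-in for conjugate gradient
                    max_iter=1000, random_state=0)
net.fit(features[:half], labels[:half])
accuracy = net.score(features[half:], labels[half:])
print(f"held-out accuracy: {accuracy:.2f}")
```

Scoring on data the network never saw during training is what makes the reported accuracy figures meaningful rather than a measure of memorization.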
The auditory-stimulus experiment tested whether it was possible to automatically detect the moment that a listener first hears a musical passage (as opposed to the silent moment before). The results demonstrated that such an automatic recognition system could achieve 78 to 89 percent accuracy.
The active-listening experiment measured whether it was possible to automatically detect and recognize the difference between passively listening to a musical passage and actively thinking through the musical passage in the "mind's eye." Surprisingly, the results showed that a system could distinguish passive from active listening with 95 to 99 percent accuracy.
The final experiment tested whether the subject was passively hearing music holistically or focusing on a specific musical instrument (isolated in a single stereo channel, either left or right). The weakest results were found here, but the neural network was still able to detect and recognize the difference between focused and holistic listening with a statistically significant 65 to 75 percent accuracy.
"I am now working with Dr. Maria Stokes, of the Royal Hospital for Neuro-Disability in London, in order to design further experimental scenarios like these, with the objective of establishing more situations in which musical cognition can be spotted in the EEG," said Miranda. "This is the most difficult part of this research-identifying recurrent patterns of EEG that correspond to musical mental processing."
Miranda is also working on two follow-up experiments that build directly on his current results. A "brain soloist" project will attempt to extend the active-listening experiment by repeating a musical passage exactly until a composer actively listens to it. When the neural network detects that the composer is actively listening, it changes the notes played. A "brain conductor" experiment, on the other hand, will extend the holistic-vs.-focused experiment by enabling a composer to change the volume of one instrument in a quartet merely by focusing on it.
"I have some encouraging results so far, but it is too early to tell anything conclusively," said Miranda. "Also, we need to replicate our current experiments with other subjects in order to further evaluate the system. I am currently setting up a new project team [with] Dr. Stokes and engineers from the University of Oxford."
Miranda also plans to switch from the cumbersome 128-electrode brain cap to a magnetoencephalograph (MEG), which records the magnetic fields generated by neural activity. Today MEG is less well-developed than EEG, but it holds the promise of providing more accurate, localized signals that might not even require a cap, Miranda said. (EEGs measure the difference in electrical potential on the scalp, but sensing magnetic fields does not require direct attachment to the scalp.)