In what is described as "essentially a human-scale system of reproducing music," researchers at the University of Rochester claim to have developed a way to digitally reproduce an original music performance from a file nearly 1,000 times smaller than a regular MP3. The announcement was made at the International Conference on Acoustics, Speech and Signal Processing currently being held in Las Vegas, where the researchers described how they had encoded a 20-second clarinet solo in less than a single kilobyte.
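To put those numbers in perspective, here is a back-of-envelope size comparison for a 20-second clip. The sample rate, bit depth, and MP3 bitrate are illustrative assumptions for the sketch, not figures from the researchers:

```python
# Rough sizes for a 20-second clip under assumed encoding settings.
DURATION_S = 20
SAMPLE_RATE = 48_000       # Hz (assumed)
CHANNELS = 2
BYTES_PER_SAMPLE = 2       # 16-bit PCM (assumed)

wav_bytes = DURATION_S * SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE
mp3_bytes = DURATION_S * 128_000 // 8   # 128 kbps MP3 (assumed)
model_bytes = 1_000                     # "less than a single kilobyte"

print(wav_bytes / 1e6)           # uncompressed wav is a few megabytes
print(mp3_bytes // model_bytes)  # the MP3 is still hundreds of times larger
```

Even against a compressed MP3, a sub-kilobyte parameter file is two to three orders of magnitude smaller, which is the gap the researchers are claiming.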
Their approach is to reproduce - or replay - the original performance using a virtual clarinet and a virtual performer, built from computer models that attempt to capture every aspect of how the instrument shapes the sound and how a performer interacts with the instrument. Based on real-world acoustical measurements, the virtual clarinet model captures details such as the backpressure in the mouthpiece for every fingering, as well as the way sound radiates from the instrument.
The virtual player model captures how a performer actually interacts with the instrument - including the fingerings, the force of breath, and the pressure of the player's lips - and how those actions affect the response of the virtual clarinet. The original sound is then reproduced by feeding the record of the player's actions back into the computer model.
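In other words, the "compressed file" stores control gestures rather than audio samples, and a synthesis model turns those gestures back into sound. The toy sketch below illustrates that idea; the field layout, the fingering-to-pitch map, and the sine-wave "instrument" are stand-in assumptions, not the Rochester team's actual format or physical model:

```python
import math
import struct

# Each gesture: fingering id (1 byte), breath pressure 0-255 (1 byte),
# lip pressure 0-255 (1 byte), duration in ms (2 bytes) - 5 bytes total.
# (Layout is a hypothetical example.)
GESTURE = struct.Struct("<BBBH")

def encode(gestures):
    """Pack the performer's control gestures instead of audio samples."""
    return b"".join(GESTURE.pack(*g) for g in gestures)

def synthesize(blob, sample_rate=8_000):
    """Stand-in for the instrument model: map fingering to pitch and
    breath pressure to amplitude, rendering a plain sine per gesture."""
    samples = []
    for fingering, breath, lip, ms in GESTURE.iter_unpack(blob):
        freq = 220.0 * 2 ** (fingering / 12)   # crude fingering -> pitch map
        amp = breath / 255                      # breath -> loudness
        n = sample_rate * ms // 1000
        samples += [amp * math.sin(2 * math.pi * freq * t / sample_rate)
                    for t in range(n)]
    return samples

# Three gestures encode two seconds of "performance" in 15 bytes.
score = [(0, 200, 120, 500), (3, 180, 120, 500), (7, 220, 130, 1000)]
blob = encode(score)
audio = synthesize(blob)
print(len(blob), len(audio))   # 15 bytes in, 16000 samples out
```

The compression win comes from the asymmetry on the last line: a handful of gesture bytes expand into thousands of audio samples, because the decoder carries the instrument model rather than the waveform.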
Although the reproduction isn't perfect - for example, the researchers are still working on including the effect of "tonguing" - it's claimed to be very close to the original sound. Hear it for yourself in these short (20 s) sound clips:
Human performance, conventionally recorded (3.9 MB wav file)
Virtual performance reproduced from the sub-kilobyte parameter file (3.9 MB wav file)
This approach is somewhat reminiscent of Zenph Studios' "re-performances" of classic historical piano recordings, which attempt to digitally capture all the musical nuances of an original recording in order to "re-perform" and re-record the performance using modern technology. (See "Music software to 're-perform' jazz piano masterpiece".)
Both the Zenph Studios approach and the new reproduction scheme are currently limited to handling one musical instrument at a time, but this could change in the future. For example, the University of Rochester's Music Research Lab has developed a method of separating multiple instruments in a mix, opening up the possibility of using this technology on multi-instrument recordings.
Obviously the accuracy and quality of recordings made with these techniques will only improve with time, leaving open many questions and possibilities about the future. In fact, says Mark Bocko, professor of electrical and computer engineering and co-creator of the technology developed at the University of Rochester, "Maybe the future of music recording lies in reproducing performers and not recording them."
Comments, questions or suggestions? Email me at firstname.lastname@example.org.