Why Aren’t You Listening To Me, Dave?
Computers have played a part in music creation for some time now, in a variety of capacities, most often as audio sequencing or effects-processing tools. However, Christopher Raphael of Indiana University has demonstrated a program capable of performing live, in time with a human instrumentalist, without missing a beat.
Robo-Rhythm Keeps Time
This is a remarkable achievement, as teaching a computer to accompany a human musician dynamically is no trivial matter. Raphael has likened it to the difficulties of speech recognition: the myriad subtle inflections and accentuations that we take for granted can make huge differences to how we interpret speech. Music raises similar issues. While music is sound, and sound is composed of waves that can be represented digitally, interpreting those signals is an extremely complex process.
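To get a sense of what the computer is actually working with, here is a minimal sketch in Python (using NumPy) of how a sound wave becomes digital data: one second of a pure 220 Hz tone reduced to a long list of sampled numbers. The sample rate and the tone are illustrative choices for this example, not details of Raphael's system.

```python
import numpy as np

# Digital audio is just a long list of numbers: the air-pressure wave,
# measured thousands of times per second. Here we sample one second of
# a pure 220 Hz tone at the CD-standard rate of 44,100 samples/second.
SAMPLE_RATE = 44_100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE   # one second of time stamps
wave = np.sin(2 * np.pi * 220.0 * t)       # the A below middle C

print(len(wave))   # 44100 numbers for a single second of sound
print(wave[:4])    # the first few samples the computer "hears"
```

Everything the program knows about a live performance has to be recovered, in real time, from streams of numbers like these.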
The A below middle C on a piano vibrates at 220 Hertz, but in doing so it also produces weaker waves (overtones) at whole-number multiples of that frequency: 440 Hz, 660 Hz, 880 Hz, and higher still, with strengths that depend on the instrument and on how the note is played. This mix of overtones is what gives a note its quality, or timbre, which is why a piano note sounds different from a saxophone's. So when multiple notes from different instruments sound together, the result is difficult to break down into elements that a computer can understand.
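A rough illustration of the problem, again in Python with NumPy: the sketch below builds a 220 Hz note with the overtones mentioned above (the relative strengths are invented for the example; a real piano's differ) and applies a Fourier transform, the standard tool for turning a waveform into a spectrum. Even this single, idealized note shows up as four separate frequency peaks.

```python
import numpy as np

SAMPLE_RATE = 44_100   # samples per second (CD quality)

# Fundamental of the A below middle C, plus its first three overtones.
# The relative strengths here are illustrative guesses; a real piano's
# depend on the instrument and on how hard the key is struck.
partials = {220.0: 1.0, 440.0: 0.5, 660.0: 0.3, 880.0: 0.15}

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE   # one second of audio
note = sum(amp * np.sin(2 * np.pi * freq * t)
           for freq, amp in partials.items())

# A Fourier transform turns the waveform into a spectrum, exposing
# every partial at once. This is what the computer actually "sees".
spectrum = np.abs(np.fft.rfft(note))
freqs = np.fft.rfftfreq(len(note), d=1.0 / SAMPLE_RATE)

# The four strongest spectral peaks recover the four partials.
strongest = np.sort(freqs[np.argsort(spectrum)[-4:]])
print(strongest)   # [220. 440. 660. 880.]
```

Now imagine a saxophone's quite different mix of overtones layered on top, and the difficulty of deciding which peaks belong to which note becomes apparent.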
This breakthrough has implications for live performance, DRM, sampling, and artificial intelligence. Computers may not be replacing the London Philharmonic just yet, but Raphael's program may prove to be a valuable rehearsal and compositional tool.