Coding The Algoraves
The idea of robots making music is not new – in fact, as all music technology is essentially designed to increase the potential of music making, enabling the technology to create its own music seems like a logical extension of the concept. But how far has it come – and how far can it go?
Groovy Coders
As laptops became more common and affordable and DAWs became more powerful, creating electronic music became something anyone could do (at least in theory) – but for those who wanted to show off their creations in a communal space such as a club, there was the question of how to differentiate what they were doing from simply lining up a playlist.
Flashing lights, exotic DAW controllers and frantic head-bobbing all had their place as laptop musicians moved out into the public eye, but it seems that now there is both a new form of production and a new audience – one that actually wants to watch the code behind the music.
Procedurally generated music has been around for a while, for example in mobile apps such as Brian Eno’s ‘Bloom’ – where the user can adjust a few parameters and the audio output can theoretically continue forever. As such, the idea of an album you can listen to over and over is replaced by a set of parameters that generates music which is constantly changing and evolving and can’t be specifically re-listened to (unless you record it as it’s being generated).
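To make the idea concrete, here is a minimal sketch in Python – not Bloom’s actual engine, and the parameter names are invented purely for illustration – showing how a handful of parameters can drive an endless, non-repeating stream of notes:

```python
import random

# Illustrative parameters (hypothetical names, not from any real app):
# a scale to draw from, how busy the output is, and how far the melody
# is allowed to wander on each step.
PARAMS = {
    "scale": [60, 62, 65, 67, 70],  # C minor pentatonic, as MIDI note numbers
    "density": 0.7,                 # probability that a given beat gets a note
    "drift": 2,                     # maximum random-walk distance per step
}

def generate(params, beats=16, seed=None):
    """Yield (beat, midi_note or None) events. In principle this could run
    forever; here it stops after `beats` so the example terminates."""
    rng = random.Random(seed)
    scale = params["scale"]
    index = rng.randrange(len(scale))
    for beat in range(beats):
        if rng.random() < params["density"]:
            # A random walk over the scale keeps the line evolving
            step = rng.randint(-params["drift"], params["drift"])
            index = max(0, min(len(scale) - 1, index + step))
            yield beat, scale[index]
        else:
            yield beat, None  # rest

if __name__ == "__main__":
    for beat, note in generate(PARAMS, seed=42):
        print(f"beat {beat:2d}: {'rest' if note is None else f'note {note}'}")
```

Change the seed or any parameter and the stream takes a different, equally endless path – which is exactly why there is no fixed ‘album’ to go back to.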
There is now a sub-culture developing in San Francisco where artists who create music-generation algorithms ‘perform’ their creations live: the performance consists of them typing lines of code, with the laptop’s display projected onto a screen so the audience can see which parameters and commands are being triggered as they listen to the corresponding audio output. For more details, check out this article over at WIRED…
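As a rough illustration of what the audience watches – loosely inspired by live-coding environments such as TidalCycles and Sonic Pi, but not their actual syntax – the performer might retype a single line between bars and hear the music change on the next cycle:

```python
# A loose sketch of the live-coding workflow: named channels hold step
# patterns as text, and the running loop reads whatever is currently defined.
patterns = {
    "d1": "bd . sn . bd . sn",    # kick/snare on alternating steps; '.' is silence
    "d2": ". hh . hh . hh . hh",  # off-beat hi-hats
}

def play_bar(bar_number):
    """Print which sample each channel would trigger on each step."""
    for name, pattern in sorted(patterns.items()):
        hits = [s if s != "." else "--" for s in pattern.split()]
        print(f"bar {bar_number} {name}: " + " ".join(f"{h:>2}" for h in hits))

if __name__ == "__main__":
    play_bar(1)
    # Mid-performance, the coder retypes one line and the next bar changes:
    patterns["d1"] = "bd bd sn . bd . sn sn"
    play_bar(2)
```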