Here's a new high-def version of the dazzling 3D video/AI-driven performance displayed on the Walt Disney Concert Hall last year. Refik Anadol created the projected visuals, and the music was co-created by adaptive music/sound artist Robert Thomas, who, as longtime New World Notes readers know, once experimented with voice-activated installations and location-reactive soundtracks in Second Life. Here's how Robert and his collaborators created the music for this event -- and by "collaborators", I'm including the machine learning algorithms they created to assist them, which searched through the LA Phil's massive archive of performances, merging disparate fragments of music in novel ways:
"Parag Mital made a browser which let [us] search through hundreds of terabytes of material, to be able to hear different bits of different performances," he says. (Google Arts and Culture provided the technical backing.) Robert and his collaborators then took those clips, created a wealth of new sound files from them, and ran those through audio analysis.
"[We] trained machine learning processes on that audio, and then tried to get it to generate new music." So for instance, in the soundtrack, you'll hear a segment from Stravinsky's Rite of Spring. But then, "it turns into a machine learning hallucination of the melody which goes somewhere else that Stravinsky didn't write."
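Robert doesn't detail the models involved, but the idea of a system learning from existing music and then "going somewhere the composer didn't write" can be loosely illustrated with a toy next-note model. This is a minimal sketch, not the team's actual method -- the opening phrase and every name here are hypothetical placeholders:

```python
import random

# Hypothetical stand-in for an analyzed melody (pitch names, not real data).
analyzed_phrase = ["C", "B", "C", "B", "C", "D", "E", "C", "B", "C", "A", "B"]

def train(notes, order=2):
    """Learn which note tends to follow each pair of notes."""
    table = {}
    for i in range(len(notes) - order):
        table.setdefault(tuple(notes[i:i + order]), []).append(notes[i + order])
    return table

def generate(table, seed, length, rng):
    """Continue a seed phrase by following learned transitions --
    the continuation soon wanders off the original melody."""
    out = list(seed)
    while len(out) < length:
        choices = table.get(tuple(out[-2:]))
        if choices is None:
            # Dead end: jump to a random learned state as a crude fallback.
            out.extend(rng.choice(list(table)))
            continue
        out.append(rng.choice(choices))
    return out[:length]

table = train(analyzed_phrase)
continuation = generate(table, ["C", "B"], length=10, rng=random.Random(1))
```

A real system would model audio or full scores rather than a twelve-note phrase, but the mechanism is analogous: statistics learned from the archive steer newly generated material.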
Another process used to create the soundtrack converted their selected music files into waveforms, and put those through another machine learning algorithm to generate a new sound file. Yet another algorithm broke recordings down into many different fragments, then recomposed them into new forms. "And then we would have bits of Mahler try to re-synthesize bits of Stravinsky... so you would make a Stravinsky passage out of an actual recording of Mahler."
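That last idea -- rebuilding one piece out of fragments of another -- resembles what's often called concatenative synthesis or audio mosaicing. Below is a deliberately crude sketch of the principle, assuming audio as plain lists of samples and a single energy feature per fragment; real systems use far richer descriptors, and none of this code reflects the team's actual implementation:

```python
import math

def frames(signal, size):
    """Chop a signal into non-overlapping fragments of `size` samples."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def feature(frame):
    """Crude one-number descriptor: root-mean-square energy of the fragment."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def mosaic(target, source, size=64):
    """Rebuild the target out of the closest-matching source fragments --
    loosely, a 'Stravinsky passage' made out of a recording of 'Mahler'."""
    src = frames(source, size)
    feats = [feature(f) for f in src]
    out = []
    for t in frames(target, size):
        ft = feature(t)
        best = min(range(len(src)), key=lambda i: abs(feats[i] - ft))
        out.extend(src[best])
    return out

# Synthetic stand-ins for two recordings: sine tones at different pitches.
stravinsky = [math.sin(2 * math.pi * 5 * n / 1000) for n in range(1000)]
mahler = [0.5 * math.sin(2 * math.pi * 11 * n / 1000) for n in range(1000)]
rebuilt = mosaic(stravinsky, mahler, size=50)
```

Every fragment of the output is lifted verbatim from the "Mahler" signal, yet the sequence of fragments follows the shape of the "Stravinsky" target -- the same trade the quote describes, at toy scale.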