Last week after class I tried training a PerformanceRNN model on my Yoko Shimomura dataset for more steps (>30,000), but it still didn’t produce a good generation free of perpetually sustained notes.
Because of that failure and my past performances, for this final performance I want to create something that moves beyond generating musical phrases that merely resemble my dataset. I decided to use MusicVAE.js to create music from latent space interpolations.
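Conceptually, MusicVAE's interpolation encodes the two source melodies into latent vectors, walks a line between them, and decodes each intermediate point back into a melody (the library exposes this via `MusicVAE.interpolate`). Here's a minimal plain-JavaScript sketch of just the latent-space walk; the vectors are made-up stand-ins for encoded melodies, and `latentLine` is an illustrative name, not part of the library:

```javascript
// Linearly interpolate between two latent vectors zA and zB,
// producing `steps` evenly spaced points (endpoints included).
// In the actual app, each point would be decoded back into a note sequence.
function latentLine(zA, zB, steps) {
  const out = [];
  for (let i = 0; i < steps; i++) {
    const t = steps === 1 ? 0 : i / (steps - 1);
    out.push(zA.map((a, d) => a * (1 - t) + zB[d] * t));
  }
  return out;
}

// Toy 2-D "latent codes" standing in for the two encoded source melodies.
const line = latentLine([0, 0], [1, 2], 5);
// line[0] is melody A's code, line[4] is melody B's, line[2] the midpoint.
```

Because neighboring points on the line decode to similar melodies, adjacent tiles in the sequencer end up sounding like slight variations of each other.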
The final code is a combination of Torin Blankensmith’s Melody Mixer (blog post, repo) and Tero’s Latent Cycles – honestly it’s just a much simpler version of Latent Cycles.
I used Torin’s code for one of his Melody Mixer demos as a base, and I wrote my own Tone.js code to play the interpolated note sequences the way I wanted. Basically, each note sequence behaves like a toggle-able loop, and the second row plays 2x as slowly as the first row.
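The playback behaviour boils down to a small scheduling rule: every tile loops its sequence, and a second-row tile uses twice the loop interval of a first-row tile. A sketch of that rule, with illustrative names of my own (in the app itself, the interval would feed something like `new Tone.Loop(callback, interval)` and toggling would start or stop that loop):

```javascript
// Loop interval (in seconds) for a tile, given its row index.
// Row 0 loops at the base rate; row 1 gets double the interval,
// so its melody plays at half speed relative to the first row.
function tileInterval(row, baseSeconds) {
  return baseSeconds * Math.pow(2, row);
}

// Toggle state per tile: clicking a tile flips whether its loop is playing.
function toggleTile(activeTiles, tileId) {
  if (activeTiles.has(tileId)) activeTiles.delete(tileId);
  else activeTiles.add(tileId);
  return activeTiles;
}
```

Keeping the speed ratio a power of two means the two rows stay phase-aligned every other pass of the slow row, which is what makes the layered loops feel intentional rather than chaotic.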
The two source melodies are the default melodies provided in Torin's code.
I think it’s interesting to emulate Steve Reich / Brian Eno with the latent space sequencer: adjacent tiles are similar to each other, so playing them at different speeds and start times creates interesting sonic textures.