Generative Music | Performance 1

October 4, 2018

Concept


I started with the question: Could I take the songs I absolutely adore, boil them down to their essence, and make a new song out of them?

 

So in this exercise / experiment, I intended to Markov-chain 3 of my favorite songs into a single Franken-song.

 

Here are the ingredients:

 

(1) Dearly Beloved, from Kingdom Hearts OST by Yoko Shimomura

 

(2) Passionfruit, from More Life by Drake

 

(3) Gymnopedie No.1 by Erik Satie

 

And this was my workflow:

  1. Find MIDI versions of the songs

  2. Parse the MIDI files into .txt files

  3. Generate Markov chains from the text files

  4. Generate MIDI from the generated chains

1. Finding MIDI versions

 

I went to Musescore and was able to find all 3 songs. Here are the links:

 

2. Parsing MIDI files into .txt files


This was where I got really confused and encountered many, many roadblocks.

 

I started with the Dearly Beloved MIDI file. When I ran it through the code provided in class, I realized that none of the message_components contained "note_off". After browsing around, I figured out that "velocity=0" means the key is played with zero strength, which is the conventional way to signal the end of a note, and that "time" denotes the delta time between the current and previous message. So a message with velocity 0 marks the end of a note, and a time of 0 means it happens at the same instant as the previous message.
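A minimal sketch of that reading, assuming message_components are the space-split fields of a mido message string (e.g. str(mido.Message('note_on', note=60, velocity=0, time=480)).split()); the classify helper here is hypothetical, not the class code:

```python
# Hypothetical helper: classify a MIDI message whose fields arrive as
# strings like 'note=60', 'velocity=0', 'time=480'. A note_on with
# velocity=0 is the common MIDI shorthand for "end of this note".

def classify(message_components):
    """Return (kind, note, time) parsed from the message fields."""
    note = velocity = time = None
    for item in message_components:
        if item.startswith('note='):
            note = int(item.split('note=')[1])
        elif item.startswith('velocity='):
            velocity = int(item.split('velocity=')[1])
        elif item.startswith('time='):
            time = int(item.split('time=')[1])
    kind = 'note_end' if velocity == 0 else 'note_start'
    return kind, note, time

print(classify(['note_on', 'channel=0', 'note=60', 'velocity=64', 'time=0']))
# ('note_start', 60, 0)
print(classify(['note_on', 'channel=0', 'note=60', 'velocity=0', 'time=480']))
# ('note_end', 60, 480)
```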

 

 

Then I realized that to generate a Markov chain, I needed to define what a single state is. Is it a single note with a specific duration? But then how do I define a chord? And how do I define a more complicated phrase like this one?

 

 

I could not fully wrap my head around it, and I ended up with parsing code that did not work for the phrase above, but did work for chords.

 

isSaveTime = False
notes = []

for item in message_components:
    if 'note=' in item:
        note = item.split('note=')[1]
    if 'velocity=' in item:
        vel = int(item.split('velocity=')[1])
        if vel == 0:
            # velocity 0 means end of note,
            # so only save the time (duration) in that case
            isSaveTime = True
        else:
            # velocity > 0 means start of note:
            # don't save its time
            isSaveTime = False
            print('note: ' + note)
            notes.append(str(note))
    if 'time=' in item:
        if isSaveTime:
            dur = int(item.split('time=')[1])
            # if velocity and time are both 0, the current note plays
            # at the same time as the previous one and shares its duration
            if dur > 1:
                print('duration: ' + str(dur))
                notes.append(', ')

 

I also chose to generate two separate files for each track of the song: one for notes, and the other for durations.
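The two-file layout can be sketched like this (the filenames and write_track helper are hypothetical): each file holds one space-separated "sentence", so notes and durations can be fed to the Markov model independently and re-zipped later.

```python
import os
import tempfile

def write_track(notes, durations, notes_path, durs_path):
    """Write parallel note and duration lists as space-separated text."""
    with open(notes_path, 'w') as f:
        f.write(' '.join(str(n) for n in notes))
    with open(durs_path, 'w') as f:
        f.write(' '.join(str(d) for d in durations))

tmp = tempfile.mkdtemp()
notes_file = os.path.join(tmp, 'track0_notes.txt')
durs_file = os.path.join(tmp, 'track0_durs.txt')
write_track([60, 62, 64], [480, 480, 960], notes_file, durs_file)
print(open(notes_file).read())  # 60 62 64
```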

 

 

 

These are the results of the parsing code: 

 

 

The final code was tailored specifically to Dearly Beloved, so when I ran it on Passionfruit and Gymnopedie, I knew it would yield problematic readings. But I moved on.

3. Generate Markov Chains

 

Since the example provided in class was in JS, I googled and found a Markov chain package for Python called Markovify.

 

This library allows a Markov model to be built from multiple corpora of text, so I would be able to frankenstein all 3 of my songs together.

 

 

import markovify

# Get raw text as strings.
with open("../read_result/db_track0_notes.txt") as fa, \
     open("../read_result/gymn_track0_notes.txt") as fb, \
     open("../read_result/passion_track0_notes.txt") as fc:
    text_a = fa.read()
    text_b = fb.read()
    text_c = fc.read()

# Build a model per song.
model_db = markovify.Text(text_a)
model_gymn = markovify.Text(text_b)
model_passion = markovify.Text(text_c)

# Combine them, weighting Dearly Beloved a little more heavily.
model_combo = markovify.combine([model_db, model_gymn, model_passion], [1.8, 1, 1])

for i in range(5):
    print(model_combo.make_sentence())

 

 

However, I could not specify how long the resulting sentence should be, so I had to run the code multiple times to get chains of roughly the same length. I took the results and pasted them into new text files.
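One way to automate that manual rerunning is to keep calling the generator until the output lands in a target length range. The sentence_of_length helper below is a hypothetical sketch (make_sentence stands in for model_combo.make_sentence, which can return None; markovify also has make_short_sentence(max_chars) to cap the maximum length directly):

```python
def sentence_of_length(make_sentence, lo=50, hi=120, tries=100):
    """Call make_sentence until a result of lo..hi characters appears."""
    for _ in range(tries):
        s = make_sentence()
        if s is not None and lo <= len(s) <= hi:
            return s
    return None

# demo with a stand-in generator instead of a real markovify model
fake_outputs = iter(['60 62', '60 62 64 ' * 10, None, '64 ' * 100])
print(sentence_of_length(lambda: next(fake_outputs)))
```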

 

Also, I was fully aware that these 3 songs run at different tempos and in different keys. Maybe I should have transposed the songs and adjusted the tempo (Gymnopedie No.1's tempo is about half that of the other two), but I decided to just run with it and see what would happen.
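For reference, the transposition step I skipped would have been small: shifting every MIDI note number by a fixed semitone offset moves a track into another key, and doubling or halving durations would roughly compensate for the 2x tempo gap. A sketch (the transpose helper is hypothetical):

```python
def transpose(notes, semitones):
    """Shift every MIDI note number by a fixed number of semitones."""
    return [n + semitones for n in notes]

print(transpose([60, 64, 67], 2))  # C major triad up a whole step: [62, 66, 69]
```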

 

 

4. Generate MIDI

 

For the last part, I modified the sample code provided to:

  • Clean up the data (remove empty spaces, break up chords)

  • Recognize chords and write them into notes playing at the same time and for the same duration

  • Write into two channels from 4 files (notes channel 0, durations channel 0, notes channel 1, durations channel 1).
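The chord-handling step can be sketched like this, under my assumptions about the text format: a generated "sentence" of space-separated note numbers with commas marking chord boundaries, plus one duration per note/chord. Chord members get note_on at time 0 so they sound together; the tuples here are stand-ins for real MIDI messages, and to_events is a hypothetical helper, not the actual modified sample code.

```python
def to_events(note_sentence, dur_sentence):
    """Pair a note sentence with a duration sentence into MIDI-like events."""
    groups = [g.split() for g in note_sentence.split(',') if g.strip()]
    durs = [int(d) for d in dur_sentence.split()]
    events = []
    for notes, dur in zip(groups, durs):
        for n in notes:                # all chord members start together
            events.append(('note_on', int(n), 0))
        for i, n in enumerate(notes):  # first off-event carries the duration
            events.append(('note_off', int(n), dur if i == 0 else 0))
    return events

print(to_events('60, 64 67', '480 480'))
```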

 

 

Results

 

I ran the entire workflow multiple times (maybe too many times) with different combinations, but below is a playlist of some of the more listenable results. They sound really horrible, but you can sort of make out the original tunes. I like how the franken-song starts off sounding like Dearly Beloved and then quickly melts down into gibberish, though you can still catch bits of Passionfruit and Gymnopedie in some parts.

 

 
