This week I finished the UI on our website for recording the flute and background separately, and I added the ability to hear a playback of the audio being recorded. The ‘Upload’ and ‘Start Recording’ buttons are mutually exclusive: if a user uploads a recording, the ‘Start Recording’ button is disabled, and if a user presses ‘Start Recording’, the ‘Upload’ button is disabled. Once the user starts recording, they can stop the recording and then replay it or redo it. The user can also adjust and hear the tempo of the metronome, which defaults to 60 BPM.

There are still two modifications I need to figure out:

1. The metronome can’t be heard in the recording, because the recording only captures sound coming externally into the computer, so I am playing around with some APIs I found that can capture both internal and external audio from a computer.
2. I need to adjust the pitch of the metronome to a value outside the range a flute can be played at.

For the Gen AI part, there is a Music Transformer implementation available online that uses the MAESTRO dataset and focuses on piano music. I am thinking of using this instead of creating the process from scratch. I downloaded the code and worked through understanding its different parts, and I was able to take a flute MIDI file and convert it into a format the transformer can use. I want to continue learning and experimenting with this and see if I can fine-tune the model on flute MIDI files.
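The upload/record mutual exclusion described above can be sketched as a small state model. The class and method names here are my own for illustration, not our actual UI code:

```python
# Hypothetical sketch of the upload/record button logic described above;
# names are illustrative, not the real component code.

class RecorderUI:
    def __init__(self):
        self.upload_enabled = True
        self.record_enabled = True
        self.recording = False
        self.has_take = False  # True once a recording exists to replay/redo

    def upload(self):
        assert self.upload_enabled
        self.record_enabled = False  # uploading disables 'Start Recording'

    def start_recording(self):
        assert self.record_enabled
        self.recording = True
        self.upload_enabled = False  # recording disables 'Upload'

    def stop_recording(self):
        assert self.recording
        self.recording = False
        self.has_take = True  # user may now replay or redo the take

    def redo(self):
        assert self.has_take
        self.has_take = False
        self.start_recording()

ui = RecorderUI()
ui.start_recording()
print(ui.upload_enabled)  # False
ui.stop_recording()
print(ui.has_take)  # True
```

Modeling the flow this way makes it easy to check that no sequence of clicks can leave both buttons enabled at once.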
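For the metronome-pitch question above, a quick sanity check helps: a concert flute's range is roughly B3 (MIDI note 59) up to about C7 (MIDI note 96), so a click tone well below ~247 Hz or well above ~2.1 kHz would sit outside it. A minimal sketch, assuming standard equal temperament with A4 = 440 Hz:

```python
def midi_to_hz(note: int) -> float:
    """Equal-temperament conversion, A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

# Approximate concert-flute range: B3 (MIDI 59) up to C7 (MIDI 96).
FLUTE_LOW = midi_to_hz(59)   # ~246.9 Hz
FLUTE_HIGH = midi_to_hz(96)  # ~2093.0 Hz

def is_outside_flute_range(freq_hz: float) -> bool:
    return freq_hz < FLUTE_LOW or freq_hz > FLUTE_HIGH

print(is_outside_flute_range(150.0))   # True  (below B3)
print(is_outside_flute_range(440.0))   # False (inside the flute's range)
print(is_outside_flute_range(4000.0))  # True  (above C7)
```

A click pitched outside this band should also be easier to filter out of the recording later if it does bleed in.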
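The MIDI-to-model-input conversion mentioned above can be illustrated with the performance-event encoding Music Transformer is known for (NOTE_ON / NOTE_OFF / VELOCITY / TIME_SHIFT events). This is a simplified sketch of that idea; the actual repo's vocabulary and bin sizes may differ:

```python
# Simplified sketch of a Music Transformer-style performance encoding;
# bin sizes and event names here are assumptions, not the repo's exact vocab.

def notes_to_events(notes, shift_ms=10, max_shift_ms=1000):
    """notes: list of (pitch, velocity, start_sec, end_sec) tuples."""
    # Flatten notes into timestamped on/off points, keeping a stable order.
    points = []
    for pitch, vel, start, end in notes:
        points.append((start, 0, f"VELOCITY_{vel // 4}"))  # 32 velocity bins
        points.append((start, 1, f"NOTE_ON_{pitch}"))
        points.append((end, 2, f"NOTE_OFF_{pitch}"))
    points.sort()

    events, now = [], 0.0
    for t, _, name in points:
        gap_ms = round((t - now) * 1000)
        while gap_ms > 0:  # emit TIME_SHIFTs in 10 ms steps, capped at 1 s
            step = min(gap_ms, max_shift_ms)
            step = max(shift_ms, (step // shift_ms) * shift_ms)
            events.append(f"TIME_SHIFT_{step}")
            gap_ms -= step
        events.append(name)
        now = t
    return events

# One flute note: middle C (MIDI 60) held from 0.0 s to 0.5 s.
print(notes_to_events([(60, 80, 0.0, 0.5)]))
# ['VELOCITY_20', 'NOTE_ON_60', 'TIME_SHIFT_500', 'NOTE_OFF_60']
```

Since this event vocabulary is instrument-agnostic, flute MIDI should encode the same way piano MIDI does, which is what makes fine-tuning on flute files plausible without changing the model's input format.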