Status Update 11/11

Chris – The first half of the past week for me was devoted to preparing for the mid-point demo. The three of us spent some time together working out problems we ran into when integrating our previous work. Specifically, I worked with Michael to fix some issues with the key-finding algorithm when integrating it with the machine learning module. Besides that, I spent some time preparing music for the demo; we decided to use some Beatles songs for it. Most of the sources we found online contain multiple tracks, so I had to manually identify which track carries the melody and strip out the rest. Additionally, to get a better sense of music software user interfaces, I did some competitive analysis of existing music-making software, including MusicLab and Soundation. For the coming week, my goal is to design a new user interface for the web app to replace the current one and work with Michael to implement it.
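Rather than stripping tracks by hand every time, this step could be scripted. Below is a rough sketch using the mido library (which tool we actually standardize on is still open); the melody track index is found by listening to the file first and then passed in.

```python
import mido

def keep_melody_track(in_path: str, out_path: str, melody_index: int) -> None:
    """Strip all tracks except note-free meta tracks and the chosen melody track."""
    mid = mido.MidiFile(in_path)
    out = mido.MidiFile(ticks_per_beat=mid.ticks_per_beat)
    for i, track in enumerate(mid.tracks):
        has_notes = any(msg.type == 'note_on' for msg in track)
        # Keep tracks carrying only meta info (tempo, key, time signature)
        # plus the one track we identified as the melody.
        if not has_notes or i == melody_index:
            out.tracks.append(track)
    out.save(out_path)

# Example (hypothetical file names; track 2 held the melody in this file):
# keep_melody_track('yesterday_full.mid', 'yesterday_melody.mid', melody_index=2)
```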

Michael – Besides working with Chris on what was mentioned above, I also spent significant time fixing the bug with some of the more complex MIDI files that was discussed at the mid-point demo. It turned out to be an issue with extended periods of rest, and it is now resolved. The goal for this week is to get the MIDI file with chords to play in the browser and to start looking into using the MIDI keyboard.
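For reference, the failing inputs were files whose tracks contain gaps of several bars with no notes. One quick way to flag such files ahead of time (just a sketch using mido; the actual parser fix lives elsewhere in our code) is to scan for long deltas between note-on events:

```python
import mido

def find_long_rests(path: str, max_rest_beats: float = 8.0) -> list:
    """Return the lengths (in beats) of gaps between notes longer than max_rest_beats."""
    mid = mido.MidiFile(path)
    long_rests = []
    for track in mid.tracks:
        ticks_since_note = 0
        for msg in track:
            ticks_since_note += msg.time  # delta time in ticks
            if msg.type == 'note_on' and msg.velocity > 0:
                rest_beats = ticks_since_note / mid.ticks_per_beat
                if rest_beats > max_rest_beats:
                    long_rests.append(rest_beats)
                ticks_since_note = 0
    return long_rests

# Example (hypothetical file name):
# print(find_long_rests('complex_song.mid'))
```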

Aayush – I spent the first half of the week working on the demo. I was tuning the algorithm, passing in Beatles songs and evaluating the chords it generated. I found that approximately 60% of the chords were very similar to the original, and around 75% sounded like what I would subjectively consider acceptable output. During this I also ran into the bug mentioned above for the first time: MIDI files with multiple bars of rest would crash the output-saving step. My plan for next week is to accumulate data and gather statistics about our chord progressions. I will be trying to answer questions like:

a) What percentage of chords predicted needed to be changed?

b) What percentage sounded good but differed from the original?

c) Collect a list of 10-15 songs which we can use for evaluation (evaluation methods are described in the design doc).

I talked to Chris about starting the frontend for our testing mechanism while I collect the preliminary data for point (c). That should give us a decent testing mechanism to use for evaluation and further improvement.
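To make the answers to (a) and (b) repeatable rather than ear-by-ear judgment, I am thinking of computing the stats from bar-aligned chord labels. A rough sketch of what I have in mind (the chord-label format and the hand-curated set of acceptable substitutions are assumptions at this point):

```python
def chord_match_stats(predicted: list, original: list, acceptable: set = None) -> dict:
    """Compare predicted chords to the original chart, bar by bar.

    `predicted` and `original` are equal-length lists of chord labels
    (e.g. 'C', 'Am', 'G7'); `acceptable` is an optional set of
    (original, predicted) pairs judged to sound fine despite differing.
    """
    assert len(predicted) == len(original)
    total = len(original)
    acceptable = acceptable or set()
    exact = sum(p == o for p, o in zip(predicted, original))
    sounds_ok = sum(p == o or (o, p) in acceptable
                    for p, o in zip(predicted, original))
    return {
        'exact_match_pct': 100.0 * exact / total,
        'acceptable_pct': 100.0 * sounds_ok / total,
        'needs_change_pct': 100.0 * (total - sounds_ok) / total,
    }

# Example on a short excerpt (made-up labels, just to show the input shape):
# chord_match_stats(['C', 'Am', 'F', 'G'], ['C', 'Am', 'Dm', 'G'],
#                   acceptable={('Dm', 'F')})
```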
