Vanessa’s Status 3/30

I continued working on the performance score evaluator from last week. The following are the problems I fixed:

1. Wrong time signature

When I recorded the user’s performance and saved it as a MIDI file, it had a different time signature than the original MusicXML file. I tried to manually change the time signature while parsing the MIDI file with the Music21 library, but the process was too complicated. Instead, I found a way to set the time signature in GarageBand when recording the user’s performance, which removes this complicated step entirely.

2. Missing notes

The algorithm I wrote last week compared two notes/rests lists (one parsed from the MusicXML, one from the MIDI). This works well in most cases except one: missing notes. If the user skips some notes, the comparison cannot tell whether the user missed a note or played a wrong note. There are two cases: (a) if the user missed a note, the comparison should skip that one note in the original notes list; (b) if the user played a wrong note, it should keep comparing the two lists without skipping anything. Since there is no way to determine which case applies while comparing the notes one by one, I first tried running both cases and taking the maximum score. This works correctly but takes a very long time (several minutes), because it recursively calls the function again and again until it reaches the end of the list. After trying various approaches, I decided to write a new algorithm that compares each note’s offset. The offset of a note is simply its position within the piece (the list of notes), so by comparing this value I can easily determine whether the user played a wrong note or missed a note.
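
A simplified sketch of the offset-based idea in plain Python (not the actual implementation; I represent notes as `(offset, pitch)` tuples rather than Music21 objects, and the function name is my own):

```python
def compare_by_offset(original, played):
    """Compare two lists of (offset, pitch) tuples.

    A matching offset with a different pitch counts as a wrong note;
    an offset from the original that is absent from the performance
    counts as a missed note. Returns (correct, wrong, missed) counts.
    """
    played_by_offset = {offset: pitch for offset, pitch in played}
    correct = wrong = missed = 0
    for offset, pitch in original:
        if offset not in played_by_offset:
            missed += 1    # nothing was played at this position
        elif played_by_offset[offset] == pitch:
            correct += 1   # right note at the right position
        else:
            wrong += 1     # something was played, but the wrong pitch
    return correct, wrong, missed

original = [(0.0, "C4"), (1.0, "D4"), (2.0, "E4"), (3.0, "F4")]
played   = [(0.0, "C4"), (1.0, "D4"), (3.0, "G4")]  # skipped E4, wrong F4
print(compare_by_offset(original, played))  # → (2, 1, 1)
```

Because each original note is looked up by its position rather than matched sequentially, a skipped note no longer desynchronizes the rest of the comparison, which is what made the recursive approach so slow.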

3. Rests at the beginning of the recorded MIDI file

It takes time for the LED matrix to light up after clicking the record button in GarageBand, which creates rests at the beginning of the recorded user performance. These rests shift the user’s notes/rests list to the right, making it harder to compare with the original notes/rests list. Moreover, the function would treat the unnecessary leading rests as wrong or missed notes, leading to unfair deductions. To fix this, while converting the Stream data structure (Music21’s internal data structure) to a notes/rests list, I ignore rests until the first note appears and store that note’s offset. This offset is then used while comparing the two notes/rests lists to compute the relative positions of the rests and notes in the piece.
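
The leading-rest trimming can be sketched like this in plain Python (again a simplified stand-in for the Music21 Stream conversion; events are `(offset, kind, pitch)` tuples and the function name is hypothetical):

```python
def strip_leading_rests(events):
    """Drop rests before the first note.

    events: list of (offset, kind, pitch) tuples sorted by offset,
    where kind is "note" or "rest". Returns (trimmed_events,
    first_note_offset), with offsets re-based so the first note
    starts at 0.0 -- this makes the performance comparable to the
    original piece regardless of the recording delay.
    """
    for i, (offset, kind, _pitch) in enumerate(events):
        if kind == "note":
            rebased = [(off - offset, k, p) for off, k, p in events[i:]]
            return rebased, offset
    return [], None  # recording contained no notes at all

recorded = [(0.0, "rest", None), (1.5, "rest", None),
            (2.0, "note", "C4"), (3.0, "note", "D4")]
trimmed, first = strip_leading_rests(recorded)
# trimmed → [(0.0, "note", "C4"), (1.0, "note", "D4")], first → 2.0
```

Rests that occur *after* the first note are kept, since those are part of the performance rather than recording-start delay.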

There are still several problems to fix:

  1. Comparing two files with different BPMs
  2. Keyboard pitch control (pitch is off by half a step)
  3. Automating the recording pipeline: record, store as an AIF file, convert to a MIDI file

which I will continue working on next week.

Our team also worked on integration this week for next week’s demo. We can now convert Lizzy’s OMR output to MusicXML through the MusicXML converter and send it to the Raspberry Pi using a Python script. We have updated our Gantt chart, and I am currently on schedule; we will continue fixing bugs and working on integration next week after our demo on Monday.

I also wrote the update to my ethics assignment after our discussion on Monday.
