I had hoped to complete the pitch-to-tone mapping algorithm by this week, but it has proven a much more involved endeavor than I anticipated. I have had difficulty enumerating the cases I must consider when detecting notes: most importantly, determining when the user starts singing, whether they keep their time alignment throughout the song, and how to handle stretches where they are not singing at all. For much of my development time, I treated this algorithm as if it were simply performing the pitch-to-tone mapping, but in reality there are several aspects of the user's performance that I have had to account for. Most recently, I have found more success in taking a top-down approach and sectioning the algorithm's responsibilities by function.
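As a rough sketch of what I mean by sectioning responsibilities, each question the algorithm must answer (is the user singing? what pitch? what note?) can live in its own function rather than one monolithic loop. The function names and thresholds below are my own illustrative placeholders, not the finished algorithm:

```python
import numpy as np

# Placeholder threshold: below this RMS level, treat a frame as silence.
SILENCE_RMS_THRESHOLD = 0.01

def is_singing(frame: np.ndarray) -> bool:
    """Decide whether the user is producing sound in this frame."""
    rms = np.sqrt(np.mean(frame ** 2))
    return rms > SILENCE_RMS_THRESHOLD

def estimate_pitch(frame: np.ndarray, sample_rate: int) -> float:
    """Estimate the dominant frequency of a frame via its FFT peak."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def pitch_to_midi(freq_hz: float) -> int:
    """Map a frequency in Hz to the nearest MIDI note number."""
    return int(round(69 + 12 * np.log2(freq_hz / 440.0)))

def map_frames(frames, sample_rate):
    """Top-level driver: one decision per frame, None when silent."""
    notes = []
    for frame in frames:
        if not is_singing(frame):
            notes.append(None)
        else:
            notes.append(pitch_to_midi(estimate_pitch(frame, sample_rate)))
    return notes
```

Keeping the silence check, the pitch estimate, and the note mapping separate means each of the tricky cases above can be tested and tuned on its own.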
I am currently trying to finish this algorithm, once and for all, by the end of this weekend, so that my team and I can integrate our respective parts and construct a preliminary working system. I am not sure I will be able to test the algorithm as exhaustively as I should, so I will start with a first round of unit tests on generated pure tones.
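The pure-tone tests might look something like the sketch below: synthesize sine waves at known frequencies and assert that the estimator recovers them within one frequency bin. The `estimate_pitch` stand-in here is a simple FFT-peak estimator standing in for the real algorithm once it is finished:

```python
import unittest
import numpy as np

def estimate_pitch(signal: np.ndarray, sample_rate: int) -> float:
    """Stand-in estimator: dominant FFT bin. The finished
    pitch-to-tone algorithm would be exercised here instead."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def pure_tone(freq_hz: float, duration_s: float, sample_rate: int) -> np.ndarray:
    """Generate a unit-amplitude sine wave at a known frequency."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return np.sin(2 * np.pi * freq_hz * t)

class PureToneTests(unittest.TestCase):
    SAMPLE_RATE = 44100

    def test_known_frequencies(self):
        # A4, C5, E5 -- tones the estimator must recover
        for freq in (440.0, 523.25, 659.25):
            tone = pure_tone(freq, 1.0, self.SAMPLE_RATE)
            estimate = estimate_pitch(tone, self.SAMPLE_RATE)
            # A 1-second window gives 1 Hz bins; allow one bin of error.
            self.assertAlmostEqual(estimate, freq, delta=1.0)

if __name__ == "__main__":
    unittest.main()
```

Tests against clean synthetic tones will not catch the messy cases (late entries, drift, silence), but they should at least pin down the core mapping before integration.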