Anita’s Status Report for 04/08

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours). 

This week was the interim demo. It went well! There were some suggestions made during the demo regarding note and pitch detection that I may integrate into my work. Professor Sullivan recommended that we synchronize the pitch the user hears with the note we detect, and that we use a relative note-tracking algorithm instead of an absolute one. I was a little confused during the meeting about what that meant and how I would go about making those changes, so I did some research into it and into how it would improve the quality of our project.
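From my reading so far, the gist is that absolute tracking penalizes a singer for being off-key even when the shape of their melody is correct, while relative tracking compares the jumps between consecutive notes. Here is a rough Python sketch of the difference as I currently understand it (the function names and MIDI-number inputs are placeholders for illustration, not our actual pipeline):

```python
# Absolute vs. relative note comparison. Note sequences are lists of
# MIDI note numbers; everything here is placeholder code.

def absolute_matches(sung, target):
    """Count positions where the sung note equals the target note exactly."""
    return sum(s == t for s, t in zip(sung, target))

def relative_matches(sung, target):
    """Compare intervals (note-to-note jumps) instead of absolute pitches,
    so a singer who is consistently transposed still scores well."""
    sung_intervals = [b - a for a, b in zip(sung, sung[1:])]
    target_intervals = [b - a for a, b in zip(target, target[1:])]
    return sum(s == t for s, t in zip(sung_intervals, target_intervals))

# A singer who sings the whole melody two semitones flat:
target = [60, 62, 64, 65, 67]  # C4 D4 E4 F4 G4
sung = [58, 60, 62, 63, 65]    # same melody, transposed down

print(absolute_matches(sung, target))  # 0 -- absolute tracking is brutal here
print(relative_matches(sung, target))  # 4 -- relative tracking forgives the offset
```

If that understanding is right, relative tracking would be much kinder to singers who can carry a tune but not in the original key, which seems like exactly the kind of user we want to encourage.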

In addition to this, I am finally starting to test the algorithms I've written. First, I tested the absolute note detection algorithm by running it against known frequency-to-note mappings. These are very basic test cases, since every frequency maps perfectly onto its note. In the future I may create harder test cases with slight frequency deviations to see how the note detection algorithm performs, but that is a low-priority task now that I know the basic note detection is working.
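For reference, here is a minimal sketch of what those basic tests look like (this uses the standard MIDI note-number formula; the helper name is a placeholder, and the real code is on GitHub):

```python
import math

# Sanity test for absolute note detection: convert a frequency to the
# nearest MIDI note number and check it against known perfect mappings.

def freq_to_midi(freq):
    return round(69 + 12 * math.log2(freq / 440.0))

# Perfect-pitch test cases (frequency in Hz -> expected MIDI note).
KNOWN = {
    440.00: 69,  # A4
    261.63: 60,  # C4 (middle C)
    329.63: 64,  # E4
    880.00: 81,  # A5
}

for freq, expected in KNOWN.items():
    assert freq_to_midi(freq) == expected, f"{freq} Hz mapped incorrectly"
print("all basic mappings pass")
```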

I was blocked last week because I didn't know what type of data would be fed into my note detection algorithm, but Anna directed me to some of the test data she was using in the meantime (thanks Anna!). She was using a JSON file of dummy data representing a user singing one round of karaoke, containing both her actual singing pitch and the target pitch. When I ran my feedback algorithm on it, she got a score of 69.19. More tests with better/worse singers will be needed to accurately evaluate the performance of my feedback algorithm, so I asked Kelly to send me a copy of her vocals, and I am now running more tests on those. I will make more significant changes to my feedback algorithm if these tests don't go well.
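For the curious, the scoring pass looks roughly like this (the JSON field names, the file name, and the 50-cent tolerance are all assumptions I made for this sketch, since we haven't locked down the real format):

```python
import json
import math

# Sketch of the feedback scoring pass, assuming a dummy-data format like
# Anna's: a list of frames, each holding the sung pitch and the target
# pitch in Hz. Field names and tolerance are assumptions, not our spec.

def score_round(frames, tolerance_cents=50):
    in_tune = scored = 0
    for frame in frames:
        sung, target = frame["sung_hz"], frame["target_hz"]
        if sung <= 0 or target <= 0:  # skip silent / unvoiced frames
            continue
        cents_off = abs(1200 * math.log2(sung / target))
        scored += 1
        if cents_off <= tolerance_cents:
            in_tune += 1
    return 100.0 * in_tune / scored if scored else 0.0

with open("dummy_round.json") as f:  # hypothetical file name
    frames = json.load(f)
print(f"score: {score_round(frames):.2f}")
```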

This still assumes the input data format, but I think it is better to move forward with that assumption than to sit idle. I'm not sure if this WordPress site can take any more photos due to storage limitations, but you can see the code progress on GitHub.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule? 

On track… but just barely… hanging in there… 

What deliverables do you hope to complete in the next week? 

More thorough testing of the feedback algorithm, as well as more research into relative pitch-tracking algorithm shenanigans.

Now that you are entering the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

Oh, I kind of already addressed this in the first question. Most of the use case requirements concern latency, but the ones my part of the project affects are note accuracy and user satisfaction. I've already run basic tests on note accuracy, but I will need to do more to make sure that the scoring system is fair and not too discouraging. I will probably need to read up on the psychology of effective feedback and grading mechanisms. Once I have that information, I can take more concrete steps toward hitting the use case requirement for user satisfaction.
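As a starting point, here is a toy sketch of one grading idea I might try, where raw pitch accuracy is passed through a gentle curve so low scores feel less punishing (the curve shape and exponent are completely made up and would need tuning against real vocals):

```python
# Toy score-curving idea: compress the low end so beginners still see an
# encouraging number, while good singers remain differentiated. The
# exponent is an arbitrary placeholder to tune against real test data.

def curved_score(raw_accuracy, gentleness=0.6):
    """Map a raw accuracy in [0, 100] to a curved score in [0, 100]."""
    return 100.0 * (raw_accuracy / 100.0) ** gentleness

for raw in (30, 50, 69.19, 90):
    print(raw, "->", round(curved_score(raw), 1))
# 30 -> 48.6, 50 -> 66.0, 69.19 -> 80.2, 90 -> 93.9
```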


