This week I implemented the SNR-rejection system for our project. It displays a red alert in the front end of the app if the audio is rejected for being too noisy; if the SNR requirement is satisfied, the audio is processed as usual (see here).
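As a rough illustration of the idea, here is a minimal sketch of an SNR check of the kind the rejection system performs. The function names, the way the noise floor is estimated, and the threshold value are all assumptions for the example, not our actual implementation:

```python
import numpy as np

SNR_THRESHOLD_DB = 10.0  # hypothetical threshold; the real value is set in our config

def estimate_snr_db(signal, noise_estimate):
    """Estimate SNR in dB from a signal segment and a noise-only segment."""
    signal_power = np.mean(np.square(signal))
    noise_power = np.mean(np.square(noise_estimate))
    return 10.0 * np.log10(signal_power / noise_power)

def should_reject(signal, noise_estimate):
    """Return True if the recording is too noisy to process further."""
    return estimate_snr_db(signal, noise_estimate) < SNR_THRESHOLD_DB
```

In the app, a `True` result from a check like this is what triggers the red alert instead of the normal processing pipeline.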
I also focused on testing our systems. I first created a set of monophonic piano recordings in GarageBand. These were initially short and simple so we could see how well our systems work at a basic level; I then made a couple of longer, more complex recordings to test with something more realistic.
I ran these files through our systems, and they seem to work fine in the rhythm and pitch processors but not in the integrator. The first issue was that the integrator was outputting all rests. I realized this happened because we changed the code to normalize the signal before feeding it to the rhythm processor, so I had to lower the parameter that sets the minimum height a peak must reach, since the normalized signal only ranges from 0 to 1.
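To show what that fix looks like in practice, here is a small sketch of peak detection on a normalized amplitude envelope. The `PEAK_HEIGHT` value and function name are hypothetical; the point is just that once the signal lives in [0, 1], the height threshold has to be expressed in that range rather than in raw amplitude units:

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical threshold relative to the normalized range; the old raw-amplitude
# value would sit above 1.0 and would reject every peak, producing all rests.
PEAK_HEIGHT = 0.1

def detect_onsets(envelope):
    """Find peaks in an amplitude envelope after normalizing it to [0, 1]."""
    normalized = (envelope - envelope.min()) / (envelope.max() - envelope.min())
    peaks, _ = find_peaks(normalized, height=PEAK_HEIGHT)
    return peaks
```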
Even after that fix, the integrator is not very accurate right now; some note detections have the wrong pitch. We also discussed with Tom that the way we are currently integrating the systems might not be the best approach, so we may have to re-implement the integration.
This week I will focus on testing, tracking down the issues with the integrator, and finding possible fixes. I think we are on track.