Caleb’s Status Report for 10/28

This week I spent time familiarizing myself with how to manipulate audio files within Python, for example, using PyAudio for capture and librosa for analysis tasks such as detecting note onsets and computing chroma vectors. I've also reviewed how to implement harmonic-percussive source separation (HPSS) in Python. This technique is of particular importance because a practice-room environment is very likely to have percussive noise in the background. For instance, during our recording session with Dr. Dueck, a truck driving down the nearby road and someone passing by with a ring of keys jingling both ended up in our recording. Because we want to perform time warping against a clean, noise-free MIDI reference, we want to remove this transient noise first.
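A common way to implement HPSS is median filtering on the spectrogram: harmonic content forms horizontal ridges (steady over time), while percussive content forms vertical ridges (broadband, short-lived). As a minimal sketch of that idea using only NumPy/SciPy (the parameter values here are illustrative, not tuned for our recordings):

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import istft, stft


def hpss(y, fs=22050, nperseg=1024, kernel=17):
    """Split a signal into harmonic and percussive components via median-filter masking."""
    _, _, Z = stft(y, fs=fs, nperseg=nperseg)
    S = np.abs(Z)
    # Harmonic estimate: smooth along time (horizontal ridges survive).
    H = median_filter(S, size=(1, kernel))
    # Percussive estimate: smooth along frequency (vertical ridges survive).
    P = median_filter(S, size=(kernel, 1))
    # Soft (Wiener-style) masks; they sum to ~1, so the split is lossless.
    denom = H**2 + P**2 + 1e-10
    _, y_h = istft(Z * (H**2 / denom), fs=fs, nperseg=nperseg)
    _, y_p = istft(Z * (P**2 / denom), fs=fs, nperseg=nperseg)
    return y_h, y_p
```

For our use case, we would keep `y_h` (the sustained musical content) and discard `y_p`, where key jingles and other transient noise should land.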

The Google board was successfully set up and connected to the internet. However, a new challenge is getting the board to detect the Shure lapel microphone. The board does have a built-in pulse-density modulation (PDM) microphone, but its quality is significantly worse than the lapel microphone's, which would lead to worse time warping. The built-in microphone also lacks the lapel microphone's mobility and cannot be placed in prime locations for picking up breaths and music. This makes getting the board to recognize the lapel microphone a crucial step, and it may require installing additional drivers.
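Since the board runs Linux, one quick way to check whether the system even enumerates the lapel microphone is to read `/proc/asound/cards`, which lists every sound card ALSA knows about. A small parser sketch (the card names in the test are made up for illustration; the actual entries on the board will differ):

```python
import re


def parse_asound_cards(text):
    """Parse /proc/asound/cards-style text into (index, id, description) tuples."""
    pattern = re.compile(r"^\s*(\d+)\s+\[(\w+)\s*\]:\s*(.+)$", re.MULTILINE)
    return [(int(m.group(1)), m.group(2), m.group(3).strip())
            for m in pattern.finditer(text)]


def find_usb_mic(text):
    """Return the first card whose description mentions USB audio, else None."""
    for card in parse_asound_cards(text):
        if "USB" in card[2]:
            return card
    return None
```

If the lapel microphone never shows up here, the problem is at the driver/kernel level rather than in our audio code, which tells us where to focus next.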

This upcoming week, I look forward to continuing to use the sound samples collected in Dr. Dueck's class to experiment with various audio filters. I am also interested in how to take the audio data and turn it into a feature vector that can be used for the ML model. Because the eye-tracking predictions change depending on where the performer is on the page, we want to use this positional data to help the eye tracker make better predictions.
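One standard feature vector for matching audio against a MIDI score is the chroma vector: the magnitude spectrum collapsed into 12 pitch-class bins, which is robust to octave and timbre differences. A minimal NumPy-only sketch of the idea (frame length and normalization are illustrative choices, not final parameters):

```python
import numpy as np


def chroma_vector(y, fs=22050, n_fft=4096):
    """Collapse one frame's magnitude spectrum into a 12-bin pitch-class vector."""
    frame = y[:n_fft]
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), n=n_fft))
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    valid = freqs > 20  # skip DC and sub-audio bins
    # Map each frequency bin to its nearest MIDI pitch, then to a pitch class 0-11.
    midi = 69 + 12 * np.log2(freqs[valid] / 440.0)
    pitch_class = np.round(midi).astype(int) % 12
    chroma = np.zeros(12)
    np.add.at(chroma, pitch_class, spec[valid])
    norm = np.linalg.norm(chroma)
    return chroma / norm if norm > 0 else chroma
```

Computing one such vector per frame gives a 12-dimensional sequence that can be compared directly against chroma vectors synthesized from the MIDI file during time warping.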

We are currently still on track and will continue to work hard to stay on track.
