Met with Professor Stern to discuss the motivation behind MFCCs and their applications to speech detection. So far, mean squared error has not been a reliable indicator of similarity between two audio samples, so he recommended that we look into dynamic time warping (DTW). Vyas told us that DTW might extend past the scope of our project in terms of capturing every possible utterance of “Hey Siri”, but it could be useful if MFCCs continue to prove unhelpful.
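As a reference for what DTW buys us over mean squared error, here is a minimal sketch of the classic dynamic-programming formulation over scalar sequences. In our pipeline it would run over sequences of MFCC frame vectors rather than raw scalars; the function name and interface here are illustrative, not part of our codebase.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D sequences.

    Unlike MSE, DTW allows one sequence to be locally stretched or
    compressed in time, so a slower utterance can still align with a
    faster one at low cost.
    """
    n, m = len(a), len(b)
    # cost[i, j] = best accumulated cost aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

For example, `dtw_distance([0, 1, 2, 3], [0, 1, 1, 2, 3])` is 0 because the repeated sample can be absorbed by the warping path, whereas a sample-by-sample error metric would penalize the time stretch.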
Worked on designing our in-lab demo end to end. We are investigating bash scripting to handle time synchronization, since research into system time sync through Python has been unfruitful.
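One direction a bash approach could take is simply logging a high-resolution wall-clock timestamp when each capture starts, so recordings from different machines can be aligned offline. This is a hypothetical sketch, not a decided design; the log filename and format are made up for illustration.

```shell
#!/bin/sh
# Hypothetical sketch: record when a capture starts so that recordings
# from multiple devices can be aligned afterwards.
start=$(date +%s.%N)            # GNU date; seconds.nanoseconds since epoch
echo "capture_start ${start}" >> capture.log
cat capture.log
```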