Ellen’s Status Report for March 6

This week I did a real mishmash of stuff for the project. I finished the design presentation slides, prepared my remarks, and ran through the presentation for the team to get feedback.

I finished coding the transcript generator, minus speaker ID. This included a multithreaded section (so I had to learn about Python threads), audio preprocessing (downsampling to the rate the models expect, with lowpass filtering to avoid aliasing), and going back through my previous code to make the shared data structure accesses thread-safe.
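
For the curious, here's roughly what the preprocessing step looks like. This is a minimal sketch using scipy's resample_poly, which applies an anti-aliasing lowpass filter before decimating; the 16 kHz target rate and the preprocess name are illustrative, not our exact code.

```python
import numpy as np
from math import gcd
from scipy.io import wavfile
from scipy.signal import resample_poly

TARGET_RATE = 16000  # sample rate the speech-to-text models expect (illustrative)

def preprocess(path):
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:              # mix a multichannel clip down to mono
        samples = samples.mean(axis=1)
    # resample_poly lowpass-filters before decimating, so it covers both
    # preprocessing steps (filtering and downsampling) in one call
    g = gcd(TARGET_RATE, rate)
    resampled = resample_poly(samples, TARGET_RATE // g, rate // g)
    return resampled.astype(np.int16)
```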

Since we received the ReSpeaker mic in the mail, Cambrea recorded some audio clips on it and sent them to me so I could test the two speech-to-text models I had implemented in the transcript generator. DeepSpeech's performance was okay: it made a lot of spelling errors, and sometimes it was easy to tell what the word ought to have been, sometimes not so easy. (If we decide to go with DeepSpeech, a spelling-correction postprocessing pass might help us achieve better results!) CMU PocketSphinx's output was pretty much gibberish, unfortunately. Where DeepSpeech's approach was to emulate the sounds it heard, PocketSphinx basically tried to map every syllable to an English word, which didn't work out in our favor.

Since PocketSphinx is basically ruled out, I'm going to try adding Google Cloud Speech-to-Text to the transcript generator. The setup will be a bit tricky because it requires setting up credentials.
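
For reference, here's roughly the shape I expect the integration to take. This is a minimal sketch assuming the google-cloud-speech Python client and a service-account key exported through the GOOGLE_APPLICATION_CREDENTIALS environment variable; the transcribe name, file handling, and config values are illustrative.

```python
# Minimal sketch of a Google Cloud Speech-to-Text call, assuming the
# google-cloud-speech client library and a service-account key pointed
# to by the GOOGLE_APPLICATION_CREDENTIALS environment variable.
from google.cloud import speech

def transcribe(path):
    client = speech.SpeechClient()  # picks up credentials from the environment
    with open(path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,    # matches the preprocessed clips (assumption)
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    # each result carries ranked alternatives; take the top transcript
    return " ".join(r.alternatives[0].transcript for r in response.results)
```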

So far I haven't fallen behind schedule, but what's interesting is that some past tasks (like integrating speech-to-text models) aren't actually in the rearview mirror; they need ongoing development as we run new tests. I anticipated this, though, and I think I have enough slack time built into my personal task schedule to handle this looking-backwards as well as working-forwards.

This week my new task is an initial version of speaker ID. This version does not use ML, does not know the number or identity of the speakers, and assumes speakers do not move (a rough sketch of the idea is below); later it will become the basis of the direction-of-arrival augmentation of the ML speaker ID. I'm also giving the design presentation this week and working more on the design report. Google Speech-to-Text integration doesn't have to be totally done by the end of next week, but I can't let it sit still either; I'll have made some progress on it by the next status report.
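
To make the speaker ID idea concrete, here's a rough sketch. It assumes each utterance arrives with a direction-of-arrival angle in degrees; the class name and the 15-degree tolerance are made up for illustration.

```python
# Rough sketch of non-ML speaker ID from direction-of-arrival angles,
# assuming stationary speakers: a new angle either matches a known
# speaker (within a tolerance) or defines a new one.
TOLERANCE_DEG = 15.0  # illustrative; would need tuning on real clips

class AngleSpeakerID:
    def __init__(self):
        self.speaker_angles = []  # one representative angle per speaker

    def label(self, angle):
        for i, known in enumerate(self.speaker_angles):
            # compare on the circle so 359 and 1 degrees count as close
            diff = abs((angle - known + 180) % 360 - 180)
            if diff <= TOLERANCE_DEG:
                return i          # matched an existing speaker
        self.speaker_angles.append(angle)
        return len(self.speaker_angles) - 1  # first utterance from this direction
```

Because every unmatched angle just starts a new label, this handles an unknown number of speakers with no training data, which is exactly the constraint for this first version.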
