Here is a link to our design report: https://docs.google.com/document/d/1AmPFfLT10oq7eKXW-1Z7OjMkhDm0K1LW/edit?usp=sharing&ouid=113756577354127239169&rtpof=true&sd=true
Team status report for 2/22
This week we spent a lot of time developing on the device, now that we have the Raspberry Pi on hand. David researched how we are going to begin integrating our other hardware components, like the speaker and microphone, into our solution. Kemdi worked on allowing the device to communicate with first-party hosted models. Meanwhile, Justin continued to lead the effort on developing the website, in line with the strategic pivot we made a couple of weeks ago. After that pivot, we don't foresee any more changes to our plan. The main risk right now is integrating our hardware with our software properly. This will likely be the most time-consuming task.
Kemdi Emegwa’s Status Report for 2/22
This past week, I spent some time testing the code I wrote last week, which allows the device to send queries to a model hosted by a third party. Additionally, I started writing code that lets the device target a first-party model hosted in a Docker container. We are currently ahead of schedule, and I will spend this next week finalizing the above code.
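A rough sketch of what targeting the first-party model could look like from the device, assuming the Docker container exposes a simple JSON-over-HTTP endpoint (the URL, port, path, and field names below are placeholders, not our final API):

```python
import json
import urllib.request

# Hypothetical endpoint: the port, path, and JSON shape are placeholders
# for whatever the Dockerized model server ends up exposing.
FIRST_PARTY_URL = "http://localhost:5000/generate"

def build_payload(prompt: str) -> bytes:
    """Encode a prompt as the JSON body the model server expects."""
    return json.dumps({"prompt": prompt}).encode("utf-8")

def query_first_party(prompt: str, timeout: float = 30.0) -> str:
    """POST a prompt to the locally hosted model and return its reply."""
    req = urllib.request.Request(
        FIRST_PARTY_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"]
```

Because the container runs on our own hardware, the device can reach it over localhost or the local network without any third-party credentials.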
Kemdi Emegwa’s Status Report for 2/15
I mainly spent this week writing the code for the query workflow. We now have preliminary code that uses the CMU PocketSphinx speech-to-text engine and sends the transcribed query from the device to the server hosting the model. The device then receives the query result and uses eSpeak to output it through the speaker. I am currently on track; next I will work on making this more robust and will start on dockerizing the model.
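The glue between those steps can be sketched roughly as follows, assuming the server accepts a JSON POST and eSpeak is invoked as a command-line tool (the server address and JSON field names are placeholders, not our actual configuration):

```python
import json
import subprocess
import urllib.request

# Hypothetical server address; the real host/port will come from our config.
SERVER_URL = "http://192.168.1.10:8000/query"

def build_request(text: str) -> urllib.request.Request:
    """Package a transcribed utterance as a JSON POST to the model server."""
    body = json.dumps({"query": text}).encode("utf-8")
    return urllib.request.Request(
        SERVER_URL, data=body, headers={"Content-Type": "application/json"}
    )

def speak(text: str) -> None:
    """Read the result aloud through the speaker via the espeak CLI."""
    subprocess.run(["espeak", text], check=True)

def handle_utterance(text: str) -> None:
    """Send one PocketSphinx transcription to the server and voice the reply."""
    with urllib.request.urlopen(build_request(text)) as resp:
        result = json.loads(resp.read())["result"]
    speak(result)
```

Here `handle_utterance` would be called with each phrase PocketSphinx decodes from the microphone.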
Kemdi Emegwa’s Status Report for 2/8
This week I primarily researched the mechanisms we are going to use for text-to-speech/speech-to-text. Python has many established libraries for this purpose, but we have the added constraint that whatever model we use must run directly on the Raspberry Pi 4. I was able to find a lightweight, open-source model called PocketSphinx, which was developed by CMU researchers. It should work well for our use case because the model is small enough to run locally on limited hardware. We are currently on schedule, and in the upcoming week I plan to finish the Python code for the Raspberry Pi so we can start using speech-to-text on the device.
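A minimal sketch of what on-device recognition might look like, assuming the `pocketsphinx` Python package and a default microphone (the import is deferred so the pure text-handling helper can be exercised without the package or audio hardware):

```python
def normalize_hypothesis(text: str) -> str:
    """Collapse whitespace and lowercase a decoder hypothesis so downstream
    code sees consistent query text."""
    return " ".join(text.split()).lower()

def listen_forever() -> None:
    """Print each phrase PocketSphinx decodes from the default microphone."""
    # Deferred import: only needed when actually listening on the Pi.
    from pocketsphinx import LiveSpeech

    for phrase in LiveSpeech():  # blocks, yielding one decoded phrase at a time
        print(normalize_hypothesis(str(phrase)))
```

On the Pi, `listen_forever` would run as the device's main loop, handing each normalized phrase to the query workflow.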