Rachel’s Status Report for 11/13

This week, I mostly worked on data collection and on adapting our scripts to produce audio output. I found two other modules (gtts and playsound) that can create and play audio files quickly, without a noticeable delay from the user's perspective, so we will use them instead of pyttsx3, which had a very long delay.

I also added handshaking signals between the Arduino and Python programs, which slowed the prediction and output rate to about 0.27 gestures per second, far below our target of 2 gestures per second. While reverting the Arduino script, I noticed it had been sending bare newline characters; the Python script ignored them, but each of those transmissions could have carried actual data instead. After fixing that, our glove can make about 17 predictions per second. I am still working on incorporating the audio properly, so that there is no lag between the streamed-in data and the audio output; for reasons unknown to me at the moment, the handshaking signals I was passing around before are not working. Since the changes we plan to make in the next couple of weeks will not change what the data looks like, I also had my housemates collect data for us to train on. Sketches of the audio and serial pieces follow below.
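To make the audio plan concrete, here is a minimal sketch of how gtts and playsound can fit together. The word list, cache directory, and the pre-generation and threading strategy are illustrative rather than our exact script; only the gTTS and playsound calls themselves are the modules' real APIs.

```python
# Minimal sketch: pre-generate one MP3 per gesture word with gTTS, then
# play clips with playsound on a background thread so the prediction loop
# is not blocked. The word list and cache directory are illustrative.
import os
import threading

from gtts import gTTS
from playsound import playsound

AUDIO_DIR = "audio_cache"  # hypothetical cache directory

def prepare_audio(words):
    """Generate each clip once up front; gTTS makes a network call per word."""
    os.makedirs(AUDIO_DIR, exist_ok=True)
    for word in words:
        path = os.path.join(AUDIO_DIR, f"{word}.mp3")
        if not os.path.exists(path):
            gTTS(text=word, lang="en").save(path)

def speak(word):
    """Play the cached clip without blocking the serial-reading loop."""
    path = os.path.join(AUDIO_DIR, f"{word}.mp3")
    threading.Thread(target=playsound, args=(path,), daemon=True).start()

prepare_audio(["hello", "yes", "no"])
speak("hello")
```

Caching the clips up front is what removes the per-gesture delay: generation happens once, and playback is just reading a local file.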
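For the serial side, here is a rough sketch of what a Python reading loop looks like; the port name, baud rate, and packet format (one comma-separated reading per line) are placeholders rather than our actual settings. The blank-line check shows why the stray newlines were wasteful: each one consumed a transmission that could have carried sensor data.

```python
# Rough sketch of the Python reading loop, assuming the Arduino streams one
# comma-separated sensor reading per line. Port name and baud rate are
# placeholders for whatever the glove actually uses.
import serial

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

while True:
    raw = ser.readline()  # reads up to and including the newline
    line = raw.decode("ascii", errors="ignore").strip()
    if not line:
        # A bare newline uses up a transmission that could have carried
        # real data; this is the waste described above, so just skip it.
        continue
    values = [float(v) for v in line.split(",")]
    # ... feed `values` into the gesture-prediction model ...
```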

Next week, I plan on fully integrating the audio and getting more people to collect data. I will also begin working on the final presentation slides as well as the final report. I would say we are on track, since all that remains is collecting data and training our final model (we are near the end!). We have also ordered a Bluetooth Arduino Nano, which we will swap in for our current Arduino; this will require some changes to the scripts we have been using, but it shouldn't become a blocker for us. One idea for keeping that swap contained is sketched below.
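One way to keep the Bluetooth swap from touching the rest of the code might be to hide the connection behind a small transport class, so only that class changes when the new board arrives. This is just a sketch: the BLE class is a stub, and the real implementation will depend on whichever Bluetooth library we end up using with the new Nano.

```python
# Sketch of isolating the serial transport so the Bluetooth swap only
# touches one class. BleTransport is a stub; the real implementation will
# depend on the Bluetooth library used with the new Nano.
import serial

class UsbTransport:
    """Current USB-serial connection to the Arduino."""
    def __init__(self, port="/dev/ttyACM0", baud=115200):
        self.ser = serial.Serial(port, baud, timeout=1)

    def read_line(self):
        return self.ser.readline().decode("ascii", errors="ignore").strip()

class BleTransport:
    """Placeholder for the Bluetooth Arduino Nano connection."""
    def read_line(self):
        raise NotImplementedError("fill in once the new board arrives")

transport = UsbTransport()  # later: BleTransport(), with no other changes
```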
