Team Status Update for 05/02

Last week!

This week Gauri worked on making the classifier more robust by training it on images with heads behind the hand gestures. However, this reduced the classifier's overall accuracy and did not give us the result we wanted. After switching to taking four predictions, instead of two, into account before sending a command to the animation, and after testing in dimmer lighting, the classifier now works without needing a blank screen behind the hand gesture!
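The four-prediction smoothing could look something like the sketch below: buffer the most recent classifier outputs and only emit a command when the whole window agrees. This is a hypothetical illustration of the idea, not our actual code; `GestureSmoother` and the window size are made-up names.

```python
from collections import Counter, deque

class GestureSmoother:
    """Buffer recent classifier predictions; only emit a command when the
    last `window` predictions all agree (illustrative sketch)."""

    def __init__(self, window=4):
        self.recent = deque(maxlen=window)

    def update(self, prediction):
        """Add a new prediction; return a command label only when the whole
        window agrees, otherwise None."""
        self.recent.append(prediction)
        if len(self.recent) == self.recent.maxlen:
            label, count = Counter(self.recent).most_common(1)[0]
            if count == self.recent.maxlen:
                return label
        return None

smoother = GestureSmoother()
cmd = None
for p in ["left", "left", "left", "left"]:
    cmd = smoother.update(p)
print(cmd)  # "left" once four consecutive predictions agree
```

Requiring agreement across a wider window trades a little latency for far fewer spurious commands, which matches why two predictions were not enough.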

Shrutika had her hands full integrating the animation with the classifier/camera prediction as well as the audio input code. This required her to write a client-server socket connection between the Pi and her laptop, where the animation plays. She also implemented a simple control loop that continuously listens for input and then triggers the appropriate animation outcome. She continued to play around with the baffles and is now working on the demo video, since the hardware is at her house and she has friends lol.
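The Pi-to-laptop link plus control loop could be sketched roughly like this: the laptop accepts a TCP connection and loops over newline-delimited commands, mapping each one to an animation outcome. The host, port handling, command names, and dispatch table here are all illustrative placeholders, not our real configuration.

```python
import socket
import threading

HOST = "127.0.0.1"  # placeholder; the real setup connects Pi -> laptop

def handle_command(cmd, animations):
    """Control-loop body: map an incoming command to an animation outcome."""
    return animations.get(cmd, "idle")

def serve(srv, results):
    """Laptop side: accept one connection, then continuously listen for
    newline-delimited commands until the Pi closes the connection."""
    animations = {"wave": "rotate", "point": "arrow"}  # made-up dispatch table
    conn, _ = srv.accept()
    with conn, conn.makefile("r") as stream:
        for line in stream:
            results.append(handle_command(line.strip(), animations))
    srv.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

results = []
server = threading.Thread(target=serve, args=(srv, results))
server.start()

# Pi side: connect and send gesture commands, one per line.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, port))
    cli.sendall(b"wave\npoint\nmystery\n")
server.join()
print(results)  # ['rotate', 'arrow', 'idle']
```

Keeping the dispatch in a plain dictionary makes it easy to add new gestures without touching the socket plumbing.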

I worked on adding more functionality to the animation, such as switching between the start screen and the manual and automatic modes, arrows that point to the person in automatic mode, and text to indicate which mic is being spoken into. I also played around with tkinter because our initial animation was buggy, but Gauri was able to fix our major bug (blurriness of the rotating comovo)!
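The mode switching boils down to a small state machine, which a sketch like the one below captures without any tkinter details (class and attribute names here are illustrative, not our actual code):

```python
# Hypothetical sketch of the screen/mode switching described above, kept
# separate from tkinter so the state logic is easy to follow.
MODES = ("start", "manual", "automatic")

class AnimationState:
    def __init__(self):
        self.mode = "start"
        self.arrow_target = None  # which person the arrow points at (automatic only)
        self.mic_label = ""       # text shown when a mic is being spoken into

    def switch(self, mode):
        if mode not in MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode
        if mode != "automatic":
            self.arrow_target = None  # arrows only apply in automatic mode

    def point_at(self, person):
        if self.mode == "automatic":
            self.arrow_target = person

state = AnimationState()
state.switch("automatic")
state.point_at(2)
print(state.mode, state.arrow_target)  # automatic 2
```

Separating this state from the drawing code means the tkinter layer only has to redraw whatever `AnimationState` currently says, which keeps GUI bugs away from the mode logic.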

Gauri, Shrutika, and I all worked on debugging the control loop and fine-tuning the final project! We are now done and will finish the video and the final report. It's been a great (albeit weird) semester. Thanks!
