Team Status Update for 04/25

This week we finished the independent parts of our project, tested them, and worked on getting everything up to the requirements we had set for ourselves (like classifier accuracy, sound, …).

  • Our classifier was working at 95% accuracy on the two gestures, but once we started testing it we realized we also need a way to classify ‘no gesture,’ so we added a dataset of heads.
  • We added multiprocessing for both our audio and image processing, which sped things up a lot (see the first sketch after this list).
  • We experimented with the classifier’s frame rate to find the best balance of speed and accuracy; the current version is the best so far and matches the metrics we set in our design (see the frame-rate sketch after this list).
  • We built baffles for our microphones and played with frequency, which sped up our processing. We plan to test more frequencies in the next few days to see if anything is more optimal (a rough benchmarking sketch is below), but what we have now works fairly well.
  • We worked on the animation, trying to make it more visually intuitive and better at demoing what we have built. Neeti has a good version, and we plan to combine it with a side-by-side video of actual people sitting around a table so our demo video shows how the device actually reacts to speech and gestures.
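Here is a minimal sketch of the multiprocessing idea from the second bullet, assuming the audio and image pipelines are independent Python functions. The names process_audio, process_frames, and the queue plumbing are hypothetical stand-ins, not our actual code:

```python
from multiprocessing import Process, Queue

def process_audio(results):
    # Placeholder: read mic samples, push a direction estimate.
    results.put(("audio", "direction_estimate"))

def process_frames(results):
    # Placeholder: grab camera frames, push a gesture label.
    results.put(("video", "gesture_label"))

if __name__ == "__main__":
    results = Queue()
    workers = [Process(target=process_audio, args=(results,)),
               Process(target=process_frames, args=(results,))]
    for w in workers:
        w.start()
    # The two pipelines now run in parallel instead of back to back.
    for _ in workers:
        print(results.get())
    for w in workers:
        w.join()
```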
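And a hedged sketch of the frame-rate experiment from the third bullet: classify only every Nth camera frame and measure the effective throughput. FRAME_STRIDE and classify() are hypothetical names; the real classifier and capture code are more involved:

```python
import time
import cv2

FRAME_STRIDE = 3  # higher = faster loop, coarser gesture tracking

def classify(frame):
    return "no_gesture"  # placeholder for the real gesture model

cap = cv2.VideoCapture(0)
start = time.time()
frame_idx = 0
while frame_idx < 90:  # short timed run (~3 s at 30 fps)
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % FRAME_STRIDE == 0:
        label = classify(frame)  # only every Nth frame hits the model
    frame_idx += 1
cap.release()
if frame_idx:
    print(f"effective fps: {frame_idx / (time.time() - start):.1f}")
```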
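Finally, a rough sketch of the frequency test from the fourth bullet, assuming “frequency” means the audio sampling rate: record a short clip at each candidate rate and time our processing step. The sounddevice library and locate_speaker() are assumptions here, not what we actually use:

```python
import time
import numpy as np
import sounddevice as sd

def locate_speaker(samples):
    # Placeholder for the real direction-finding computation.
    return float(np.abs(samples).mean())

for rate in (16000, 22050, 44100):  # candidate sampling rates in Hz
    clip = sd.rec(int(rate * 1.0), samplerate=rate, channels=2)
    sd.wait()  # block until the 1-second recording finishes
    start = time.time()
    locate_speaker(clip)
    print(f"{rate} Hz: {time.time() - start:.4f} s to process 1 s of audio")
```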

Right now we are working on our presentation; all that’s left this week is integration and the demo video!


