Weekly Status Reports

05/05

Found and fixed an issue in the integrated classifier that was adding extra time to the latency for detecting images!!  Sadly this was after the demo… but now we surpass all of our requirements, and the time to detect a gesture has gone down.

Team Status Update for 05/02

Last week! This week Gauri worked on improving the conditions under which the classifier works well by training it on images with heads behind the hand gestures. However, this reduced the overall accuracy of the classifier and did not provide the improvement we were hoping for.

Neeti’s Status Report for 05/02

Last status report! I’m actually going to miss working on capstone 🙂

This week I worked on putting the final touches on the animation, making the changes we talked about last week and adding functionality for mode switches, text, arrows, and more clarity in the visuals. After Shrutika had worked on integration for a while, I helped her debug the integration of the audio and visual input with the animation, and we got manual mode working! We then got automatic mode working. Gauri, Shrutika, and I worked on smoothing out the pipeline from input to animation, tuning things like the frame rate and the speed of the animation screen updates, and trying a couple of approaches to speed up the process. We also made some more modifications to the animation to make it easier for new users. Finally, we are now working on our final video and report.
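The manual/automatic mode switching described above can be sketched as a small state machine. This is an illustrative sketch, not our actual code; the class and method names here are hypothetical:

```python
# Hypothetical sketch of the mode-switch logic: the animation can be
# driven either manually (gesture commands) or automatically
# (audio direction estimates).

class AnimationController:
    MODES = ("manual", "automatic")

    def __init__(self):
        self.mode = "manual"
        self.angle = 0  # direction the simulated device points at, in degrees

    def switch_mode(self):
        # Toggle between manual and automatic on a mode-switch input.
        idx = self.MODES.index(self.mode)
        self.mode = self.MODES[(idx + 1) % len(self.MODES)]

    def update(self, gesture=None, audio_angle=None):
        # In manual mode, left/right gestures nudge the device;
        # in automatic mode, the audio localizer sets the target angle.
        if self.mode == "manual" and gesture == "left":
            self.angle -= 15
        elif self.mode == "manual" and gesture == "right":
            self.angle += 15
        elif self.mode == "automatic" and audio_angle is not None:
            self.angle = audio_angle
        return self.angle
```

Keeping the mode check in one `update` method means the animation loop can pass in whatever inputs are available each frame and let the controller decide which ones matter.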

Shrutika’s Status Report for 05/02

This week was about finalizing everything and filming the video! Finalizing mainly involved building the overall control loop and connecting it to the animation, as if it were connected to an actual device/motors. We had created all of the separate pieces in previous weeks.

Gauri’s Status Report for 05/02

Last status report!  This week we wrapped up all the loose ends of our project and finished integrating everything!  🙂  We also worked on our final presentation slides; I prepared to present since it was my turn this time.  We think it went well.

Team Status Update for 04/25

This week we finished all of the independent parts of our project, tested them, and worked on getting everything to meet the requirements we had set for ourselves (like accuracy for the classifier, sound, …).

  • Our classifier was working at 95% accuracy for the 2 gestures, but once we started testing it we realized that we needed a way to classify ‘no gesture’, so we added a dataset of heads.
  • We added multiprocessing for both our audio and picture processing, which sped it up a LOT.
  • We played with the frame rate on the classifier to figure out the best balance of speed and accuracy, and the current version is the best so far, matching the metrics we specified in our design.
  • We built baffles on our microphones and played with frequency, which sped up our processing. We plan to test at different frequencies in the next few days just to see if anything is more optimal, but what we have now works fairly well.
  • Worked on the animation, trying to make it more visually intuitive and let us best demo what we have built. Neeti has a good version of that, and we plan to combine it with a video of actual people sitting around a table (side by side) so our demo video will show how the device would actually react to speech/gestures.
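The multiprocessing speedup mentioned above can be sketched with Python's standard `multiprocessing` module: the audio and image work run in separate processes and report results back through a shared queue. The worker bodies here are placeholders, not our actual audio/classifier code:

```python
from multiprocessing import Process, Queue

def process_audio(out_q):
    # Placeholder for the audio-localization work; a real worker would
    # read microphone samples in a loop and estimate a direction.
    out_q.put(("audio", 42))  # illustrative direction estimate in degrees

def process_image(out_q):
    # Placeholder for the gesture classifier; a real worker would grab
    # camera frames and run inference on each one.
    out_q.put(("image", "left"))

def run_pipeline():
    q = Queue()
    workers = [Process(target=process_audio, args=(q,)),
               Process(target=process_image, args=(q,))]
    for w in workers:
        w.start()
    # Gather one result from each worker; q.get() blocks until available.
    results = dict(q.get() for _ in workers)
    for w in workers:
        w.join()
    return results

if __name__ == "__main__":
    print(run_pipeline())
```

Running the two pipelines in separate processes (rather than threads) sidesteps the GIL, which is why it helps when both the classifier and the audio processing are CPU-bound.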

Right now we are working on our presentation, and this week all we have left is to integrate and make the demo video!

Shrutika’s Status Report for 04/25

We finished our classifier this week (it works!!!) and tested it on the Pi with the Pi camera. We realized a lot of little things, like how much better the classifier works with a bright light on the person compared to a dim light.

Gauri’s Status Report for 04/25

We basically finished all the independent parts of our project this week!   I worked on a few things: our classifier had 95%+ accuracy at just identifying left vs. right. However, I realized earlier this week that it would classify everything it saw as ‘left’.

Neeti’s Status Report for 04/25

This week I worked on the animation that will be the output of our final project. The animation represents how the actual device would have worked if circumstances had allowed us to work with the motors, motor hats, 3D printers, etc.

Working on the animation was more difficult than I expected initially as we have several layers that are part of our display as well as different modes and components to control. It has also been interesting to work on the animation because the expectations for it continue to change as I work on it more. I have been using photo editors, drawing software, as well as pygame to create a realistic simulation of our use case.
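The layered display mentioned above can be sketched as compositing a list of layers back-to-front, the way pygame blits surfaces onto the screen each frame. This is a pure-Python stand-in (a dict of "pixels"), not our actual pygame code:

```python
# Illustrative sketch of drawing the display as ordered layers, the way
# the pygame animation stacks background, table, people, and device.

def render(layers):
    # Later layers overwrite earlier ones, like blitting pygame
    # surfaces back-to-front onto one screen surface.
    frame = {}
    for name, pixels in layers:
        frame.update(pixels)
    return frame

background = ("background", {(x, y): "grey" for x in range(3) for y in range(3)})
device = ("device", {(1, 1): "blue"})

frame = render([background, device])
```

Keeping each visual element in its own layer makes it easy to redraw only what a mode switch or input event changed, while the draw order stays fixed.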

I hope to have a great visual aid to help people understand the use case and the output of our project!

Team Status Update for 04/18

This week we all continued to work on the areas of the project we began working on last week! I worked on collecting, labeling, and organizing real images for the classifier. I tweaked some parameters for the classifier and ran it many times with different settings.