Author: nganjur

Team Status Update for 05/02

Last week! This week Gauri worked on improving the conditions under which the classifier works well by training it on images with heads behind the hand gestures. However, this reduced the overall accuracy of the classifier and did not provide the …

Neeti’s Status Report for 05/02

Last status report! I’m actually going to miss working on capstone 🙂 This week I worked on putting the final touches on the animation, such as making the changes we talked about last week and adding functionality for mode switches, text, arrows, and more clarity to …

Neeti’s Status Report for 04/25

This week I worked on the animation that will be the output for our final project. The animation represents how the actual device would have worked if circumstances had allowed us to work with the motors, motor HATs, 3D printers, etc.

Working on the animation was more difficult than I initially expected, as our display has several layers as well as different modes and components to control. It has also been interesting to work on because the expectations for it continue to change the more I work on it. I have been using photo editors, drawing software, and pygame to create a realistic simulation of our use case.
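To give a sense of how the pygame piece fits together, here is a minimal sketch of a layered, mode-aware animation loop. The layers, the "m" key for switching modes, and the rotation step are placeholders for illustration, not our actual script.

    # Minimal pygame sketch of a layered, mode-aware animation loop.
    # The layers, the mode-switch key, and the rotation step are placeholders.
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()

    # Layer 1: static background, drawn once and blitted every frame.
    background = pygame.Surface(screen.get_size())
    background.fill((30, 30, 30))

    # Layer 2: the device, on its own transparent surface so it can rotate.
    device = pygame.Surface((120, 120), pygame.SRCALPHA)
    pygame.draw.rect(device, (200, 120, 40), device.get_rect())

    angle = 0.0
    mode = "manual"  # toggled between "manual" and "automatic"

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN and event.key == pygame.K_m:
                mode = "automatic" if mode == "manual" else "manual"

        if mode == "automatic":
            angle = (angle + 1.0) % 360  # placeholder rotation step

        # Composite the layers back to front each frame.
        screen.blit(background, (0, 0))
        rotated = pygame.transform.rotate(device, angle)
        screen.blit(rotated, rotated.get_rect(center=(320, 240)))
        pygame.display.flip()
        clock.tick(30)

    pygame.quit()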

I hope to have a great visual aid to help people understand the use case and the output of our project!

Team Status Update for 04/18

This week we all continued to work on the areas of the project we began working on last week! I worked on collecting, labeling, and organizing real images for the classifier. I tweaked some of the classifier’s parameters and ran it many times with different …

Neeti’s Status Report for 04/18

This week I spent a lot of time working further on the classifier. Throughout the week we received hundreds of images, both from our posts requesting them on various social media platforms and from family and friends. We have now collected over 1200 images …

Neeti’s Status Report for 04/11

This week we spent a lot of time getting our manual mode ready for the demo! This meant that I was primarily working on the hand gesture classifier, Shrutika was working on animation, and Gauri was working on planning and integration.

I spent Monday (04/06) and Tuesday (04/07) writing the pipeline to process real images captured on my laptop and feed them into the CNN. However, late in the evening on Tuesday, Python decided to stop working on my laptop, and after spending a couple of hours trying to reinstall the right version, dependencies, and packages, we decided to run the code on Gauri’s laptop. We spent Tuesday night moving and setting up the code on Gauri’s system, and then realized that the classifier did not work well on real images when trained on the Leap Motion sensor dataset we had been using. We played around with the processing pipeline, the separation algorithm, the choice of gestures, and augmenting the existing dataset with real images. However, none of these changes got the classifier to recognize the correct gesture.
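For context, the real-image half of that pipeline boils down to loading a captured frame, converting and resizing it to match what the network was trained on, and normalizing the pixel values. The sketch below shows the general shape of such a step; the grayscale conversion, 128x128 input size, and [0, 1] scaling are assumptions rather than our exact parameters.

    # Sketch of preprocessing a captured frame before feeding it to the CNN.
    # The grayscale conversion, 128x128 size, and [0, 1] scaling are assumptions.
    import cv2
    import numpy as np

    def preprocess(path, size=(128, 128)):
        img = cv2.imread(path)                         # BGR frame from disk
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # drop color channels
        resized = cv2.resize(gray, size)               # match the training input size
        scaled = resized.astype(np.float32) / 255.0    # normalize pixel values
        return scaled.reshape(1, size[1], size[0], 1)  # batch of one for the CNN

    # batch = preprocess("frame_0001.jpg")
    # prediction = model.predict(batch)  # 'model' is the trained classifier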

On Wednesday (04/08), we worked on combining Shrutika’s Python script, which uses pygame to animate the rotation of the device, with the output of the classifier (trained and tested on the existing dataset). We met with Jens and Professor Sullivan to demo this integrated pipeline and discussed the improvements that needed to be made to the manual subsystem, namely creating a more realistic and stylistically representative animated device and building a new dataset of real images.
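The glue between the classifier and the animation is conceptually just a mapping from the predicted gesture label to a command the pygame script consumes. A hypothetical sketch of that mapping follows; the label names and the 15-degree step are illustrative, not the gestures we actually used.

    # Hypothetical mapping from predicted gesture labels to animation commands.
    # The labels and the 15-degree step are illustrative placeholders.
    GESTURE_TO_COMMAND = {
        "palm": ("rotate", +15),   # rotate clockwise by 15 degrees
        "fist": ("rotate", -15),   # rotate counterclockwise
        "ok":   ("stop", 0),       # hold the current position
    }

    def gesture_to_command(label):
        # Unrecognized or low-confidence labels fall back to a safe no-op.
        return GESTURE_TO_COMMAND.get(label, ("stop", 0))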

On Thursday (04/09), I finally reinstalled Python and the necessary packages and got them working on my laptop again.

On Friday (04/10), Shrutika and I met to work in parallel on different parts of the project. We began collecting images for our dataset on Wednesday, which I have been collating: we now have close to 400 images that I have labeled and organized. Shrutika worked on setting up the Pis on her home network and getting the camera modules working with them.
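For reference, grabbing a still from the Pi’s camera module in Python typically looks something like the sketch below, using the picamera library; the resolution, warm-up delay, and filename are placeholders rather than Shrutika’s actual setup.

    # Minimal still-capture sketch with the picamera library on a Raspberry Pi.
    # The resolution, warm-up delay, and filename are placeholders.
    import time
    from picamera import PiCamera

    camera = PiCamera()
    camera.resolution = (640, 480)
    camera.start_preview()
    time.sleep(2)                        # let the sensor settle before capturing
    camera.capture("gesture_frame.jpg")
    camera.stop_preview()
    camera.close()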

Today (04/11), I completed the ethics assignment and will continue working on the classifier to get it to a reasonable accuracy on the real-image dataset. We hope to have the manual mode subsystem integrated and complete by tomorrow night (04/12)!

Neeti’s Status Report for 04/04

This week we all worked individually on our predetermined parts rather than meeting to discuss decisions. On Monday, we met briefly to discuss our progress on our respective parts. I spent the majority of this week working on the neural net classifier for the hand gestures …
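For a rough idea of what a small gesture CNN looks like, here is a sketch in Keras; the layer sizes, input shape, and number of classes are assumptions for illustration, not our actual architecture.

    # Sketch of a small CNN for hand gesture classification in Keras.
    # Layer sizes, input shape, and the class count are assumptions.
    from tensorflow.keras import layers, models

    def build_classifier(num_classes=5, input_shape=(128, 128, 1)):
        model = models.Sequential([
            layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(64, (3, 3), activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model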

Neeti’s Status Report for 03/28

This week we met on Monday (03/23) to discuss what to do about the microphones, and we finally ordered four of the cheap Adafruit microphones, as only one of our devices will demonstrate automatic mode. I worked on downloading the gesture dataset. We met on …

Neeti’s Status Report for 03/21

This week we met on Monday (03/16) to float ideas for transitioning our project to the new remote setup, such as using an animation instead of a physical platform and motor as the output of our control loop. We then briefly met with Jens to get checked off and to gain some clarity on the new expectations for our project. We learned that as long as we can prove that our software components can control the hardware components as promised, we do not need to actually get hold of some of the hardware parts.

On Wednesday (03/18), we dove further into how we would transition the project and split up the work. We also came up with a list of questions to ask Professor Sullivan in our meeting, about the general direction of the course, the expectations for the SoW document, and the availability of parts and TechSpark.

I spent some time researching guides, models, and datasets for our hand gesture classifier on Thursday (03/19).

On Friday (03/20), we met to discuss the next steps for the classifier and decided we will need to change the gestures used to rotate the device in manual mode so that they fit existing hand gesture datasets. We also did some research on mics, and after consulting Professor Sullivan we decided to go with cheaper omnidirectional microphones with baffles. Finally, we worked on creating our SoW document and will update it with a new Gantt chart and block diagram before we submit it this weekend.

It has been interesting transitioning to Zoom meetings, and I am hopeful that we can still create something cool that we are proud of! 🙂

Team Status Update for 03/21

We had our first Zoom meeting on Monday (03/16)! We used the 10:30-12:30 lab time to begin thinking about how we would transition our project to accommodate the remote nature of the class. We discussed which parts of the project we could keep the …