Gauri’s Status Report for 04/11

This week we spent time on the gesture classifier and the animation for manual mode, meeting remotely several times. I worked with Neeti on debugging, training, and testing the gesture classifier with different online datasets. We discovered that the Kaggle dataset has a few issues that explain why training on it did not help the classifier recognize real photos of gestures that we took of ourselves and fed in. Although the dataset is large (~4000 images), its images were taken with an infrared camera, look very similar to one another, and all show the same person. As a result, when we fed in photos of our own gestures against a solid background, the classifier always predicted the same gesture.
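For reference, here is a minimal sketch of the kind of sanity check that exposes this behavior: run the trained model over a folder of our own photos and count how often each class is predicted. The model file name, folder name, and 64x64 grayscale input size are assumptions rather than our exact setup.

```python
# Run the trained classifier over a folder of real photos and tally predictions.
# Paths, model file, and input size are assumptions for illustration.
import os
from collections import Counter

import cv2
import numpy as np
from tensorflow.keras.models import load_model

MODEL_PATH = "gesture_cnn.h5"   # assumed saved Keras model
IMAGE_DIR = "real_test_images"  # folder of photos we took ourselves
INPUT_SIZE = (64, 64)           # assumed network input size

model = load_model(MODEL_PATH)
counts = Counter()

for name in os.listdir(IMAGE_DIR):
    img = cv2.imread(os.path.join(IMAGE_DIR, name), cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue
    img = cv2.resize(img, INPUT_SIZE).astype("float32") / 255.0
    img = img.reshape(1, *INPUT_SIZE, 1)          # batch of one grayscale image
    counts[int(np.argmax(model.predict(img)))] += 1

print(counts)  # one dominant class confirms the mismatch with the Kaggle data
```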

We tried changing the gestures to more distinguishable ones (with a clear difference in the number of fingers), switching to “ok” and “L” for right and left. This did not help the classifier trained on the Kaggle dataset. We also tried feeding in x-ray filter images taken with PhotoBooth on the Mac, which did not work either. Another attempt was to train the classifier on an augmented dataset combining the Kaggle images with our own real photos, but the Kaggle images made up a much larger proportion of the data and this did not help. We then tried separating foreground from background in our real images using Otsu binarization, but this was not very effective because the real images varied a lot. So, for demoing manual mode so far, we simply feed in Kaggle images and pass the classifier output to Shrutika’s pygame animation, and that works.
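For context, Otsu binarization picks a global threshold from the image histogram, so it works best when the hand and the background form two well-separated intensity peaks. A rough sketch of the idea is below; the blur kernel size and file names are assumptions, not our exact parameters.

```python
# Sketch of an Otsu-based foreground/background split.
# Blur kernel and test image name are assumptions.
import cv2

def segment_hand(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # smooth noise before thresholding
    # Otsu chooses the threshold automatically from the image histogram
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

mask = segment_hand("ok_gesture.jpg")   # hypothetical test image
cv2.imwrite("ok_gesture_mask.jpg", mask)
```

Because Otsu assumes a roughly bimodal histogram, it tends to break down when the background is cluttered or the lighting changes between photos, which lines up with what we saw on our own images.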

We are now attempting to fix the classifier by training it on a new dataset built from images taken by our friends and family. If this doesn’t work, we may need to change the CNN model we are using. Shrutika verified that the camera and RPi work together, and I am now working on reading the video stream frame by frame from the camera so we can feed frames into the classifier at intervals and send the resulting gesture predictions to the animation.
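A rough sketch of the pipeline I am aiming for is below: grab frames from the camera, classify every Nth frame, and hand the prediction to the animation. The frame interval, model path, input size, and the send_to_animation hook are placeholders rather than the final design.

```python
# Sketch: classify every Nth camera frame and pass the result to the animation.
# Frame interval, model file, input size, and the animation hook are placeholders.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

FRAME_INTERVAL = 15            # classify roughly twice a second at ~30 fps (assumption)
model = load_model("gesture_cnn.h5")
cap = cv2.VideoCapture(0)      # camera exposed as /dev/video0 on the RPi (assumption)

frame_count = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame_count += 1
    if frame_count % FRAME_INTERVAL != 0:
        continue                                   # skip frames between predictions
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (64, 64)).astype("float32") / 255.0
    pred = int(np.argmax(model.predict(resized.reshape(1, 64, 64, 1))))
    # send_to_animation(pred)  # placeholder for handing the gesture to the pygame animation

cap.release()
```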

We had a reality check last night when we realized we have only about 11 days until the final in-class demo. We are striving to accomplish as many of our goals as possible by then; this week will be intense. Shrutika is working on the mics now and will have a better idea of how that is going tonight. By tomorrow night, we want to have manual mode fully integrated and working, and a clearer picture of the mic status for automatic mode.
