Neeti’s Status Report for 04/04

This week we mostly worked individually on our predetermined parts rather than meeting as a group to make design decisions.

On Monday, we met briefly to discuss our progress on our respective parts.

I spent the majority of this week working on the neural net classifier for the hand gesture recognition portion of manual mode, closely following the tutorial linked in a previous status report. I used a Kaggle dataset of 20,000 hand gesture images captured with a Leap Motion sensor, which I downloaded locally. I traversed the directories and subdirectories to collect all of the image paths in an array, then read and preprocessed the images and assembled the image vectors into a NumPy array. Finally, I used Keras and TensorFlow to train a 2-layer CNN on 16,000 of the images over 5 epochs and tested it on the remaining 4,000. The classifier attained 99% accuracy on the test images.
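The data-loading steps above can be sketched roughly as follows. This is a minimal sketch rather than our exact code: the directory layout, image size, and function names are assumptions, and decoding an image file into a pixel array (which we would do with a library like PIL or OpenCV) is left out.

```python
import pathlib

import numpy as np


def collect_image_paths(root):
    """Walk the dataset tree and collect (path, label) pairs.

    Assumes each gesture's images live in a subdirectory named after
    the gesture, as in the Kaggle leap-motion dataset layout.
    """
    pairs = []
    for path in sorted(pathlib.Path(root).rglob("*.png")):
        label = path.parent.name  # gesture name from the subdirectory
        pairs.append((path, label))
    return pairs


def preprocess(image, num_channels=1):
    """Normalize one decoded grayscale image into a float32 model input.

    `image` is a 2-D uint8 pixel array; producing it from a file on
    disk would be done with e.g. PIL or OpenCV.
    """
    arr = np.asarray(image, dtype=np.float32) / 255.0  # scale to [0, 1]
    return arr.reshape(*arr.shape, num_channels)       # add channel axis
```

The preprocessed images can then be stacked with `np.stack` into one `(N, height, width, 1)` array, which is the input shape Keras expects for a CNN.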

On Wednesday, we discussed the shortcomings of the current model and training: the accuracy was already 99.5% after a single epoch, training took about an hour and a lot of compute, and we realized we did not need to train the model on all 10 hand gestures provided in the dataset. We just need the two we are using to signify the direction of rotation, thumbs up and thumbs down. Gauri also gave me input on how to save the model so we do not need to retrain it every time we restart the device.
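Saving the trained model so it survives a restart can be done with Keras's built-in serialization. A minimal sketch of the pattern, where the tiny stand-in model and the filename are placeholders rather than our real classifier:

```python
import tensorflow as tf

# Stand-in for the trained gesture classifier (placeholder architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Persist the full model (architecture + weights) once after training...
model.save("gesture_model.h5")

# ...then on later restarts, load it instead of retraining.
restored = tf.keras.models.load_model("gesture_model.h5")
```

With this in place, device startup only pays the cost of loading weights from disk instead of an hour of retraining.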

Since Wednesday, I have modified the model to train only on the 4,000 images that pertain to the thumbs up and thumbs down gestures. I am also working on saving the model and creating a preprocessing pipeline for test input from the camera modules.
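Restricting the dataset to the two gestures and massaging a camera frame into the model's input format might look like the sketch below. The label names, frame shape, and crude strided downsampling are assumptions for illustration; the real pipeline would resize properly (e.g. with OpenCV).

```python
import numpy as np

# Only these two gestures signify the direction of rotation
# (assumed label names, mapped to class indices 0 and 1).
KEPT_LABELS = {"thumbs_up": 0, "thumbs_down": 1}


def filter_two_gestures(pairs):
    """Keep only thumbs-up/thumbs-down samples; map labels to 0/1."""
    return [(path, KEPT_LABELS[label])
            for path, label in pairs
            if label in KEPT_LABELS]


def frame_to_input(frame, size=64):
    """Convert one H x W x 3 camera frame into a model input batch.

    Grayscales by averaging channels, crudely downsamples by striding
    (a stand-in for a proper resize), scales to [0, 1], and adds the
    batch and channel axes the classifier expects.
    """
    gray = frame.mean(axis=2)
    step_h = max(1, gray.shape[0] // size)
    step_w = max(1, gray.shape[1] // size)
    small = gray[::step_h, ::step_w][:size, :size]
    x = (small / 255.0).astype(np.float32)
    return x[np.newaxis, ..., np.newaxis]  # shape (1, size, size, 1)
```

The output of `frame_to_input` can be fed straight to the classifier's `predict` call, so the same preprocessing conventions used in training apply to live camera input.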


