This past week, I spent countless hours trying to get OpenPose running on AWS. After about 3 full days, 8 different images, and 5 different machines, I finally got OpenPose working on Ubuntu 16. It sucks that it works on nothing else, but Ubuntu 16 works, so I guess it's all good. With that, I was able to process a lot of training images really quickly, thanks to the GPU on the AWS EC2 P2 instances, and enlarge the training data to about 5000 samples. I'm training on more data this week and implementing a confusion matrix to analyze some of the misclassifications.
One thing I found about the speedup: we want OpenPose to run as fast as possible, and it runs faster when the image quality is lower. An image of about 200 KB runs in roughly 2 seconds on AWS, whereas a 2 MB image takes about 14 seconds. Because of this, we will use image compression to feed in lower-quality images and boost our speed.
This week I got my computer back and life is much better. I have been able to make progress on redeploying the Django web application to AWS. It is still proving to be a little tricky.
As for the OpenCV hand recognition and NN classifier, I tested the performance and realized that simply normalizing the image had very poor accuracy (<60 percent), and as a result the classifier would also be very bad. Thus, given that Sung's SVM is some good stuff, we have decided to abandon the secondary classifier.
Instead, I am now using OpenCV to generate different lighting conditions from our test images. Using gamma correction, we can artificially simulate different lighting conditions to test our classifier. I am also experimenting with OpenCV image resizing, since smaller images give better OpenPose performance, to see how small we can go before it stops working.
Next week, I plan to finally wrap up the web application deployment and hopefully smash my old computer for Marios.
Sung is doing well this week. The parts are ready for more extensive testing and the demo. OpenPose on AWS with a GPU runs a lot faster, and now we have more data and thus more accurate training. The OpenCV classifier is a lost cause, so goodbye OpenCV classifier.
The presentation will be this week (good luck Sung)! We should have a lot of elements filmed by the end of this week for the video.
This week, a lot less was done because all of our classes have assignments due now. I have generated three videos for testing, and we will be getting results this week for the presentation and then the demo.
Next week, we will have our presentation. More extensive testing will be done and the demo video will be filmed.
I finished generating some additional random inputs, using mostly our makeshift ASL. I compiled the most popular search queries from the last 12 months based on Google Trends, using the keywords who, what, when, where, why, and how. Then, I removed duplicate queries. These queries can now be included in the list of randomized normal ones. Here is a gif showing the question “what is a vsco girl” with all the appropriate spaces.
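The dedup-and-randomize step is simple; a sketch of it in Python (these query strings are illustrative, not the actual compiled list):

```python
import random

# A few illustrative Google Trends-style queries (not the real compiled list).
queries = [
    "what is a vsco girl",
    "how to tie a tie",
    "what is a vsco girl",  # duplicate to drop
    "where is area 51",
]

# Drop duplicates while keeping first-seen order, then shuffle the survivors
# into the pool of randomized inputs.
unique = list(dict.fromkeys(queries))
random.shuffle(unique)
```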
(Never mind, it doesn’t seem like gifs work. Try this link.)
Aside from that, I was also able to get the camera working and set up, as per the Wednesday check-in.
Next week, I want to make sure that we have all the AWS instances set up and running in conjunction with the Nano.
This week we continued working on our individual portions, but we began focusing more heavily on setting up more EC2 instances and checking the performance of the single-GPU instance. Our main limitation right now is a lack of training data; with these set up, we can hopefully have up to 6 instances running to greatly speed up running OpenPose on the gesture images we have collected, growing the training data well beyond the current ~400 samples. This should greatly improve our classifier performance.
Unfortunately, my new computer has still not arrived and has been delayed until Monday, so I’ve been continuing to work on my Jetson Nano. I have kept working on the OpenCV aspect of the project, both by SSHing into Andrew machines and by using local machines to test real speed on the Nano.
In addition, I am setting up my own AWS account so we can have more EC2 instances running to generate training data faster.
This week, we were able to demo our project. Jeff demoed the web application portion of the project, as well as image cropping to normalize images for a 2D convolutional neural network. Claire demoed the hardware and the scripts to test our project, and Sung demoed the OpenPose + SVM classifier. Sung’s portion of the demo did not work as well as we expected, so we will be fixing that this week, along with fixing up more of the demo, working on the 2D convolutional neural network, and getting AWS working.
We also worked on the ethics assignment and are now big ethics bois.
Also happy birthday Emily!!!!