Sung’s Status Report for 04/25

HI EVERYONE

This past week, I spent countless hours trying to get OpenPose running on AWS. After about 3 full days, 8 different machine images, and 5 different machines, I finally got OpenPose working on Ubuntu 16. It sucks that it works on nothing else, but Ubuntu 16 works, so I guess it’s all good. With that, I was able to process a lot of training images really quickly, thanks to the GPU on the AWS EC2 P2 instances, and I enlarged the training data to about 5000 samples. I’m training on more data this week and implementing a confusion matrix to analyze the remaining misclassifications.
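As a rough sketch of the confusion-matrix step (the gesture labels and counts here are invented for illustration, not our actual data):

```python
# Hedged sketch: tally predicted vs. true gesture labels into a
# confusion matrix so misclassifications stand out. Labels are made up.
def confusion_matrix(y_true, y_pred, labels):
    """Return counts[true_label][predicted_label] = number of samples."""
    counts = {t: {p: 0 for p in labels} for t in labels}
    for t, p in zip(y_true, y_pred):
        counts[t][p] += 1
    return counts

y_true = ["fist", "fist", "palm", "palm", "peace"]
y_pred = ["fist", "palm", "palm", "palm", "peace"]
cm = confusion_matrix(y_true, y_pred, labels=["fist", "palm", "peace"])
print(cm["fist"]["palm"])  # → 1  (one fist misread as palm)
```

Reading down a row shows where a gesture’s samples actually went, which is exactly the view we want for debugging the classifier.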

One thing I found about the speedup: we want OpenPose to run as fast as possible, and it runs faster when the input image quality is lower. An image of about 200 KB runs in 2 seconds on AWS, whereas a 2 MB image takes 14 seconds. Because of this, we will use image compression to feed OpenPose lower-quality images and boost our speedup.
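A back-of-the-envelope sketch of how much we’d have to shrink an image to hit a size budget (the resolution and budget numbers are illustrative; file size is assumed to scale roughly with pixel count):

```python
# Hedged sketch: estimate the linear downscale factor needed to bring an
# image under a target file-size budget before sending it to OpenPose.
# Since file size scales roughly with pixel count, the linear factor is
# about sqrt(target_bytes / current_bytes). Numbers are illustrative.
import math

def downscale_factor(current_bytes, target_bytes):
    if current_bytes <= target_bytes:
        return 1.0  # already small enough
    return math.sqrt(target_bytes / current_bytes)

# A 2 MB image with a 200 KB budget, starting at 1920x1080:
f = downscale_factor(2_000_000, 200_000)
new_w, new_h = int(1920 * f), int(1080 * f)
print(round(f, 3), new_w, new_h)  # → 0.316 607 341
```

So a 10x size reduction only costs about a 3x reduction in each linear dimension, which is why compression looks like a cheap win here.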

Team Status Report For 04/11

Hello,

This week, we were able to demo our project. Jeff demoed the web application portion of the project, as well as the image cropping that normalizes images for a 2D convolutional neural network. Claire demoed the hardware and the scripts to test our project, and Sung demoed the OpenPose + SVM classifier. Sung’s portion of the demo did not work as well as we expected, so we will be fixing that this week, along with working on the 2D convolutional neural network and getting AWS to work.

We also worked on the ethics assignment and are now big ethics bois.

Also happy birthday Emily!!!!

Sung’s Status Report for 04/11

This week, I prepared for the demo, but the demo did not go as well as I had hoped. I assumed that because the classifier worked while being tested separately, it would work just as well once integrated. My testing accuracy for the classifier was about 97%, so I was expecting high accuracy, but that did not show during the demo. I spent the rest of the week trying to fix that, and it is still my goal for the coming week.

Team Status Report for 03/28

We’re not sure what the purpose of the Team Status Report is now that we all have individual milestones. However, we have figured out what we each are going to do, and are currently working towards our individual goals. We have purchased more hardware to make working with the hardware feasible, and we have devised a plan to integrate our work together.

Jeff and Claire both have Jetson Nanos that they are working with, and Sung will pass the OpenPose code and classification model to both of them so that they can integrate it once their individual parts are working.

Sung’s Status Report for 03/28

Hello world!! ahahah i’m so funny

This week, I spent a lot of time trying to collect data for our project. There is a bottleneck in collecting data, because OpenPose segfaults when I run it on an image directory of more than 50 images. This means that in order to train on more than 50 images (I would ideally like 100 images per gesture), I need to rerun OpenPose with a new directory of images each time. 50 images take around 30 minutes to finish, which means I need to check in every 30 minutes, and the fact that I have to be awake for that makes this process a bit slower than I expected.
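The batching workaround could look something like this (the file names are invented, and the OpenPose invocation in the comment is a hypothetical placeholder, not our actual command line):

```python
# Hedged sketch: split a large image set into batches of at most 50 so
# each OpenPose run stays under the size that triggers the segfault.
# The chunking is real; the OpenPose call below is a placeholder.
def chunks(items, size=50):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

images = [f"img_{i:04d}.jpg" for i in range(120)]  # hypothetical names
batches = list(chunks(images, 50))
print(len(batches), len(batches[0]), len(batches[-1]))  # → 3 50 20
# For each batch we would stage the files into a fresh directory and
# run OpenPose on it, e.g. something like:
#   subprocess.run(["openpose.bin", "--image_dir", batch_dir, ...])
```

Wrapping this in a loop over batch directories would also remove the need to babysit each 30-minute run by hand.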

Other than data collection, I’ve been working on the classification model for our project. I’ve been looking into using a pretrained network and am trying to integrate one into our project. I have found some examples where people use pretrained networks to train on their new data, so I am trying to implement that approach here.

Restructured SOW Sung

Here is my restructured SOW with my Gantt chart for further descriptions.

Sung’s Status Report for 03/21

So this past week and over spring break, I focused on normalizing the data that we collected, as follows. For each hand, OpenPose returns a 63-feature list: (x, y, confidence score) components for 21 hand keypoints. Taking the (x, y) points, I normalize each hand relative to a sample hand we designated as our reference hand. I calculated the distance from the reference hand’s base (the palm) to every other reference keypoint, giving me 20 reference distances. Using those, I scale every other hand OpenPose recognizes so that its palm-to-keypoint distances match the reference, and I use some trigonometry to preserve the angle of each keypoint while scaling its distance.
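The trigonometry boils down to converting each keypoint to an angle-plus-distance form around the palm and swapping in the reference distance. A sketch with a made-up 3-point “hand” (the real one has 21 points):

```python
# Hedged sketch of the normalization: rescale a detected hand so its
# palm-to-keypoint distances match a reference hand, while keeping each
# keypoint's angle from the palm unchanged. Coordinates are invented.
import math

def normalize_hand(points, ref_distances):
    """points[0] is the palm base; ref_distances[i] is the reference
    palm-to-keypoint distance for keypoint i+1."""
    px, py = points[0]
    out = [(px, py)]
    for (x, y), ref_d in zip(points[1:], ref_distances):
        angle = math.atan2(y - py, x - px)          # preserve the angle
        out.append((px + ref_d * math.cos(angle),   # rescale the distance
                    py + ref_d * math.sin(angle)))
    return out

# Toy hand: palm at the origin, two keypoints at distance 2.
hand = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
norm = normalize_hand(hand, ref_distances=[1.0, 1.0])
print(norm)  # both keypoints pulled in to distance 1, angles unchanged
```

Every hand ends up the same “size” regardless of how close it was to the camera, which is the whole point of the step.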

With the newly normalized data, I am trying to collect as much data as possible. I looked into some pretrained models that could speed up training, but I am not sure how to adapt any pretrained model to the specific feature set that we have, so I’m still researching them. The reason this matters is that neural networks need a really large training data set to work well, which is particularly hard here because OpenPose takes a long time to produce the 63-feature output list (about 2 minutes per image), and there is no guarantee that a given image is good enough for OpenPose’s hand tracker to use.

That being said, this week was a little tough for me because I had to move out and figure out where I was going to be for the rest of the semester. However, once I move to Korea next week, I expect things to go more smoothly.

Sung’s Status Report for 03/07

This week, I worked on building the framework for the machine learning aspect of OpenPose. I collected about 100 images of data and started writing the feature extraction.

One way I am doing feature extraction is by taking all 21 joint locations of the hand provided by OpenPose. However, I needed a way to normalize the images of hands, since I want the hands to be the same relative size regardless of the photo I took. That means calculating the angles of the joints and rescaling each length to some reference length while preserving the angle. I am currently in the middle of writing this feature extraction. After that, I will continue with the machine learning portion of the project. I plan to use a neural network, training one model from scratch and also fine-tuning a pretrained model. Marios told me that a pretrained model would adjust more finely to the changes, which would help preserve the accuracy of gestures.
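The joint-angle part of the features could be computed something like this (the three-point layout, e.g. palm → knuckle → fingertip, and the coordinates are invented for illustration):

```python
# Hedged sketch: compute the interior angle at a joint from three
# consecutive keypoints. Coordinates here are made up; the real inputs
# would be the (x, y) pairs that OpenPose returns per hand keypoint.
import math

def joint_angle(a, b, c):
    """Angle at point b, in degrees, between segments b->a and b->c."""
    ang1 = math.atan2(a[1] - b[1], a[0] - b[0])
    ang2 = math.atan2(c[1] - b[1], c[0] - b[0])
    deg = math.degrees(abs(ang1 - ang2))
    return min(deg, 360 - deg)  # always report the interior angle

# Straight up, then straight right, from the joint at the origin:
print(round(joint_angle((0, 1), (0, 0), (1, 0)), 1))  # → 90.0
```

Because angles are unchanged by uniform scaling, they make a natural size-invariant complement to the rescaled lengths.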

Sung’s Status Report for 2/29

I was not able to do anything for Capstone this week. I was hit with 3 assignments from 15-440, and even though I spread out the work to be done by Friday, the project due on Friday was too much; I am currently using my first late day and about to use my second to finish it. As soon as I finish, I will transition to Capstone and do the things I was supposed to do this week.

Sung’s Status Report

This week, I worked on the design review in preparation for the design presentation. A lot of time was devoted to thinking about our design decisions and whether or not they were the best way to approach our problem/project.

I was hesitant about whether OpenCV could be accurate enough, so we flagged it as a risk factor and added a backup: Jeff and I decided to use OpenPose as the backup and have it running alongside OpenCV. We realized that OpenPose takes a lot of GPU power and would not work well on the Nano, given that even the Jetson Xavier (which has about 8 times the GPU capability) only managed 17 fps with OpenPose video capture. As such, we decided to run OpenPose on AWS, and I am in the process of setting that up. We have received AWS credit, and we just need to see whether AWS can meet our timing and GPU requirements.

Our initial idea revolves around a glove that we use to track joints. We were originally thinking of a latex glove with the joint locations drawn on in marker, but we worried the glove would interfere with OpenPose tracking. We tested this and found that OpenPose is not hindered by the glove, as shown in the picture below.

This week, I have to build a glove joint tracker with OpenCV. I’ve installed OpenCV and have been messing around with it, but now I have to implement a tracker that will give me a list of joint locations. This will probably be a really challenging part of the project, so stay tuned for next week’s update!
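The core idea of the tracker, sketched in pure Python so it runs anywhere (the real version would use OpenCV’s HSV thresholding via `cv2.inRange`; the marker color and the tiny image grid here are invented):

```python
# Hedged sketch: find the centroid of marker-colored pixels, which is
# roughly what the glove tracker needs to do per joint marker. The real
# tracker would threshold an HSV frame with cv2.inRange; here a tiny
# synthetic RGB grid stands in so the idea is runnable without OpenCV.
MARKER = (255, 0, 0)  # hypothetical marker color (pure red)

def marker_centroid(image):
    """image: 2D grid of (r, g, b) tuples. Returns the (row, col)
    centroid of marker-colored pixels, or None if none match."""
    hits = [(r, c) for r, row in enumerate(image)
                   for c, px in enumerate(row) if px == MARKER]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

bg, mk = (0, 0, 0), MARKER
img = [[bg, bg, bg, bg],
       [bg, mk, mk, bg],
       [bg, mk, mk, bg],
       [bg, bg, bg, bg]]
print(marker_centroid(img))  # → (1.5, 1.5)
```

With one distinct color per joint (or per finger), running this per color would yield the list of joint locations the classifier needs.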