Jeff’s Status Report For 04/25

Hello,

This week I got my computer back, and life is much better. I have been able to make progress on getting the Django web application redeployed to AWS, though it is still proving to be a little tricky.

As for the OpenCV hand recognizer and NN classifier, I tested the performance and realized that simply normalizing the image gave very poor accuracy (<60 percent), and as a result the classifier would also be very bad. Thus, given that Sung's SVM is performing well, we have decided to abandon the secondary classifier.

Instead, I am now using OpenCV to generate different lighting conditions from our test images. By applying gamma correction, we can artificially simulate different lighting conditions to test our classifier. I am also experimenting with using OpenCV to resize images, since smaller images give better OpenPose performance, to see how small we can go before recognition stops working.
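For reference, gamma correction boils down to a per-pixel lookup table. A minimal sketch (NumPy only, so it stands alone; in OpenCV the same table would usually be applied with cv2.LUT, and the stand-in frame here is illustrative):

```python
import numpy as np

def adjust_gamma(image, gamma=1.0):
    """Simulate a lighting change with gamma correction: gamma > 1
    brightens the image, gamma < 1 darkens it (with this convention).
    Plain NumPy fancy indexing applies the 256-entry lookup table."""
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255.0).astype(np.uint8)
    return table[image]

# Generate a few synthetic lighting conditions from one stand-in frame.
frame = np.full((4, 4), 128, dtype=np.uint8)
variants = {g: adjust_gamma(frame, g) for g in (0.5, 1.0, 2.0)}
```

Each gamma value yields one artificial lighting condition, so a single labeled test image can be turned into several classifier test cases.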

Next week, I plan to finally wrap up the web application deployment and hopefully smash my old computer for Marios.

Team Status Report For 04/18

Hello,

This week we continued working on our individual portions, but we began focusing more heavily on getting more EC2 instances set up and on checking the performance of the single-GPU instance. Our main limitation right now is a lack of training data; once this is set up, we can hopefully have up to 6 instances running to greatly speed up running OpenPose on the gesture images we have collected, generating much more training data than the current ~400 examples. This should greatly improve our classifier's performance.

Jeff’s Status Report For 04/18

Hello,

Unfortunately, my new computer has still not arrived and has been delayed until Monday, so I have been continuing to work on my Jetson Nano. I have kept working on the OpenCV aspect of the project, both by SSHing into Andrew machines and on local machines, to test real-world speed on the Nano.

In addition, I am setting up my own AWS account so we can have more EC2 instances running to generate training data faster.

Jeff’s Status Report For 04/11

This week I prepared for our demo, getting the OpenCV hand recognizer and normalizer working while ironing out kinks in deploying the web application. There remain problems linking Django's channel layer to Redis, which runs on an EC2 instance.
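The piece that has to be right is the channel-layer setting pointing Django at the remote Redis. A sketch of what that settings fragment looks like, assuming the channels_redis backend (the hostname below is a placeholder, not our actual instance address):

```python
# Django settings fragment (sketch). The EC2 hostname is hypothetical;
# the Redis instance's security group must also allow inbound traffic
# on port 6379 from the web server for this to connect.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            # Redis runs on a separate EC2 instance, so the host is its
            # address rather than localhost.
            "hosts": [("ec2-redis.example.internal", 6379)],
        },
    },
}
```

Getting the host and the security-group rules consistent is exactly the linking step that is still failing.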

However, I was not able to solve these yet, as unfortunately my computer died spectacularly and I was forced to order a new one. :'(

Until then, I have just been getting my new Nano set up for OpenCV testing.

 

Jeff’s Status Report For 04/04

Hello from home!

This week, I focused on getting AWS set up and hosting our web application, switching from local hosting only. The AWS setup took a while, mostly figuring out how to obtain the credentials, but the web application should now be hosted, and I am experimenting to confirm that all the features still work, i.e., that we can still connect via WebSockets and send data through. In addition, I'm continuing to tinker with the UI/UX, adding features like parsing the messages to add more visual elements, such as emojis, to the currently text-only output.
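The emoji feature is essentially keyword substitution on the response text. A hypothetical sketch of that parsing step (the keyword-to-emoji table here is made up for illustration):

```python
# Map keywords in the text-only response to emoji before rendering.
# The table entries are illustrative, not our actual mapping.
EMOJI_MAP = {
    "sunny": "\u2600\ufe0f",     # sun
    "rain": "\U0001F327\ufe0f",  # rain cloud
    "music": "\U0001F3B5",       # musical note
}

def add_emojis(message: str) -> str:
    """Append an emoji after each keyword we recognize, leaving
    unrecognized words untouched."""
    words = []
    for word in message.split():
        key = word.lower().strip(".,!?")  # ignore trailing punctuation
        emoji = EMOJI_MAP.get(key)
        words.append(f"{word} {emoji}" if emoji else word)
    return " ".join(words)

print(add_emojis("Today will be sunny with rain later"))
```

Since this runs on the already-final response string, it slots in just before the message is pushed to the page, without touching the WebSocket plumbing.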

In addition, for the OpenCV part, after doing some further investigation and talking it over with Sung, we decided that OpenCV would be better suited to normalizing images for Sung's image neural network classifier. Even after adding different patterns and colors as suggested during last week's meeting, we realized that OpenCV with a glove simply did not have the capability to reproduce OpenPose's feature extraction by itself. Too often, markers were covered by the gesture itself, and without the ability to "infer" where the markers were, OpenCV could not reproduce the same feature list as OpenPose, which would have required a new classifier. Instead, we are using OpenCV just to extract the hands and normalize each image so it can be passed in. I have worked on extracting the hands, and I am continuing to work on normalizing the image, first by using the glove with a marker to standardize the image, and hopefully eventually without a glove.
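The normalization step itself is simple once the hand has been located: crop the hand's bounding box and rescale it to a fixed square so every image fed to the classifier has the same shape. A minimal sketch, assuming the bounding box comes from the OpenCV hand extraction (cv2.resize would normally do the rescaling; plain nearest-neighbor indexing keeps this self-contained):

```python
import numpy as np

def normalize_hand(frame, bbox, size=128):
    """Crop the hand region and rescale it to a fixed size x size
    square via nearest-neighbor sampling, so the classifier always
    sees the same input shape."""
    x, y, w, h = bbox
    crop = frame[y:y + h, x:x + w]
    rows = (np.arange(size) * h) // size  # source row for each output row
    cols = (np.arange(size) * w) // size  # source column for each output column
    return crop[rows][:, cols]

# Stand-in frame with a bright square where the hand would be.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:200, 300:400] = 255
out = normalize_hand(frame, (300, 100, 100, 100), size=128)
```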

Jeff’s Status Report For 03/28

Hello World!!!! hahaha am i funny too

This week I continued to refine the design of our web application. In addition, I began to focus more heavily on the OpenCV glove-tracking component. To start, rather than use all 20 markers that OpenPose has, I am focusing initially on 6: 5 on the palm knuckles and 1 on the wrist (the glove is a clear latex glove). I was able to get overall body and hand detection working, along with segmentation of the body and hand from the background, although it is not perfect yet and I will continue to work on it. Furthermore, I experimented with different ways of detecting the markers. Square markers with edge detection did not work well because the markers lacked sufficiently clear edges; SURF, which performed a sort of blob detection, also struggled because the markers lacked enough contrast. The best solution so far has been color detection on the red markers, which worked for the knuckles but not so well for the wrist: my skin is somewhat pinkish, which gave less-than-ideal results on the wrist marker. Hopefully, performance will improve once the green markers I ordered arrive.
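The core idea of the color detection, and why pinkish skin confuses it, can be shown with a toy threshold. A sketch (real code would more likely threshold in HSV via cv2.inRange, which separates hue from brightness; the BGR thresholds here are illustrative):

```python
import numpy as np

def red_marker_mask(bgr, r_min=150, gb_max=100):
    """Rough red-marker detection: a pixel counts as 'marker' only
    when its red channel is strong AND the green/blue channels are
    weak. Skin is reddish but not red-dominant, so it should fail
    the green/blue test."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return (r >= r_min) & (g <= gb_max) & (b <= gb_max)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (30, 40, 200)    # saturated red marker pixel
img[1, 1] = (150, 160, 210)  # pinkish skin-like pixel: red, but not alone
mask = red_marker_mask(img)
```

The skin-like pixel has a high red channel but also high green and blue, so it is rejected; the trouble on the wrist is that real skin pixels sit much closer to the marker color than this toy example, which is why the green markers should separate more cleanly.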

My SOW and Gantt chart were emailed to Emily earlier this week!

Team Status Report For 03/21

Hello from Team *wave* Google!

This week we focused on getting resettled and refocusing our project given the switch to remote capstone. For the most part, our project is intact, with some small changes. We did cut the physical enclosure from our project, given TechSpark's closing, but it was not an essential part of the project. We also eliminated live testing, focusing solely on video streams of gestures, which we hope to gather remotely by asking friends.

To facilitate remote capstone, we worked to segment our project into stages that we can each work on remotely. We narrowed down the inputs and outputs of each stage so that no one person relies on another. For example, we determined that the input to OpenPose would be images and that the output would be the positional distances from the wrist point to all the respective points, as JSON, something OpenCV will also output in the future. We also set up the Google Assistant SDK so that its text inputs and outputs work and are pinned down. These inputs and outputs will also be the inputs to our web application. This will allow us to do pipeline testing at each stage.
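To make the stage contract concrete, here is a hypothetical example of that JSON hand-off: keypoints come in, and each point's distance from the wrist goes out (the keypoint names and coordinates are made up for illustration, not our actual schema):

```python
import json
import math

# Illustrative keypoint JSON as OpenPose/OpenCV might emit it.
keypoints = json.loads("""
{"wrist": [100, 200],
 "thumb_tip": [130, 160],
 "index_tip": [110, 140]}
""")

# Derive the feature list: Euclidean distance from the wrist to
# every other keypoint.
wx, wy = keypoints["wrist"]
distances = {
    name: math.hypot(x - wx, y - wy)
    for name, (x, y) in keypoints.items()
    if name != "wrist"
}
print(json.dumps(distances))
```

Because both OpenPose and (eventually) OpenCV emit the same JSON shape, either can feed the classifier and the web application without the downstream code changing.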

Finally, we also decided to order another Jetson Nano, given that we have enough budget; this eliminates another dependency, as OpenCV can be tested directly on the new Nano.

More detail on the refocused project is on our document on Canvas.

PS: We also wish our team member Sung a good flight back to Korea, where he will be working remotely for the rest of the semester.

Jeff’s Status Report For 03/21

This week, I worked with Claire and Sung on refocusing our project so that we could work with little physical interaction. We also revised our gesture list after noticing before Spring Break that some gestures that did not show the wrist, or showed only the back of the hand, were not recognized by OpenPose.

Furthermore, I continued to work on the web application. I set up Docker to run Redis, which backs the channel layer. This will allow multiple consumers to connect to the WebSocket and send information, i.e., the command our algorithm has recognized as well as the response from the Google Assistant SDK.

In addition, I began familiarizing myself with OpenCV, which will be used in conjunction with our designed glove as a less computationally intensive alternative to OpenPose. I began experimenting with OpenCV and marker tracking, which I will continue next week. The glove currently is simply a latex glove with a marker indicating the key points; I may switch to a more permanent marker, like tape, in the future.

Jeff’s Status Report For 03/07

This week I continued to work on the web application, again on setting up the channel layer and WebSocket connections. I also decided to spend more time setting up the Jetson Nano to run OpenPose and OpenCV, as finalizing the web application was less important than catching up on the gesture recognition parts of the project.

Getting OpenPose installed on the Jetson Nano was mostly smooth, but there were some hiccups along the way from errors in the installation guide, which I was able to solve with the help of some other groups that had installed it on the Xavier. Installing OpenCV also went smoothly. After installing OpenPose, I tried to get video streaming working to test the FPS we would get after finally receiving our camera, but I had difficulties getting that set up. Instead, I experimented with running OpenPose in a similar fashion to what Sung had been doing on his laptop. Initial results are not very promising, but I am not sure whether OpenPose was making full use of the GPU.

Next week is spring break, so I do not anticipate doing much, but after break I hope to continue working on the Nano and begin the OpenCV + glove part.