Jeff’s Status Report For 04/25

Hello,

This week I got my computer back and life is much better. I have been making progress on getting the Django web application redeployed to AWS, though it is still proving a little tricky.

As for the OpenCV hand recognition and NN classifier, I tested the performance and realized that simply normalizing the image gave very poor accuracy (<60 percent), and as a result the classifier would also be very bad. Thus, given that Sung’s SVM is some good stuff, we have decided to abandon the secondary classifier.

Instead, I am now using OpenCV to generate different lighting conditions from our test images. By applying gamma correction, we can artificially simulate different lighting conditions to test our classifier. I am also experimenting with using OpenCV to resize images, since smaller images give better OpenPose performance, to see how small we can go before recognition stops working.
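The gamma-correction idea boils down to a per-pixel lookup table. Here is a minimal sketch in pure NumPy (in the real pipeline `cv2.LUT` would apply the same table; the test frame and gamma values below are just illustrative):

```python
import numpy as np

def adjust_gamma(image, gamma=1.0):
    """Simulate brighter/darker lighting; with this convention gamma > 1 brightens."""
    inv = 1.0 / gamma
    # Precompute the output value for each of the 256 possible uint8 pixel values.
    table = ((np.arange(256) / 255.0) ** inv * 255.0).astype(np.uint8)
    # Indexing the table by the image applies the curve per pixel
    # (cv2.LUT(image, table) is the OpenCV equivalent).
    return table[image]

frame = np.full((4, 4), 128, dtype=np.uint8)  # mid-gray stand-in for a test image
bright = adjust_gamma(frame, 2.0)             # simulated over-exposure
dark = adjust_gamma(frame, 0.5)               # simulated under-exposure
```

Running the classifier over a sweep of gamma values then gives us accuracy numbers under simulated lighting without reshooting the dataset.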

Next week, I plan to finally wrap up the web application deployment and hopefully smash my old computer for Marios.

Jeff’s Status Report For 04/18

Hello,

Unfortunately, my new computer has still not arrived and has been delayed until Monday, so I’ve been continuing to work on my Jetson Nano. I have kept working on the OpenCV aspect of the project, both by SSHing into Andrew machines and on local machines, to test real speed on the Nano.

In addition, I am setting up my own AWS account so we can have more EC2 instances running to better generate training data.

Jeff’s Status Report For 04/11

This week I prepared for our demo, getting the OpenCV hand recognizer and normalizer working while working out kinks in deploying the web application. There remain problems linking Django’s channel layer, which is backed by a Redis instance running on EC2.

However, I was not able to solve these yet, as unfortunately my computer died spectacularly and I was forced to order a new one. :'(

Until then, I have been getting my new Nano set up for OpenCV testing.


Jeff’s Status Report For 4/04

Hello from home!

This week, I focused on getting AWS set up and hosting our web application, switching from local hosting. It took a while to figure out how to get the AWS credentials, but the web application should now be hosted, and I am verifying that all the features still work, i.e. that we can still connect via WebSockets and send data through. In addition, I’m continuing to tinker with the UI/UX, adding features like parsing messages to insert visual elements such as emojis into the currently text-only output.
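The emoji-parsing pass is essentially keyword substitution over the response text. A trivial sketch of the idea (the mapping below is a made-up example, not our final list):

```python
# Hypothetical keyword -> decorated-text mapping for the chat-style output.
EMOJI_MAP = {
    "weather": "weather \U0001F324",  # sun behind small cloud
    "timer": "timer \u23F1",          # stopwatch
}

def decorate(text):
    """Append an emoji after known keywords in a response message."""
    for word, decorated in EMOJI_MAP.items():
        text = text.replace(word, decorated)
    return text
```

Messages without any known keyword pass through unchanged, so the text-only output still works as a fallback.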

In addition, for the OpenCV part, after doing some further investigation and talking it over with Sung, we decided that OpenCV would be better suited to normalizing images for Sung’s image neural network classifier. Even after adding different patterns and colors as suggested during last week’s meeting, we realized that OpenCV with a glove simply did not have the capability to reproduce OpenPose’s feature extraction by itself. Too often the markers were covered by the gesture itself, and without the ability to “infer” where the markers were, OpenCV could not reproduce the same feature list as OpenPose, which would have required a new classifier. Instead, we are using OpenCV just to extract the hands and normalize each image so that it can be passed in. I have worked on extracting the hands, and I am continuing to work on normalizing the image, first using the glove with a marker to standardize the image, and hopefully eventually without a glove.
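The crop-and-normalize step can be sketched as follows. This is a pure-NumPy stand-in: the mask would come from our segmentation step, and `cv2.resize` with proper interpolation would replace the nearest-neighbour resampling here.

```python
import numpy as np

def normalize_hand(image, mask, size=64):
    """Crop the image to the hand mask's bounding box and resize to size x size."""
    ys, xs = np.nonzero(mask)                                  # pixels flagged as hand
    crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Nearest-neighbour resampling to a fixed classifier input size
    # (cv2.resize would do this properly in practice).
    ry = np.linspace(0, crop.shape[0] - 1, size).astype(int)
    rx = np.linspace(0, crop.shape[1] - 1, size).astype(int)
    return crop[np.ix_(ry, rx)]
```

The point is that every hand, regardless of where it appears in the frame or how large it is, reaches the classifier as the same fixed-size patch.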

Jeff’s Status Report For 03/28

Hello World!!!! hahaha am i funny too

This week I continued to refine the design of our web application. In addition, I began to focus more heavily on the OpenCV glove-tracking component. To start, rather than use all 20 markers that OpenPose has, I am focusing initially on 6: 5 on the palm knuckles and 1 on the wrist (the glove is a clear latex glove). I was able to get overall body and hand detection, and segmentation of the body and hand from the background, working, although it is not perfect yet and I will continue to improve it.

Furthermore, I experimented with different ways of detecting the markers. Square markers with edge detection did not work well, as the markers lacked sufficiently clear edges. SURF, which performs a sort of blob detection, also performed poorly because the markers lacked enough contrast. The best solution so far has been color detection on the red markers, which worked for the knuckles but not so well for the wrist: the color of my hand is somewhat pinkish, which gave less-than-ideal results on the wrist marker. Hopefully, when the green markers I ordered arrive, performance will be much better.
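A crude sketch of the color-detection idea, and of why the pinkish wrist is a problem. The real version would use `cv2.inRange` on an HSV image (where red wraps around hue 0, so two ranges are needed); the BGR thresholds below are guesses for illustration only:

```python
import numpy as np

def red_marker_mask(bgr):
    """Flag pixels that are strongly red relative to green and blue."""
    b = bgr[..., 0].astype(int)
    g = bgr[..., 1].astype(int)
    r = bgr[..., 2].astype(int)
    # A marker pixel should be bright in red AND far from gray/pink,
    # i.e. red must clearly dominate the other two channels.
    return (r > 150) & (r - g > 60) & (r - b > 60)

marker = np.uint8([[[40, 40, 220]]])   # saturated red marker pixel (BGR)
skin = np.uint8([[[170, 180, 230]]])   # pinkish skin tone: red barely dominates
```

The skin pixel fails the dominance test only narrowly, which matches what I saw: on the wrist, red markers sit too close to skin color for reliable thresholds, while a green marker would dominate on a channel skin barely uses.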

My SOW and Gantt Chart were emailed to Emily earlier this week!

Jeff’s Status Report For 03/21

This week, I worked with Claire and Sung on refocusing our project so that it requires little physical interaction. We also fixed our gesture list after noticing before Spring Break that gestures which did not show the wrist, or showed only the back of the hand, were not recognized by OpenPose.

Furthermore, I continued to work on the web application. I set up Docker to run Redis, which will back the channel layer. This will allow multiple consumers to connect to the WebSocket and send information, i.e. the command that our algorithm has recognized as well as the response from the Google Assistant SDK.
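The wiring is just a settings fragment pointing Django Channels at the Redis container (host and port below are local dev values; they would change once Redis moves to EC2):

```python
# settings.py fragment: route the Channels channel layer through Dockerized Redis.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            # e.g. `docker run -p 6379:6379 redis` exposes it on localhost
            "hosts": [("127.0.0.1", 6379)],
        },
    },
}
```

With this in place, any consumer (or a plain script via the channel layer API) can push messages that every connected WebSocket client receives.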

In addition, I began familiarizing myself with OpenCV, which is to be used in conjunction with our designed glove as a less computationally intensive alternative to OpenPose. I began experimenting with OpenCV and marker tracking, which I will continue next week. The glove is currently just a latex glove with a marker indicating the key points; I may switch to something more permanent, like tape, in the future.

Jeff’s Status Report For 03/07

This week I continued to work on the web application, again setting up the channel layer and WebSocket connections. I also decided to work more on setting up the Jetson Nano to run OpenPose and OpenCV, as finalizing the web application was less important than catching up on the gesture recognition parts of the project.

Getting OpenPose installed on the Jetson Nano was mostly smooth, with some hiccups along the way from errors in the installation guide that I was able to resolve with the help of other groups who had installed it on the Xavier. Installing OpenCV went smoothly. After installing OpenPose, I tried to get video streaming working to test the FPS we would get after finally receiving our camera, but I had difficulties with that setup. Instead, I experimented with running OpenPose the same way Sung had been doing on his laptop. Initial results are not very promising, but I am not sure whether OpenPose was making full use of the GPU.
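The FPS test I was aiming for is simple to sketch. The frame source and per-frame processing below are stand-ins so the snippet runs anywhere; in the real test `get_frame` would pull from `cv2.VideoCapture` and `process` would be the OpenPose call:

```python
import time

def measure_fps(get_frame, process, n_frames=50):
    """Average frames-per-second of `process` over n_frames frames."""
    start = time.perf_counter()
    for _ in range(n_frames):
        process(get_frame())
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Dummy stand-ins so this runs without a camera or OpenPose installed.
fps = measure_fps(get_frame=lambda: b"\x00" * 100, process=len)
```

Timing over a batch of frames rather than one frame smooths out startup cost, which matters on the Nano where the first GPU inference is much slower than steady state.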

Next week is spring break, so I do not anticipate doing much, but after break I hope to continue working on the Nano and to begin the OpenCV + glove part.

Jeff’s Status Report For 02/29

This week, I worked on the Design Report. I also made more progress on the web application, finalizing some design choices and creating a rough prototype.

The key design goal of the web application is to emulate the Google Assistant on the phone, which displays visual data for Google queries in a chat-style format. The “messages” would be the responses from the Jetson containing the information. We are still experimenting with the Google Assistant SDK to determine exactly what information is received, but at minimum it includes the verbal content usually spoken.

In addition, due to the nature of this application it is important that the “messages” from the Jetson be updated in real time, i.e. eliminating the need to constantly refresh the page for new messages to appear. To do this, I decided on Django Channels, which allows asynchronous code and handles WebSockets as well as HTTP. By creating a channel layer, consumer instances can then send information. The basic overall structure has been written, and I am now finishing up the channel layer and experimenting with using a simple Python script to send “messages” to our web application.
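For the test script, a “message” is just a small JSON payload sent over the socket. The field names below are illustrative, not our final schema:

```python
import json
from datetime import datetime, timezone

def make_message(sender, text):
    """Build the JSON payload a producer would send through the channel layer."""
    return json.dumps({
        "sender": sender,  # e.g. "jetson" or "assistant" (hypothetical labels)
        "text": text,      # recognized command, or the Assistant's response
        "ts": datetime.now(timezone.utc).isoformat(),  # for ordering in the chat UI
    })

msg = make_message("jetson", "what's the weather")
```

Keeping the payload self-describing means the front end can render Jetson commands and Assistant replies on opposite sides of the chat without extra routing logic.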

Jeff’s Status Report For 02/22

This week in preparation for our Design Review Presentation, I worked with Sung and Claire on preparing our slides and finalizing design decisions.

As a part of that, I worked with Claire on choosing the static gestures that we would use for our MVP. Following the advice from our Project Proposal presentation, we chose to modify the fingerspelling gestures, as we noticed many were very similar or identical. We remapped the similar gestures to something more distinguishable, and also remapped the dynamic gestures (e.g. Z) to unique static gestures. In addition, we created unique gestures for popular commands (e.g. “what’s the weather”).

Furthermore, I worked with Sung on finalizing the machine learning implementation we would use. Based on research into other groups’ performance running OpenPose on the Xavier, which achieved 17 fps, we found it unlikely that it would run well on the Nano, making us pivot to using AWS combined with running OpenCV on the Nano. In addition, we prototyped a glove for our OpenCV implementation: simply a latex glove with a marker marking the joints (i.e. knuckles). We then did some testing with OpenPose to check that the glove would not hinder it, as well as the effect of distance and other factors (e.g. whether the face is required). We found that the glove did not interfere with OpenPose, that the face is needed for OpenPose to work, and that at roughly 3 meters OpenPose has difficulty detecting hands (using the 1.2-megapixel camera on a MacBook). We also set up OpenCV and began basic experimentation.

Finally, I continued to make some progress on the web app, but given the Design Review, I chose to focus more on finalizing design decisions this week. That meant less progress on the web app, but with the slack allocated it should be okay.