Aaron’s Status Update for 12/6/2020

This week I have been working on getting a working prototype of our project up. I have updated the UI and integrated background subtraction and face removal into the video stream from the tablet, mostly to get things ready for the integration of the neural network. I have also been working on a form that tells the app which sign we are trying to interpret, as part of the descoped version of the project. At this point we are just trying to get something that works and is presentable.
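The post doesn't show how these two preprocessing steps fit together, so here is a minimal NumPy sketch of the idea: frame-differencing background subtraction plus blacking out a face region. The function names and toy frames are mine, not from the actual app, and in practice OpenCV's built-in subtractors (e.g. `createBackgroundSubtractorMOG2`) and a face detector would supply these pieces.

```python
import numpy as np

def subtract_background(frame, background, threshold=25):
    """Frame-differencing background subtraction: keep only pixels that
    differ from a reference background frame by more than `threshold`."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = (diff > threshold).astype(np.uint8)  # 1 = foreground pixel
    return frame * mask, mask

def remove_face(frame, face_box):
    """Black out a detected face region (x, y, w, h) so the signer's
    face never reaches the downstream recognizer."""
    x, y, w, h = face_box
    out = frame.copy()
    out[y:y + h, x:x + w] = 0
    return out

# Toy grayscale "frames": a static background and one with a bright blob.
background = np.zeros((120, 160), dtype=np.uint8)
frame = background.copy()
frame[40:80, 60:100] = 200                  # a moving hand, say
fg, mask = subtract_background(frame, background)
clean = remove_face(fg, (60, 40, 10, 10))   # pretend a face was found here
```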

Aaron’s Status Update for 11/21/2020

This week we met up to try to figure out how to consolidate our individual components for asli. I did some research into integrating Young's work on the neural net we are training for gesture recognition into our web app. I think we found a way to do so without too much hassle, so now it's just a matter of getting it up and running so we can integrate it. I also created two new branches: one for the development of hand tracking in the web app, which is what I'm currently figuring out how to do, and the other for a different method of capturing the tablet webcam video stream with lower latency. The next steps for me are to get both working, with priority on hand tracking since that feature is part of our core functionality. I believe we are on schedule, but we are on a tight deadline.

Aaron’s Status Update for 11/14/2020

This week was our interim demo, so I spent most of my time making sure my component of the project was up and running to be presented. Apart from that, I created a GitHub repo for the web app, uploaded the code, and we all have access as contributors. I also may have figured out an easier and better way to get the feed from the tablet camera that would reduce latency, so I'll be looking into that. Next steps are to work on the feedback we received from the interim demo and start the work of combining our components so we can develop a working prototype.

Aaron’s Status Update for 11/7/2020

This week, after completing the tutorial to get the tablet camera running with OpenCV in the Django app, I cleaned up the code and got it running on the tablet. I tested this by using a basic face tracking library, as seen in the photo on this post. I also did some research into setting up the feature extraction for hand detection. Next steps involve adding the code to a GitHub repo for easy version tracking and collaboration, adding hand tracking, and improving the app interface. I am a little worried about the time we have left to integrate all the components we are working on, but if we are able to get hand tracking working quickly I think we should be able to integrate the neural net and get something up and running.
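The post doesn't include the Django glue, but a common way to push processed OpenCV frames to the browser is a `multipart/x-mixed-replace` ("MJPEG") response, which Django's `StreamingHttpResponse` can serve from a generator. Here is a framework-agnostic sketch of just the multipart framing; the boundary token and function names are my own, and in the real app the JPEG bytes would come from `cv2.imencode('.jpg', frame)` per captured frame.

```python
BOUNDARY = b"frame"  # arbitrary multipart boundary token

def mjpeg_part(jpeg_bytes):
    """Wrap one JPEG-encoded frame in the multipart framing that a
    browser <img> tag (fed by a multipart/x-mixed-replace response)
    can consume as a live stream."""
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
            + jpeg_bytes + b"\r\n")

def mjpeg_stream(frames):
    """Generator over an iterable of JPEG frames, suitable for passing
    to a streaming HTTP response object."""
    for jpeg in frames:
        yield mjpeg_part(jpeg)

# Toy usage with fake JPEG payloads (real ones start/end the same way):
parts = list(mjpeg_stream([b"\xff\xd8fake1\xff\xd9", b"\xff\xd8fake2\xff\xd9"]))
```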

Aaron’s Status Update for 10/31/2020

This week I spent time working on integrating a live webcam feed into our Django app. This is something I've been struggling with for some time, but I think I finally made a breakthrough. I found a tutorial that presented a way to use a live feed from the tablet webcam using an Android app called IP Webcam. This app allows you to connect to the webcam using its IP address, and that feed can then be processed through OpenCV. Next steps are to refine the code and work on hand detection so that we can connect and use our neural network to recognize hand gestures.
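Concretely, IP Webcam serves the camera over HTTP on the device's LAN address, and OpenCV can open that stream like a local camera. A small sketch, assuming the app's usual defaults of port 8080 and a `/video` endpoint (the app's start screen shows the actual address it is serving); the IP address below is a placeholder and the OpenCV usage is shown in comments since it needs a live device:

```python
def ipwebcam_url(host, port=8080):
    """Build the stream URL the IP Webcam Android app serves by default
    (port 8080, /video endpoint). Check the address shown in the app if
    the defaults were changed."""
    return "http://{}:{}/video".format(host, port)

# With opencv-python installed, the feed reads like any local camera:
#
#   import cv2
#   cap = cv2.VideoCapture(ipwebcam_url("192.168.1.42"))
#   ok, frame = cap.read()   # one BGR frame from the tablet's camera
#
url = ipwebcam_url("192.168.1.42")
```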

Aaron’s Status Update for 10/24/2020

This week was midterm week for most of us, and on top of that I was sick, so not much progress was made. I've been struggling with the Django web app and trying to integrate OpenCV to get a live feed; I think I underestimated the learning and work involved to get it working. Antonis gave me some advice on how to accomplish this by using the system command of the os library to run our Python code, which runs the command in a subshell. I'm still struggling to figure this out, however, so next steps are to ask for help, as I believe at this point I am falling behind schedule with my component of the project. As for mitigating risk, I have put the web app portion of this on the back burner while I get the OpenCV functionality working. I hope that by putting in extra hours and getting help this week I can get back on schedule.
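As I understand the suggestion, `os.system()` simply hands a command string to a subshell and blocks until it exits, returning the shell's exit status. A tiny sketch (the command here is a stand-in, not our actual script):

```python
import os
import sys

# os.system() passes the command string to a subshell and blocks until
# it finishes; a return value of 0 means the command succeeded.
cmd = '"{}" -c "print(2 + 2)"'.format(sys.executable)
status = os.system(cmd)
```

For newer code `subprocess.run()` is usually preferred over `os.system()`, since it avoids shell-quoting pitfalls and gives direct access to the command's output.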

Aaron’s Status Update for 10/17/2020

Not much for me to report on this week; I am still working on OpenCV integration in our web app. I've been doing some learning of OpenCV to do the integration, so it has not been as fast as I had hoped, but we are still on track. One thing that was made apparent in the design reviews this week was that we could simplify our design by hosting the web app on AWS. I don't think this will fundamentally change our approach; it just means that, aside from testing, we will not have to run our own server to deploy the web app on our tablets.

Aaron’s Status Update for 10/10/2020

This week I spent my time working with Django, trying to get a web app up and running for our ASL interpreter. My goal was to get a web app up and running with the OpenCV API and make it accessible by our Kindle Fire tablets. I was able to get the server up and running, but ran into an issue where you can't easily run a publicly accessible server from WSL without doing some complicated network bridging. I tried to install Ubuntu on my computer as a workaround, but that didn't work. However, I did find a method using Termux on Android that allows us to run the Django web app straight off the tablet itself. Overall I think I made some headway, but the setback didn't allow me to make as much progress on the app as I wished. So I'd say we're a little behind on the OpenCV integration, but still relatively in line with our overall project schedule. By next week I hope to have a working OpenCV integration in the app with a video feed preview.

Aaron’s Status Update for 10/03/2020

This week, as we are still in the preliminary stages of our project, we decided to take the week to familiarize ourselves with machine learning/deep learning before we attempt to start implementing our own neural networks for ASL recognition. We decided to go through the lectures from 10-601 Introduction to Machine Learning on Regularization, Neural Networks, Backpropagation, and Deep Learning, so I watched them to try to get at least a base-level understanding of the concepts. Some general things I took away were that we should avoid overfitting by making sure our model doesn't capture noise in the training data instead of its underlying features, that having more data points than features helps the model behave, and that regularization helps when you can't collect extra data. After this I think I still have some studying to do on deep learning concepts, but I understand a little better now how we're going to go about training our model.
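The regularization takeaway can be made concrete with a toy ridge-regression sketch: fit far too flexible a model (a degree-9 polynomial) to a few noisy points, and the L2 penalty keeps the coefficients small where plain least squares inflates them to chase noise. This is just an illustration of the lecture concept, not our project's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A few noisy samples of a simple underlying function...
x = np.linspace(0, 1, 12)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

# ...fit with far too flexible a model: a degree-9 polynomial.
X = np.vander(x, N=10, increasing=True)   # columns 1, x, x^2, ..., x^9

def fit(X, y, lam=0.0):
    """Least squares with an optional L2 (ridge) penalty lam:
    solves (X^T X + lam*I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols = fit(X, y)             # unregularized: large coefficients, fits noise
w_ridge = fit(X, y, lam=0.1)  # penalized: coefficients stay small
```

Comparing `np.linalg.norm(w_ols)` with `np.linalg.norm(w_ridge)` shows the shrinkage directly: the penalized coefficients are much smaller, which is what keeps the fitted curve from wiggling through every noisy point.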

Lectures page: (https://scs.hosted.panopto.com/Panopto/Pages/Sessions/List.aspx#folderID=%229044a1d8-bf2d-4593-b478-a9d100e8a09f%22)