Young’s Status Update for 12/6/20

This week I set up AWS and got it running code for the first time. I struggled a lot initially with memory issues and with figuring out how to set up instances, but was very satisfied with the result. I got to test the different models I had prepared on the Kaggle ASL dataset with different parameters and found that the best model returned a surprising 95% validation accuracy. Fine-tuning only the end layers after freezing the transfer-learned base resulted in some accuracy problems, but since most of the models did not take too long to run, I decided to train all of the model parameters overnight and found much better results. I also found that since the dataset is so large (100,000 total images to subset from), training for even a few epochs returned very strong results. Having saved the model weights and model JSON files, I just need to export them into the web app and run the classifier there; a sketch of the workflow is below. For the first time, there’s light at the end of the tunnel.
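
In rough outline, that workflow looks like the sketch below. This is a simplified illustration assuming a Keras/TensorFlow setup; the MobileNetV2 base, class count, and hyperparameters are placeholders rather than the exact configuration I used.

```python
# Simplified sketch of the freeze-then-unfreeze training workflow.
# Assumes TensorFlow/Keras; the base model, class count, and
# hyperparameters are placeholders, not the exact ones used.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # placeholder: e.g., one class per static ASL letter

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained layers first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=3)

# Unfreezing everything and retraining (the overnight run) gave
# much better results than fine-tuning only the end layers.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=3)

# Save the architecture and weights for export into the web app.
with open("model.json", "w") as f:
    f.write(model.to_json())
model.save_weights("model_weights.h5")
```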

Aaron’s Status Update for 12/6/2020

This week I have been working on trying to get a working prototype of our project up. I updated the UI and integrated background subtraction and face removal into the video stream from the tablet, mostly to get things ready for the integration of the neural network. I have also added a form, still in progress, that tells the app which sign we are trying to interpret, as part of the descoped version of the project; a sketch of it is below. At this point we are just trying to get something that works and is presentable.
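
A minimal sketch of what that form might look like, assuming Django; the sign choices and field name are placeholders, not the final ones:

```python
# Hypothetical sketch of the sign-selection form (Django).
# The sign choices and field name are placeholders.
from django import forms

# Placeholder subset of signs; the real list comes from the classifier's classes.
SIGN_CHOICES = [("A", "A"), ("B", "B"), ("C", "C")]

class TargetSignForm(forms.Form):
    """Lets the user tell the app which sign they are attempting."""
    target_sign = forms.ChoiceField(choices=SIGN_CHOICES, label="Sign to interpret")
```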

Malcolm’s Status Report 12/6/20

This week contained a lot of progress on background subtraction, bounding box experimentation, and general debugging. Much of the background subtraction progress was on the implementation side: background subtraction by itself and image thresholding gave different results. However, since background subtraction captures just the edges of a profile, the resulting image takes a smaller space to represent, which gives our NN a simpler input to learn from. We have also now decided to use a dedicated region of the frame as the “bounding box” for the hand, since spatially tracking the hand proved near impossible. I made attempts at it using structural similarity and color-profile-based searching in the image, but the results were less than ideal. A rough sketch of the preprocessing is below. Now I am working towards fully implementing our pieces of the project and debugging.
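
The sketch below illustrates the approach, assuming OpenCV; the box coordinates and threshold values are placeholders, not the tuned ones:

```python
# Rough sketch: background subtraction plus a fixed hand region.
# Assumes OpenCV; box coordinates and thresholds are placeholders.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

# Fixed region where the user places their hand (x, y, width, height),
# used instead of spatial hand tracking.
BOX = (50, 50, 224, 224)

def preprocess(frame):
    """Return a thresholded foreground mask cropped to the hand box."""
    x, y, w, h = BOX
    roi = frame[y:y + h, x:x + w]
    mask = subtractor.apply(roi)  # foreground (moving-edge) mask
    # Binary threshold cleans up intermediate gray values in the mask.
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    return mask
```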

Team Status Update for 11/21/2020

This week has involved a lot of work figuring out how to combine the individual components of the project that we are each working on. We think we have figured out a way to integrate the neural network into the web app, so on that front it is just a matter of finishing the training on our final dataset, testing, and then integration; the same goes for AWS integration. Hand tracking is also a crucial component of the project that is still unimplemented, but it is being researched and worked on now. Once we finish our components and combine them I think we will be in a good place, but for now it’s going to be a lot of work to mitigate risks and finish the project.

Aaron’s Status Update for 11/21/2020

This week we met up to figure out how to consolidate our individual components for asli. I did some research into integrating Young’s work on the neural net we are training for gesture recognition into our web app. I think we found a way to do so without too much hassle, so now it’s just a matter of getting it up and running so we can integrate it; a loading sketch is below. I also created two new branches: one for developing hand tracking in the web app, which is what I’m currently working out how to do, and another for a different method of capturing the tablet webcam video stream with lower latency. My next steps are to get both working, with priority on hand tracking since that feature is part of our core functionality. I believe we are on schedule, but we are on a tight deadline.
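
As an illustration of that integration path, loading Young’s saved model inside the web app could look something like the sketch below; this assumes Keras, and the file paths are placeholders:

```python
# Hypothetical sketch of loading the saved classifier in the web app.
# Assumes Keras; file paths are placeholders.
from tensorflow.keras.models import model_from_json

with open("model.json") as f:
    model = model_from_json(f.read())
model.load_weights("model_weights.h5")

def classify(image_batch):
    """Run the gesture classifier on a batch of preprocessed frames."""
    return model.predict(image_batch)
```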

Team Status Update for 11/14/2020

This week was mostly spent preparing for our interim demo. We had a brief meeting to see where we stand after receiving the feedback, and we plan to meet again this weekend to solidify our schedule going forward so we can hopefully build a working prototype of our project and then refine what we have. We have also uploaded the web app code to GitHub so that we can all contribute and start combining our components. In our demo we clarified that we feel a little behind, so our focus will be on getting back on track and mitigating any risks to the project, as well as scheduling next steps for the coming weeks.

Aaron’s Status Update for 11/14/2020

This week was our interim demo, so I spent most of my time making sure my component of the project was up and running to be presented. Apart from that, I created a GitHub repo for the web app and uploaded the code, and we all have access as contributors. I also may have figured out an easier and better way to get the feed from the tablet camera that would reduce latency, so I’ll be looking into that. Next steps are to work on the feedback we received from the interim demo and to start combining our components so we can develop a working prototype.

Team Status Update for 11/7/2020

This week we have been gearing up to combine our individual components into a working demo. We are a little behind on the integration, as we have had to do a lot of learning and troubleshooting, but we seem to be making progress nonetheless. The web app is able to run and be accessed from the Amazon tablet, and it is currently running a basic face tracking algorithm for testing. The goal is to transition to hand tracking and feature extraction and, moving forward from that, to integrate the neural network and get basic recognition working. To mitigate the risk of not having a viable product in the end, we may have to cut features we originally planned on having, such as voice recognition, and perhaps reduce the number of dynamic signs interpreted while we get the static signs working. Hopefully it doesn’t come to that, but we have reached the point where we have to start thinking about these decisions.

Aaron’s Status Update for 11/7/2020

This week, after completing the tutorial to get the tablet camera running with OpenCV in the Django app, I cleaned up the code and got it running on the tablet. I tested this using a basic face tracking library, as seen in the photo in this post; a sketch of the streaming setup is below. I also did some research into setting up the feature extraction for hand detection. Next steps involve adding the code to a GitHub repo for easy version tracking and collaboration, adding hand tracking, and improving the app interface. I am a little worried about the time we have left to integrate all the components we are working on, but if we are able to get hand tracking working quickly, I think we should be able to integrate the neural net and get something up and running.
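
A simplified sketch of that setup, streaming OpenCV frames with face detection through a Django view; the view name, camera index, and Haar cascade are placeholders for the library actually used:

```python
# Simplified sketch of the camera feed with basic face detection,
# streamed as MJPEG from a Django view. Names and the camera index
# are placeholders; the actual face tracking library may differ.
import cv2
from django.http import StreamingHttpResponse

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def gen_frames():
    cap = cv2.VideoCapture(0)  # tablet camera index may differ
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Draw a box around each detected face for the test view.
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        _, jpeg = cv2.imencode(".jpg", frame)
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
               + jpeg.tobytes() + b"\r\n")

def video_feed(request):
    return StreamingHttpResponse(
        gen_frames(),
        content_type="multipart/x-mixed-replace; boundary=frame")
```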