Malcolm’s Status Update for 11/21/20

This week I have been playing a lot of catch-up on my part of the project: data management. One of our big challenges is training the neural network efficiently, so having the data in a place that supports this is of major importance. With that in mind, we are using AWS to host both the data and the network, and I have made progress toward building up the repositories and systems we need to host both services. There are still parts I am unclear on and need to figure out. Otherwise, I need to keep studying possible architectures for our neural network, since that is still a large part of the project that needs to be implemented. Our plan is to work together over the coming weeks to put our individual parts of the project together, then use them to configure and finish designing the network, as well as train it completely.
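To make the data-hosting piece concrete, here is a minimal sketch of the kind of upload script I have in mind, assuming we keep the training data in an S3 bucket and use boto3; the bucket name and key prefix below are placeholders, not our actual resources.

```python
# Minimal sketch: push local training data into S3 with boto3.
# Bucket name and prefix are placeholders, not our real resources.
import os
import boto3

s3 = boto3.client("s3")  # credentials come from the usual AWS config/env vars
BUCKET = "asl-project-data"  # placeholder bucket name
PREFIX = "training"          # placeholder key prefix

def upload_dir(local_dir):
    """Upload every file under local_dir to s3://BUCKET/PREFIX/..."""
    for root, _, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            key = f"{PREFIX}/{os.path.relpath(path, local_dir)}"
            s3.upload_file(path, BUCKET, key)
            print(f"uploaded {path} -> s3://{BUCKET}/{key}")

if __name__ == "__main__":
    upload_dir("data/")
```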

Young’s Status Update for 11/14/20

This week I updated the neural network to be a fully learning CNN. I’ve implemented a couple of different CNNs: a mini CNN that takes in 28×28 pixel inputs and outputs a likelihood over the possible classes, and an AlexNet that takes in larger images and has more filters and larger convolutions, with the same output dimensions. I’m currently hitting some confusing syntax bugs in the softmax layer, but as soon as those are fixed I have a sample data set to test the mini CNN on and check that it can learn. For now I’ve decided to use the same network and input dimensions for both static and dynamic images. Afterwards, I’ll begin running the AlexNet on the ASLLVD or Boston RWTH set for static images and resume work on the feature engineering modules.
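As a reference point, here is a rough sketch of what the mini CNN looks like written against the Keras API; the filter counts and NUM_CLASSES below are placeholders rather than our final hyperparameters.

```python
# Rough sketch of the mini CNN: 28x28 inputs, softmax over the sign classes.
# Filter counts and NUM_CLASSES are placeholders, not final settings.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 24  # placeholder, e.g. the static ASL letters

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),           # grayscale 28x28 frames
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # likelihood over classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```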

Team Status Update for 11/14/2020

Most of this week was spent preparing for our interim demo, which took place this week. After receiving the feedback, we had a brief meeting to see where we stand, and we plan to meet again this weekend to solidify our schedule going forward, with the goal of building a working prototype that we can then refine. We have also uploaded the web app code to GitHub so that we can all contribute and start combining our components. In our demo we acknowledged that we feel a little behind, so our focus will be on getting back on track and mitigating any risk to the project, as well as scheduling next steps for the coming weeks.

Aaron’s Status Update for 11/14/2020

This week was our interim demo, so I spent most of my time making sure my component of the project was up and running to be presented. Apart from that, I created a GitHub repo for the web app and uploaded the code, and we all have access as contributors. I also may have figured out an easier and better way to get the feed from the tablet camera that would reduce latency, so I’ll be looking into that. Next steps are to work on the feedback we received from the interim demo and to start combining our components so we can develop a working prototype.

Young’s Status Update for 11/7/20

This week unexpectedly turned out to be very exhausting and difficult for me, so I have not made progress since Monday, but I will work on the project tomorrow and make updates here accordingly.

Malcolm’s Status Update for 11/07/20

This week I spent time working with AWS and getting our data ready to be used for training in the cloud. The majority of my effort went toward understanding the APIs we will be using, as well as the specific services we need in order to employ AWS correctly for our project. Outside of this, I continued researching and preparing ideas for the neural network, since that step will need the most work at this point. We plan to train the network on all of the data in the cloud and to run multiple variants of the network at once to discover the best architecture.
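As a rough illustration of the “multiple variants at once” idea, here is a sketch of a simple sweep over candidate configurations; the build_model helper, the configs, and the synthetic data are all hypothetical stand-ins for whatever we settle on.

```python
# Hypothetical sketch of sweeping over candidate network configurations
# to compare validation accuracy. Configs and data are stand-ins.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(hidden_units, learning_rate, num_classes=24):
    """Build a small dense network from a list of hidden layer sizes."""
    model = models.Sequential([layers.Flatten(input_shape=(28, 28))])
    for units in hidden_units:
        model.add(layers.Dense(units, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

configs = [  # candidate architectures to compare
    {"hidden_units": [128], "learning_rate": 1e-3},
    {"hidden_units": [256, 128], "learning_rate": 1e-3},
    {"hidden_units": [256, 128], "learning_rate": 1e-4},
]

# Synthetic stand-in data just to make the sketch runnable.
x_train = np.random.rand(256, 28, 28).astype("float32")
y_train = np.random.randint(0, 24, size=256)

results = []
for cfg in configs:
    model = build_model(**cfg)
    history = model.fit(x_train, y_train, epochs=2,
                        validation_split=0.1, verbose=0)
    results.append((cfg, max(history.history["val_accuracy"])))

best_cfg, best_acc = max(results, key=lambda r: r[1])
print("best config:", best_cfg, "val accuracy:", best_acc)
```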

Team Status Update for 11/7/2020

This week we have been gearing up to combine our individual components into a working demo. We are a little behind on the integration, as we have had to do a lot of learning and troubleshooting, but we seem to be making progress nonetheless. The web app can run and be accessed from the Amazon tablet, and it is currently running a basic face tracking algorithm for testing. The goal is to transition to hand tracking and feature extraction and, moving forward from that, to integrate the neural network and get basic recognition working. To mitigate the risk of not having a viable product in the end, we may have to cut features we originally planned on having, such as voice recognition, and perhaps reduce the number of dynamic signs interpreted while we get the static signs working. Hopefully it doesn’t come to that, but we are reaching the point where we have to start thinking about these decisions.

Aaron’s Status Update for 11/7/2020

This week, after completing the tutorial to get the tablet camera running with OpenCV in the Django app, I cleaned up the code and got it running on the tablet. I tested this using a basic face tracking library, as seen in the photo on this post. I also did some research into setting up the feature extraction for hand detection. Next steps involve adding the code to a GitHub repo for easy version tracking and collaboration, adding hand tracking, and improving the app interface. I am a little worried about the time we have left to integrate all the components we are working on, but if we can get hand tracking working quickly, I think we should be able to integrate the neural net and get something up and running.
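For anyone curious, the face tracking test boils down to something like the snippet below: a minimal standalone sketch using OpenCV’s stock Haar cascade, not the exact code in our Django view (there the frames come from the tablet feed rather than a local webcam).

```python
# Minimal standalone sketch of the face tracking test with OpenCV.
# In the real app the frames come from the tablet camera via Django.
import cv2

# Haar cascade that ships with the OpenCV distribution
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)  # 0 = default local camera for testing
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("face tracking test", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```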

Young’s Status Update for 10/31/2020

I got more work done this week on designing the neural network for general inputs of size m by n. As of now we have a neural network without convolutional layers: two Dense layers with ReLU activations, the Adam optimizer, and a multinomial cross-entropy loss. This is the base network we will use for both static signs and dynamic signs, although the specific hyperparameters and layers will have to differ based on the performance we observe on the two classes. I’ve implemented the skeleton of the training process using TensorFlow’s dataflow graph model and will spend the rest of the week figuring out how to add convolutional layers and returning to the feature extraction step to begin the dynamic gesture classification process.
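In Keras terms, the base network described above comes out roughly like this; the input dimensions and class count are placeholders, and this is only an illustrative sketch rather than the training skeleton itself.

```python
# Rough sketch of the base network: two Dense/ReLU layers, Adam optimizer,
# multinomial (categorical) cross-entropy. All sizes are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

M, N = 28, 28        # placeholder input dimensions (m by n)
NUM_CLASSES = 24     # placeholder number of sign classes

model = models.Sequential([
    layers.Flatten(input_shape=(M, N)),          # flatten the m x n input
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",   # multinomial cross-entropy
              metrics=["accuracy"])
```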