Team Status Report for 2/19/22

This week, our team finalized the type of neural network we want to use for generating ASL predictions. We did more research on tools to help with model training (e.g., training in an EC2 instance) and planned out the website UI in more detail. We also worked on creating our database of ASL test data and on the design report.

The most significant risks right now are that our RNN may not meet our requirements for prediction accuracy and execution time. In addition, the RNN will require a large amount of time and data for training; if we increase the number of layers or neurons in an effort to improve prediction accuracy, this could increase training time further. Another risk is inefficient feature extraction. This is critical because we have a large amount of data to format before it can be fed into the neural network.

To manage these risks, we have come up with a contingency plan to use a CNN (which can be fed frames directly). For now, we are not using a CNN because its performance may be much slower than that of an RNN. For feature extraction, we are considering running it in an EC2 instance so that our personal computers' resources are not overwhelmed.

One design change we made was to the groupings of our signs (each group will have its own RNN). Before, we grouped simply by category (number, letter, etc.); now, we are grouping by similarity. This will allow us to more effectively distinguish whether the user is performing a sign correctly and to detect the minute details that may affect this correctness.

There have been no changes to our schedule thus far.

Aishwarya’s Status Report for 2/19/22

This week, I worked with TensorFlow to gain familiarity with how it allows us to instantiate a model, add layers to it, and train it. I also experimented with how we would need to format the data using NumPy so that it can be fed into the model. Feeding dummy data in the form of NumPy arrays to the model, I generated a timing report to see how long the model would take to generate a prediction for every 10 frames processed in MediaPipe (during real-time video processing), so that we could get an idea of how the model's structure impacts execution time.
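
A minimal sketch of this timing experiment is below. The input shape (10 frames of 63 features, i.e., 21 MediaPipe hand landmarks with 3 coordinates each) and the 26-class output are placeholder assumptions for illustration, not our finalized design:

```python
import time
import numpy as np
import tensorflow as tf

# Hypothetical input: 10 frames x 63 features
# (21 MediaPipe hand landmarks x 3 coordinates per frame).
FRAMES, FEATURES = 10, 63

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FRAMES, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(26, activation="softmax"),  # e.g., one class per letter
])

# Dummy batch of one sequence, as the model would see during live video.
dummy = np.random.rand(1, FRAMES, FEATURES).astype(np.float32)

start = time.perf_counter()
pred = model.predict(dummy, verbose=0)
elapsed = time.perf_counter() - start
print(f"prediction shape {pred.shape}, took {elapsed * 1000:.1f} ms")
```

Repeating this measurement while varying the layer sizes gives a rough picture of how model structure trades off against per-prediction latency.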

Our team is also working on creating a test data set of ASL video/image data, so I recorded 5 videos for each of the signs for numbers 0-4 and letters a-m and uploaded them to the Git repo where we are storing them.

The exact network structure that will optimize accuracy and execution time still needs to be determined, and this must be done through some trial and error. We will use at least one LSTM layer followed by a dense layer, but the exact number of hidden layers and the number of neurons per layer will become clear only after we have the chance to measure the performance of the initial model structure and optimize from there.
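
The trial-and-error process could be organized around a small builder like the sketch below, where the number of stacked LSTM layers and units per layer are the knobs to tune. The input shape and class count are illustrative assumptions:

```python
import tensorflow as tf

def build_model(num_classes, hidden_layers=1, units=64,
                frames=10, features=63):
    """Planned structure: at least one LSTM layer followed by a
    dense output layer. hidden_layers and units are the parameters
    we expect to tune by trial and error."""
    layers = [tf.keras.layers.Input(shape=(frames, features))]
    # All but the last LSTM return full sequences so they can stack.
    for i in range(hidden_layers):
        layers.append(tf.keras.layers.LSTM(
            units, return_sequences=(i < hidden_layers - 1)))
    layers.append(tf.keras.layers.Dense(num_classes, activation="softmax"))
    return tf.keras.Sequential(layers)

# Example: two stacked LSTM layers for a 26-class sign group.
model = build_model(num_classes=26, hidden_layers=2)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Sweeping `hidden_layers` and `units` while recording accuracy and prediction latency would let us pick the smallest structure that meets our requirements.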

Our progress is on schedule. Next week, I hope to complete the feature extraction code with my partners (both for real-time video feed and for processing our training data acquired from external sources).

Valeria’s Status Report for 2/19/22

This week I worked on some Figma frames to help represent how our web app is going to look and to get a clear idea of what we want to put in the HTML templates. I also created another GitHub repository to hold all of our images and created folders for each sign. Since we needed to get started this week on building our testing database, I started taking pictures of signs for letters N to Z and numbers 5 to 9. I took 5 pictures for each sign, and you can see a sample of what the pictures looked like here. The main idea was to capture different angles in the photos to help the neural network learn to recognize a sign from any angle. Apart from that, I looked into the possibility of training a neural network inside an EC2 instance, since we found through research that training it on our own computers could potentially make them crash. I found that it is possible, but that we might need a GPU instance, which is something to consider. Lastly, I spent the majority of this week, along with Aishwarya and Hinna, working on the design presentation.

Currently, our progress is mostly on schedule. However, we are slightly behind on the web app because we prioritized machine learning this week. To catch up, I plan to work on the web app next week rather than focusing on machine learning, so that the HTML templates get set up. Luckily, the Figma frames make it much clearer what elements to add to each page, so it shouldn't take me more than a week to finish. For next week, I hope to finish the HTML templates and have all the pages set up with minimal actions, like moving from the home page to a course module. I also hope to continue building the testing database for my assigned signs (N-Z, 5-9) with at least 10 pictures per sign. Apart from that, I will also help finish our design paper.

Team Status Report for 2/12/22

This week, our team gave the proposal presentation. We met twice before the presentation to practice what we were going to say. As part of our preparation, we also created a solution block diagram to visualize the main components of our project, as well as a visualization of the two modes of our web application (training vs. testing). After the proposal presentation, we met on Friday to discuss how we were going to design our machine learning model. We researched the best types of neural networks for labeling both images and videos with a correct prediction, discussed the limitations of convolutional networks, and looked more into recurrent neural networks. We also discussed how we might approach feature extraction (modifying the coordinate points from the hands into a more useful set of distance data). Distance data may give us greater prediction accuracy than raw image inputs, which can suffer interference from background pixels.

Currently, our most significant risk is choosing the wrong type of neural network, as well as having our models not be accurate enough for users. Another potential risk is incorrectly processing our images during feature extraction, which would lead to latency and incorrect predictions. Our current mitigation is to research neural networks thoroughly, but we have decided that, worst case, we would fall back to a convolutional neural network, which would allow us to simply feed the images themselves as inputs, at the cost of lower accuracy and more latency. Lastly, a potential worry is that we need to start training soon while our design is still in progress, so we have firm time constraints to keep in mind.
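
One way to turn hand coordinates into distance data is to compute the pairwise distances between landmarks, which are invariant to where the hand sits in the frame. This is only a sketch of the idea under the assumption of 21 MediaPipe-style (x, y, z) landmarks; it is not our finalized feature extraction:

```python
import numpy as np

def landmark_distances(landmarks):
    """Convert raw (x, y, z) hand-landmark coordinates into the
    pairwise distances between landmarks. `landmarks` is an (N, 3)
    array, e.g., N=21 for MediaPipe Hands. Returns a flat vector of
    the N*(N-1)/2 unique distances."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]  # (N, N, 3)
    dists = np.linalg.norm(diffs, axis=-1)                 # (N, N)
    iu = np.triu_indices(len(landmarks), k=1)              # upper triangle
    return dists[iu]

# 21 landmarks -> 21*20/2 = 210 distance features per frame.
features = landmark_distances(np.random.rand(21, 3))
print(features.shape)  # (210,)
```

Because the distances don't change when the whole hand shifts in the frame, background pixels and hand position stop mattering, which is the motivation for preferring this representation over raw images.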

Valeria’s Status Report for 2/12/22

This week I made the base for our web application. I set up a GitHub repository where the team can access the application, with all the Django files as well as a separate folder for our HTML templates. I also researched the best camera to use for our computer vision work and our machine learning model. From what I found, the best one within our budget is the Logitech C920, since it records at 30 fps. That frame rate will help us when we create the neural network for the moving signs on our platform. Apart from that, I have also been researching neural networks, like the rest of the team, and trying to decide which would be best for our project. From what I am finding so far, it seems that using two different neural networks, one for moving and one for static signs, could help us in the long run.

Our project is currently on schedule. For next week, I hope to finish my research on neural networks and finalize the design for the machine learning part of our project. Furthermore, I hope to get the HTML templates set up with a very basic layout of what the app is going to look like, and to start listing the functionality we might need from AJAX. Apart from that, I will also help finish our design presentation and paper, so I want to finish the design presentation slides by next week.