Valeria’s Status Report for 2/12/22

This week I built the base of our web application. I set up a GitHub repository where the team can access the application, with all of the Django files as well as a separate folder for our HTML templates. I also researched which camera would work best for our computer vision and machine learning pipeline. From what I found, the best option within our budget is the Logitech C920, since it captures video at 30 fps. That frame rate will help when we build the neural network for the moving signs on our platform. Apart from that, I have also been researching neural networks, like the rest of the team, and trying to decide which kind would be best for our project. So far, it seems that using two different neural networks, one for moving signs and one for static signs, could help us in the long run.
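To make the 30 fps point concrete, here is a rough back-of-the-envelope sketch of the frame budget per sign. The 2-second sign duration and the every-third-frame sampling stride are illustrative assumptions for this sketch, not measured or decided values.

```python
# Rough frame-budget estimate for the Logitech C920 (30 fps).
# The sign duration and sampling stride below are illustrative
# assumptions, not measured values or design decisions.
FPS = 30

def frames_per_sign(duration_s: float, stride: int = 1) -> int:
    """Number of frames a sign of the given duration yields
    when we keep every `stride`-th frame."""
    return int(duration_s * FPS) // stride

# A hypothetical 2-second moving sign, keeping every 3rd frame:
total = frames_per_sign(2.0)           # 60 raw frames
kept = frames_per_sign(2.0, stride=3)  # 20 frames for the model
print(total, kept)
```

Even with subsampling, a moving sign would still give the network a reasonably dense sequence of frames, which is the main reason 30 fps looks sufficient for our purposes.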

Our project is currently on schedule. Next week, I hope to finish my research on neural networks and finalize the design of the machine learning part of our project. I also hope to get the HTML templates set up with a very basic layout of what the app will look like, and to start listing the functionality we might need from AJAX. Apart from that, I will be helping to finish our design presentation and paper, so I want to complete the design presentation slides by next week.

Aishwarya’s Status Report for 2/12/22

I put together the initial starter code for hand detection and created a Git repo for it so that our team could experiment with what kind of data can be retrieved, its format, and its limits with respect to factors such as distance from the camera and hand movement speed. This helped us verify that MediaPipe is an appropriate tool for our project. I also gathered research for our design documents on different types of neural networks (convolutional, recurrent, etc.) and on how we could format the input data to our NN through feature extraction. This drove our design meetings following the project proposal presentations on Wednesday. In addition, I researched the various tools AWS offers to streamline development; our team is considering Amazon Amplify and Amazon SageMaker.
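As one concrete example of the feature-extraction question above: MediaPipe's hand tracker reports 21 (x, y, z) landmarks per detected hand, and a simple first transformation is to re-center them on the wrist so the features do not depend on where the hand sits in the frame. The sketch below uses toy points standing in for real landmarks; re-centering on the wrist is one illustrative choice, not our finalized design.

```python
# Sketch: turning hand landmarks into position-invariant features.
# MediaPipe reports 21 (x, y, z) landmarks per hand; landmark 0 is
# the wrist. Re-centering on the wrist is one illustrative
# feature-extraction choice, not a finalized design decision.
from typing import List, Tuple

Landmark = Tuple[float, float, float]

def wrist_relative(landmarks: List[Landmark]) -> List[Landmark]:
    """Translate all landmarks so the wrist sits at the origin,
    making features invariant to the hand's location in the frame."""
    wx, wy, wz = landmarks[0]
    return [(x - wx, y - wy, z - wz) for (x, y, z) in landmarks]

# Toy example with three points standing in for the 21 real ones:
pts = [(0.5, 0.5, 0.0), (0.6, 0.4, 0.1), (0.7, 0.3, 0.2)]
print(wrist_relative(pts))
```

Transformations like this are what our design meetings are weighing: they decide what the network actually sees, independent of which network architecture we pick.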

Next week, I hope we can split the signs among the three of us so that we can start creating ASL video data. I also hope to finalize the type and structure of our neural network(s) once we complete our research into which approach offers the best trade-off between prediction accuracy and model execution time. That will let us make further progress on designing the feature extraction and input-data formatting so they are compatible with the requirements of the network(s).
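One part of that input-data formatting can be sketched now, since most fixed-size networks need every clip to have the same length. The target length of 20 frames and the 63-value frame size (21 landmarks x 3 coordinates) are assumptions for this sketch only, not finalized parameters.

```python
# Sketch: fitting variable-length landmark sequences to a fixed-size
# network input by truncating long clips and zero-padding short ones.
# target_len=20 and feature_dim=63 (21 landmarks x 3 coords) are
# illustrative assumptions, not finalized design parameters.
from typing import List

def fit_sequence(frames: List[List[float]], target_len: int = 20,
                 feature_dim: int = 63) -> List[List[float]]:
    """Pad with zero-frames or truncate so len(result) == target_len."""
    clipped = frames[:target_len]
    padding = [[0.0] * feature_dim
               for _ in range(target_len - len(clipped))]
    return clipped + padding

# A short 5-frame clip gets padded up to the target length:
short = [[1.0] * 63] * 5
print(len(fit_sequence(short)))
```

Whether we pad, subsample, or use an architecture that handles variable-length input directly is exactly the kind of decision the neural-network research next week should settle.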