Aishwarya’s status report for 2/12/22

I put together the initial starter code for hand detection and created a Git repo for it so that our team could experiment with what kind of data can be retrieved, its format, and its limits with respect to factors such as distance from the camera and hand movement speed (a minimal sketch of this kind of experiment is included below). This helped us verify that MediaPipe is an appropriate tool for our project. I also collected research for our design documents on different types of neural networks (convolutional, recurrent, etc.) and on how we could format the input data to our NN through feature extraction. This research helped drive our design meetings after the project proposal presentations on Wednesday. In addition, I researched the various tools AWS offers to help streamline the development process, and our team is considering using AWS Amplify and Amazon SageMaker.
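
To give a concrete sense of the starter-code experiment mentioned above, here is a minimal sketch of that kind of script (not our actual repo code; the camera index and confidence thresholds are placeholder values, not settled parameters). It opens a webcam feed, runs MediaPipe hand detection on each frame, and prints one landmark so the data format is visible: each detected hand comes back as 21 landmarks, each with normalized x/y/z coordinates.

    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands

    cap = cv2.VideoCapture(0)  # default webcam; placeholder index
    with mp_hands.Hands(max_num_hands=1,
                        min_detection_confidence=0.5,
                        min_tracking_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures frames in BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    # Each hand has 21 landmarks with normalized coordinates.
                    wrist = hand.landmark[mp_hands.HandLandmark.WRIST]
                    print(f"wrist: x={wrist.x:.3f} y={wrist.y:.3f} z={wrist.z:.3f}")
    cap.release()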

Next week, I hope we can split the different signs among the three of us so that we can begin creating ASL video data. I also hope to finalize the type and structure of our neural network(s) after we complete our research into which approach best balances maximizing prediction accuracy against minimizing model execution time. That will let us make more progress on designing the feature extraction and input-data formatting so that it is compatible with the requirements of the neural network(s); one candidate scheme is sketched below.
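
As one illustration of what the feature-extraction step could look like (a sketch of one candidate scheme only, not a design decision), the 21 landmarks MediaPipe returns per frame could be flattened into a fixed-length vector, with coordinates taken relative to the wrist so the features do not depend on where the hand sits in the frame:

    import numpy as np

    def landmarks_to_features(hand_landmarks):
        """Flatten one hand's 21 MediaPipe landmarks into a 63-dim vector.

        Coordinates are shifted so the wrist sits at the origin, which is
        one (assumed) way to make the features position-invariant."""
        pts = np.array([[lm.x, lm.y, lm.z] for lm in hand_landmarks.landmark],
                       dtype=np.float32)  # shape (21, 3), normalized coords
        pts = pts - pts[0]                # landmark 0 is the wrist
        return pts.flatten()              # shape (63,): one NN input frame

Per-frame vectors like this could then be stacked over time into a sequence for a recurrent network, or into a fixed window for a convolutional one, which is exactly the design question we are working through.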