After presenting the project proposal on Monday, my group and I reflected on the questions and feedback we received and prepared to start our individual parts of the project. I began doing further research into the machine learning algorithm that will recognize ASL gestures. Since my teammate will be processing our datasets with OpenCV, I will start with publicly available datasets that provide preprocessed images for sign language recognition, such as the ASL Alphabet dataset and Sign Language MNIST, both on Kaggle.

Since we decided to use TensorFlow and Keras, I looked into how existing projects use these libraries. For training the neural network, I learned that convolutional neural networks (CNNs) are the common choice for classifying static sign images, while recurrent neural networks (RNNs) and 3D CNNs are typically used for spatiotemporal data, i.e., video of gestures rather than single frames. Hybrid models that combine a CNN for spatial features with an RNN for temporal dynamics might also be a good approach.
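To get a feel for the CNN approach, here is a minimal Keras sketch of the kind of model I have in mind for the Sign Language MNIST data. The file name `sign_mnist_train.csv` and its layout (label in the first column, 784 pixel values after it) are assumptions based on the Kaggle dataset description; we haven't verified them in code yet, and the architecture itself is just a starting point, not our final design.

```python
# Minimal CNN sketch for Sign Language MNIST (28x28 grayscale images,
# letter labels 0-24; J and Z are excluded because they involve motion).
# The file name and column layout below are assumptions from the Kaggle
# dataset page, not something we have tested yet.
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers, models

# Load the CSV: column 0 is the label, the remaining 784 columns are pixels.
train = pd.read_csv("sign_mnist_train.csv")
labels = train["label"].to_numpy()
images = train.drop(columns=["label"]).to_numpy().reshape(-1, 28, 28, 1)
images = images.astype("float32") / 255.0  # scale pixel values to [0, 1]

# Two conv/pool blocks followed by a small dense classifier.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(25, activation="softmax"),  # labels run 0-24 (J is skipped)
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Short training run just to confirm the pipeline works end to end.
model.fit(images, labels, epochs=5, validation_split=0.1)
```

If this baseline trains reasonably well, it should give us a benchmark to compare against more elaborate architectures like the 3D CNN or hybrid CNN-RNN models mentioned above.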
Our progress is on track relative to our schedule. During the next week, Ran and I will begin preparing a dataset. We will also set aside some time to learn ASL so we can collect some data of our own. I also hope to research neural network architectures further and decide which ones best fit our task.