This week, I personally worked on recording 30 iterations of each of our 15 dynamic, communicative signs. I also went through the WLASL dataset for dynamic signs and collected all of the available training video clips for those 15 signs. In doing so, I found that many of the videos listed in the dataset no longer exist, meaning that we will have to both augment the existing videos to get more data and potentially repurpose the testing data I have recorded as training data. Beyond this data work, I have been researching how to work with the AWS EC2 instance, how to classify signs after landmark identification through MediaPipe, and methods for augmenting data.
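As a concrete illustration of the MediaPipe landmark step I have been researching, below is a minimal sketch of pulling per-frame hand landmarks out of a sign video. This is an assumption about how we might wire things up, not our finalized pipeline: the extract_landmarks helper is hypothetical, and it uses MediaPipe's legacy solutions API with OpenCV handling frame decoding.

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def extract_landmarks(video_path):
    # Hypothetical helper: returns one entry per frame, each holding a list
    # of 21 (x, y, z) landmark tuples for every detected hand.
    per_frame = []
    cap = cv2.VideoCapture(video_path)
    with mp_hands.Hands(static_image_mode=False, max_num_hands=2) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            hands_in_frame = []
            if result.multi_hand_landmarks:
                for hand in result.multi_hand_landmarks:
                    hands_in_frame.append([(lm.x, lm.y, lm.z) for lm in hand.landmark])
            per_frame.append(hands_in_frame)
    cap.release()
    return per_frame

Under this approach, the landmark sequences, rather than raw pixels, would be what the classifier sees for each dynamic sign.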
My progress is currently on schedule; however, the decision that we also need to create training data for the dynamic signs adds some new tasks, which I will be primarily responsible for. To absorb this, I will put my testing data creation on hold and prioritize the dynamic sign training data.
In the next week, I plan to have 50 training videos for each of the 15 dynamic signs, where each set of 50 will be a combination of data I have recorded, data from WLASL, and augmented videos (a rough augmentation sketch follows). Additionally, I plan to help Aishwarya with model training and to work on the instructional web application materials.
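For the augmented portion of those 50 videos, below is a rough sketch of the kind of augmentation I have in mind, using only OpenCV; the augment_video helper and its parameters are placeholders, not settled choices. One transform is sampled per clip so the frames stay temporally consistent, and horizontal flips are avoided since mirroring a sign swaps handedness.

import random
import cv2
import numpy as np

def augment_video(in_path, out_path, max_angle=10.0, max_brightness=0.2):
    # Placeholder helper: writes a lightly rotated, brightness-jittered copy.
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    # Sample one rotation angle and brightness gain for the whole clip.
    angle = random.uniform(-max_angle, max_angle)
    gain = 1.0 + random.uniform(-max_brightness, max_brightness)
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.warpAffine(frame, rot, (w, h))
        frame = np.clip(frame.astype(np.float32) * gain, 0, 255).astype(np.uint8)
        writer.write(frame)
    cap.release()
    writer.release()

Running this a few times per source clip with different random draws would let each WLASL or self-recorded video contribute several training examples.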