Hinna’s Status Report for 3/26/22

This week, I worked with Aishwarya to test the initial model we have for static signs (1-finger, 2-finger, 3-finger, fist, etc.) and discovered some discrepancies in the training data for the following signs: 3, e, f, m, n, q, t. As a result, I (along with my other two group members) created additional training data for these signs so that we can retrain the model to detect the correct versions of them.

Additionally, I worked on the normal testing data that I had been assigned for this week (letters a-m, numbers 0-4), in accordance with the group schedule. I also began brainstorming ways to choose the highest model prediction: the model’s class probabilities sum to 100% across all possibilities rather than each being scored out of 100 individually. This means that we cannot specify a fixed range of prediction values for deciding the best one, as we previously thought; instead, we will group any movements the model does not recognize into an “other” class to ensure that the near real-time prediction is accurate.
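
A minimal sketch of this idea, assuming the model outputs a softmax probability vector over the known sign classes (the class list and threshold below are placeholders, not our final values):

```python
import numpy as np

# Hypothetical class labels and threshold; the real model groups and cutoff differ.
CLASSES = ["1", "2", "3", "a", "b", "c"]
CONFIDENCE_THRESHOLD = 0.6


def pick_prediction(probabilities: np.ndarray) -> str:
    """Return the most likely sign, or "other" if no class is confident enough.

    `probabilities` is a softmax output, so its entries sum to 1 (i.e. 100%)
    across all classes rather than each class being scored out of 100 on its own.
    """
    best_index = int(np.argmax(probabilities))
    if probabilities[best_index] < CONFIDENCE_THRESHOLD:
        return "other"
    return CLASSES[best_index]


# A prediction spread thinly across classes falls into "other".
print(pick_prediction(np.array([0.20, 0.15, 0.25, 0.20, 0.10, 0.10])))  # -> other
print(pick_prediction(np.array([0.05, 0.05, 0.70, 0.10, 0.05, 0.05])))  # -> 3
```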

Furthermore, for the interim demo, we are brainstorming some final aspects of the webapp, such as the most intuitive way to display feedback to the user and easy-to-understand instructional materials. As part of the instructional materials, I created text blurbs for all 51 signs that specify how to do each sign (along with the video) as well as certain facts about the sign (e.g., “help” is a directional sign, where directing it outwards indicates giving help while directing it towards yourself indicates needing/receiving help).

At the moment, our project is on schedule, with the exception that we are beginning integration a week early and that we have to account for some extra time to create the additional training data from this week.

As for next week, I plan to continue making testing/training data, work with Valeria to integrate the instructional materials into the webapp, and prepare for the interim demo with the rest of my group.

Aishwarya’s Status Report for 3/26/22

I trained models for 4 of our model groups (1-finger, 2-finger, 3-finger, and fist-shaped). While testing these, we noticed some unexpected behavior, particularly with the 3-finger model, and realized that the training dataset had incorrect samples for letters such as M and N. I, along with my other group members, recorded videos to create new data to replace these samples. I wrote a script to extract frames from these videos and store them as jpg images, allowing us to generate a few thousand images for the labels whose samples needed to be replaced. Due to these issues with the datasets, I will need to reformat the training data and retrain some of the models with the newly created samples.
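
A sketch of what such a frame-extraction script can look like, using OpenCV (the paths and sampling rate here are illustrative placeholders, not the exact script):

```python
import cv2
from pathlib import Path


def extract_frames(video_path: str, out_dir: str, every_nth: int = 5) -> int:
    """Save every `every_nth` frame of `video_path` as a jpg in `out_dir`."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    frame_index = saved = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of the video
            break
        if frame_index % every_nth == 0:
            cv2.imwrite(f"{out_dir}/frame_{frame_index:05d}.jpg", frame)
            saved += 1
        frame_index += 1
    capture.release()
    return saved


# e.g. extract_frames("videos/letter_m_take1.mp4", "data/m", every_nth=3)
```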

Our progress is on schedule. During this next week, I hope to integrate the web app video input with the model execution code in preparation for our interim demo. I will also complete re-parsing the data with our new samples for training and retrain the models.

The video linked is a mini-demonstration of one of my models performing real-time predictions.

Team Status Report for 3/26/22

The most significant risk that could currently jeopardize the success of our project is the integration of the machine learning model with the webapp, where we want to make sure the user’s video input is accurately fed to the model and that the model prediction is accurately displayed in the webapp. Currently, this risk is being managed by starting integration a week earlier than planned, as we want to make sure this is resolved by the interim demo. As a contingency plan for this risk, we will have to consider some alternative methods of analyzing the user input with our model, where a simpler approach may trade performance for better integration.

As for changes in our project, while the design has remained largely the same, we realized that some of the data for certain ASL letters and numbers in the training dataset looks different from traditional ASL, to the point where the model was not able to recognize us doing certain signs. As the goal of our project is to teach ASL to beginners, we want to make sure our model accurately detects the correct way to sign letters and numbers. Thus, we handpicked the signs that were most inaccurate in the training dataset and created our own training data by recording ourselves doing the various signs and extracting frames from those videos. The specific letters/numbers were: 3, e, f, m, n, q, t. While the cost of this change was the increased time to make the training data, it will help the accuracy of our model in the long run. Additionally, since we plan to do external user tests, the fact that we are partially creating the training data should not affect the results of our tests, as we will have different users signing into the model.

Our schedule remains mostly the same, except that we will be starting our ML/webapp integration a week earlier and that we have added tasks this week to create some training data.

Valeria’s Status Report for 3/26/22

This week I was able to make the website automatically stop recording the video once 5 seconds have elapsed. I connected all the pages together and can now move from page to page cohesively. Here is the link to watch a walkthrough of our website. This week I also recorded 10 videos for each of the dynamic signs in the conversation and learning categories. Furthermore, I researched how we can send the Blob object that we are creating for the video to our machine learning model, to help with our integration stage.

From the research I did, one possibility is sending the Blob itself to the machine learning model and having it turned into an object URL on the model side. Another idea we found was to automatically store the video locally and have the machine learning model access it from disk. While this would work, it would not be efficient enough for what we want to accomplish; however, we realized that, given our time constraints, this might be our fallback plan.
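
As a rough sketch of how this could look on the server side (assuming a Django-style view, which is an assumption on my part; the names and URL wiring are placeholders), the Blob uploaded via AJAX would arrive as a file, get written to a temporary path, and that path would be handed to the model code:

```python
import tempfile

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

# from recognizer import predict_sign  # placeholder import for the model execution code


@csrf_exempt  # for the sketch only; real code would send the CSRF token with the AJAX call
def predict_view(request):
    """Receive the recorded video Blob (posted as multipart form data) and run the model."""
    video = request.FILES["video"]  # the Blob shows up as an uploaded file
    with tempfile.NamedTemporaryFile(suffix=".webm", delete=False) as tmp:
        for chunk in video.chunks():
            tmp.write(chunk)
        video_path = tmp.name
    # prediction = predict_sign(video_path)  # hand the saved file to the model
    prediction = "placeholder"
    return JsonResponse({"prediction": prediction})
```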

As of right now, my progress is on schedule. For next week, I hope to get the integration between the machine learning model and the website working. I also hope to create another HTML template, with its associated AJAX actions, to calibrate the user’s hands for MediaPipe and to record the user’s hand-dominance preference. Apart from that, I want to get the instructional videos done for the alphabet page.

Aishwarya’s Status Report for 3/19/22

I completed the code to parse the image and video data, passing it through MediaPipe and extracting and formatting the landmark coordinate data. The rough table below shows my initial findings for training and testing accuracy using a dataset for the letters D, I, L, and X with 30 images per letter class. Varying parameters to see how they affected the testing accuracy, the best test accuracy I could achieve was 80.56%. Overall, this seems to be an issue with overfitting (especially since this initial dataset is small).
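
For reference, the landmark-extraction step looks roughly like the following, a simplified sketch rather than the exact parsing code (paths and parameters are illustrative):

```python
import cv2
import mediapipe as mp


def extract_landmarks(image_path: str):
    """Return a flat list of 63 values (21 landmarks x x/y/z), or None if no hand is found."""
    image = cv2.imread(image_path)
    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None  # MediaPipe could not extract landmark data from this image
    hand = results.multi_hand_landmarks[0]
    return [value for lm in hand.landmark for value in (lm.x, lm.y, lm.z)]


# e.g. features = extract_landmarks("data/d/frame_00001.jpg")
```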

Another dataset was found with 3000 images per letter class (though many of these fail to have landmark data extracted by MediaPipe). Using this dataset, overfitting still seemed to be an issue, though the model seems to perform well when testing in real time (I made signs in front of my web camera and found it identified them pretty accurately). During this real-time evaluation, I found that it worked for my left hand. This means I will need to mirror the images the correct way to train the models for right-handed and left-handed signing.
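
The mirroring itself should amount to a horizontal flip of each training image before re-extracting landmarks, along the lines of this sketch (the paths are hypothetical):

```python
import os

import cv2

# Hypothetical paths; flip each right-hand sample to get a left-hand sample (or vice versa).
os.makedirs("data/letter_d_mirrored", exist_ok=True)
image = cv2.imread("data/letter_d/frame_00010.jpg")
mirrored = cv2.flip(image, 1)  # flipCode=1 mirrors around the vertical axis
cv2.imwrite("data/letter_d_mirrored/frame_00010.jpg", mirrored)
```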

My progress is on schedule. To combat issues with overfitting during the next week, I will continue trying to train with a larger dataset, varying parameters, and modifying the model structure. By the end of next week, I hope to have the models trained for each ASL grouping.

Valeria’s Status Report for 3/19/22

This week I was able to finish all of the HTML templates for the web application. Currently, only a couple of the URLs work for moving around the pages and checking the templates, meaning that only the alphabet page and the letter A’s learn/test mode are linked with URLs. Furthermore, I have linked real-time video feedback into the web page and have the user download whatever video clip they record of themselves. The website starts capturing the video once the user presses the “Start Recording” button; once the user finishes doing the sign, they currently need to press the “Stop Recording” button for the video to be saved. Here is the link to a pdf showing the HTML templates that we currently have. Apart from that, this week I have also been helping a little with the machine learning models by helping Aishwarya test them and figure out where things were going wrong. As for my testing database, I have added 10 more images for each of the signs that I have been in charge of for the past few weeks, bringing the total to 30 images for the signs N to Z and 5 to 9.

Currently, my progress is on schedule since I was able to catch up during spring break. My goal for next week is to link the rest of the remaining pages (numbers, conversation, and learning). I also hope to have the program automatically stop recording 5 seconds after the “Start Recording” button is pressed. Apart from that, I also hope to add 10 images for each of the new signs that I have been assigned, i.e. all of the conversational and learning dynamic signs.

Hinna’s Status Report for 3/19/22

This week, I personally worked on making 30 iterations of each of our 15 dynamic, communicative signs. I also went through the WLASL database for dynamic signs and got all the video clips of training data for the 15 signs. In doing this, I realized that a lot of the videos listed in the dataset no longer exist, meaning that we will have to both augment the existing videos to get more data and potentially use the testing data I have made as training data. In addition to working with this data, I have been doing some research into working with the AWS EC2 instance, image classification after landmark identification through MediaPipe, and methods for augmenting data.
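
One simple kind of augmentation we have been considering for the existing videos is a per-frame mirror plus a small brightness shift, roughly like the sketch below (the parameters and codec are placeholders, and mirroring would also double as left-hand/right-hand data):

```python
import cv2


def augment_video(in_path: str, out_path: str, brightness_shift: int = 20) -> None:
    """Write a mirrored, slightly brightened copy of a sign video (out_path should end in .mp4)."""
    capture = cv2.VideoCapture(in_path)
    fps = capture.get(cv2.CAP_PROP_FPS)
    width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frame = cv2.flip(frame, 1)  # mirror, which also simulates the opposite hand
        frame = cv2.convertScaleAbs(frame, alpha=1.0, beta=brightness_shift)  # lighten slightly
        writer.write(frame)
    capture.release()
    writer.release()


# e.g. augment_video("wlasl/help_001.mp4", "augmented/help_001_flip.mp4")
```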

My progress is currently on schedule; however, in deciding that we will also need to create training data for the dynamic signs, we have some new tasks to add, for which I will be primarily responsible. In order to catch up on this, I will be putting my testing data creation on hold to prioritize the dynamic sign training data.

In the next week, I plan to have 50 videos of training data for each of the 15 dynamic signs, where the 50 will be a combination of data I have created, data from WLASL, and augmented videos. Additionally, I plan to help Aishwarya with model training and work on the instructional web application materials.

Team Status Report for 3/19/22

Currently, the most significant risks to our project are the machine learning models for the different groups of signs. Specifically, some of the datasets we found for training data are not being picked up well by MediaPipe or are not of good enough quality, so we are running into some issues with training the models. To mitigate these risks, we are looking for new datasets – particularly for the letter and number signs – and will potentially make our own training data for the dynamic signs, as these have the fewest datasets available online. As for contingency plans, if we are unable to find a good enough dataset that works well with MediaPipe, we might forgo MediaPipe and create our own CNN for processing the image/video data.

There have not been any major changes to our system design over this past week. One potential change we have been discussing is the grouping of signs over the various neural networks, where we might now separate static and dynamic signs rather than dividing signs by hand shape. This is partially because our static signs are one-handed with image training data, whereas many of our dynamic signs are two-handed with video training data. This change would also make classification for static signs easier, as we can limit the number of hands detected in frame. There aren’t really any costs incurred by this change, as we had not yet made models separated by hand shape.

Our schedule has also not really changed, but we will be allocating some extra time to make the dynamic sign training data since we initially did not anticipate needing to do this.