The most significant risk that could currently jeopardize the success of our project is the integration of the machine learning model with the webapp: we need to make sure the user's video input is accurately fed to the model and that the model's prediction is accurately displayed in the webapp. We are managing this risk by starting integration a week earlier than planned, as we want it resolved by the interim demo. As a contingency plan, we will consider alternative methods of analyzing the user input with our model, where a simpler approach may trade some performance for more reliable integration.
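To make the hand-off concrete, below is a minimal sketch of one way the webapp could pass a video frame to the model and return the prediction, assuming a Flask backend and a Keras model. The /predict route, model file name, input size, and label list are illustrative placeholders, not our final design.

```python
# Sketch only: browser posts one base64-encoded JPEG frame; server replies
# with the predicted sign as JSON. Names and shapes below are assumptions.
import base64

import cv2
import numpy as np
from flask import Flask, jsonify, request
from tensorflow import keras

app = Flask(__name__)
model = keras.models.load_model("asl_model.h5")   # hypothetical model file
LABELS = ["a", "b", "c", "3"]                     # hypothetical label subset

@app.route("/predict", methods=["POST"])
def predict():
    # Decode the base64 JPEG frame sent by the webapp.
    buf = np.frombuffer(base64.b64decode(request.json["frame"]), dtype=np.uint8)
    frame = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    # Resize/normalize to the model's assumed training shape (64x64 RGB).
    x = cv2.resize(frame, (64, 64)).astype(np.float32) / 255.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    return jsonify({"sign": LABELS[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(port=5000)
```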
As for changes in our project, while the design has remained largely the same, we realized that some of the training data for certain ASL letters and numbers looks different from traditional ASL, to the point where the model could not recognize us performing those signs. Since the goal of our project is to teach ASL to beginners, we want to make sure our model accurately detects the correct way to sign letters and numbers. We therefore hand-picked the signs that were most inaccurate in the training dataset and created our own training data by recording ourselves performing each one and extracting frames from those videos. The specific letters and numbers were: 3, e, f, m, n, q, and t. While the cost of this change was the added time to create the training data, it will improve the accuracy of our model in the long run. Additionally, since we plan to run external user tests, the fact that we are partially creating the training data ourselves should not affect the results, as different users will be signing into the model.
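For reference, below is a minimal sketch of the frame-extraction step, assuming OpenCV; the file paths and sampling rate are illustrative, not our exact pipeline.

```python
# Sketch only: save every n-th frame of a recording as a JPEG training image.
# Paths and the sampling interval are assumptions for illustration.
import os

import cv2

def extract_frames(video_path: str, out_dir: str, every_n: int = 5) -> int:
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:04d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# One recording per re-collected sign (3, e, f, m, n, q, t).
for sign in ["3", "e", "f", "m", "n", "q", "t"]:
    extract_frames(f"recordings/{sign}.mp4", f"train_data/{sign}")
```

Sampling every few frames rather than every frame keeps the new images from being near-duplicates of each other.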
Our schedule remains mostly the same, except that we will start our ML/webapp integration a week earlier, and this week we have added tasks to create the new training data.