Hinna’s Status Report for 4/23/22

Over this past week, I created tradeoff graphs of model accuracy against the number of epochs, for both training and testing. From these graphs, we identified that the dynamic models are performing very well (93%+ accuracy), most likely because we created our own training data for them. On the other hand, the 1-finger and open-hand models were performing poorly (60-70% accuracy). So, along with my teammates, I made more training data for those models to see whether that would improve their accuracy.
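
For anyone curious, the graphs come from the per-epoch metrics that training produces. A minimal sketch of the plotting step is below, assuming Keras-style History objects; the function and variable names are illustrative, not our actual code.

```python
import matplotlib.pyplot as plt

def plot_tradeoff(history, title):
    """Plot per-epoch training vs. testing accuracy from a Keras History."""
    # history.history is the dict Keras fills in during model.fit();
    # "accuracy"/"val_accuracy" are the default metric keys in tf.keras.
    epochs = range(1, len(history.history["accuracy"]) + 1)
    plt.plot(epochs, history.history["accuracy"], label="training accuracy")
    plt.plot(epochs, history.history["val_accuracy"], label="testing accuracy")
    plt.xlabel("Epochs")
    plt.ylabel("Accuracy")
    plt.title(title)
    plt.legend()
    plt.show()
```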

Additionally, now that the dynamic models are integrated into the webapp, I examined how well they were doing, personally testing them at various angles and distances (within 2 feet) and with both hands to gauge their accuracy. I found that when a sign was done quickly (within one second) the prediction was inaccurate, but when it was done more slowly, the accuracy improved. This finding was also reflected in some of the results from the 2 users who tested the platform on Friday.
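
One likely explanation is that the dynamic models predict over a fixed-length window of frames, so a sign completed in under a second shares the window with idle frames. The sketch below illustrates the idea; the window size, names, and structure are hypothetical, not our exact pipeline.

```python
from collections import deque

import numpy as np

# Hypothetical window size: roughly 1 second of frames at 30 fps.
WINDOW = 30
frame_buffer = deque(maxlen=WINDOW)

def predict_sign(model, landmarks):
    """Buffer the latest frame's features and predict on a full window."""
    frame_buffer.append(landmarks)
    if len(frame_buffer) < WINDOW:
        return None
    # If the sign finishes in well under WINDOW frames, the window mixes
    # sign frames with idle frames, which can degrade the prediction.
    sequence = np.expand_dims(np.array(frame_buffer), axis=0)  # (1, WINDOW, features)
    return model.predict(sequence)
```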

Finally, I have been working with my teammates on the final presentation. I updated our schedule and project management tasks, altered our Solution Approach diagram to account for the number of neural networks we have, adjusted our user requirements to reflect changes made since the design presentation (i.e., we lowered our distance requirement and raised our model accuracy requirement), adjusted the testing/verification charts, and included the tradeoff curves for testing and training accuracy vs. the number of epochs.

Our project overall seems to be on schedule, with a few caveats. One is that we are ahead of schedule on integration, having finished it last week, so our initial plan of integrating until the very end of the semester no longer applies. However, our model accuracy is not yet where it needs to be for every subset of signs, and with only about a week left, we may not be able to get them all to our desired accuracy of 97%, which makes it feel like we are a little behind. Additionally, we held user tests this past week and only 2 users signed up (our overall goal is 10 users), which means our testing is behind schedule.

As for next week, my main focuses will be getting more user tests done, re-generating the tradeoff curves if the additional training data improves our model accuracies, and working on the final report, demo, and video.
