Aishwarya’s Status Report for 4/30/22

This week, I worked on the final presentation with my team, taking pictures and documenting the current state of our project. I added details to our demo poster, such as system block diagrams and a discussion of the overall system structure. I have also been experimenting with further data collection for our neural networks (e.g. observing the effect of learning rate on model accuracy) to see whether these metrics could provide more content for our discussion of quantitative results and tradeoffs.
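For context, a learning-rate experiment like this can be run in a few lines of Keras. The sketch below is illustrative only: `build_model` and the train/test arrays are hypothetical placeholders, not names from our actual codebase.

```python
# Minimal sketch of a learning-rate sweep; build_model, x_train, y_train,
# x_test, y_test are hypothetical placeholders, not our actual code.
import tensorflow as tf

def sweep_learning_rates(build_model, x_train, y_train, x_test, y_test,
                         rates=(1e-2, 1e-3, 1e-4)):
    results = {}
    for lr in rates:
        model = build_model()  # fresh, untrained model for each trial
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x_train, y_train, epochs=20, verbose=0)
        _, acc = model.evaluate(x_test, y_test, verbose=0)
        results[lr] = acc  # test accuracy at this learning rate
    return results
```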

My progress is on schedule. Next week I hope to complete my portion of the video demonstration (explaining the ML models and testing accuracy metrics), as well as my portions of the final design report.

Hinna’s Status Report for 4/30/22

This week, I finished working on the final presentation with my group, adding some walkthroughs of the webapp (attached to this post). I also started on the final poster, where I wrote our product pitch and part of the system description, and worked with my group members on the rest. Additionally, I began writing the final report, adjusting the introduction, user requirements, and project management sections based on changes we have made since the design review.

Our project is on schedule as of now; we mostly need to finish the solution tradeoffs, user testing, and final deliverables. Over the next week, I will work with my team to finish user testing, the final poster, and the final video, and to finalize plans for the final demo. I will also work on analyzing some of our tradeoffs (e.g. epochs vs. accuracy for each of the models).

Valeria’s Status Report for 4/30/22

This week I worked on practicing for our final presentation. Apart from that, I worked with Hinna and Aishwarya on finishing our poster and making our demo video. I have also been working on creating a profile page where users can see which of their signs are best and worst.

My progress is on schedule. For next week, my goal is to finish the audio recordings for the demo video explaining the purpose of our project. Another goal for next week is to finish editing and adding information to the sections I was assigned in the final report.

Team Status Report for 4/30/22

The most significant risk to the success of this project is model tuning: we may not achieve the accuracy we aimed for before the final demo. To mitigate this risk, we are continuing to train our models with additional training data. As for contingency plans, we are going to leave the system as it is, since there is only a week until the demo; when we spoke with Professor Gormley, a machine learning professor at CMU, he also suggested that we not change our neural network structures given the time constraints.

There have been no changes to the existing design of the system or to our schedule.

Hinna’s Status Report for 4/23/22

Over this past week, I created tradeoff graphs from our model accuracy metrics, plotting training and testing accuracy against the number of epochs used. From these graphs, we identified that the dynamic models are performing very well (93%+ accuracy), which most likely has to do with the fact that we had to create our own training data for them. On the other hand, the 1-finger and open-hand models were performing fairly poorly (60-70% accuracy). So, along with my teammates, I created more training data for those models to see if that would improve their accuracy.
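As a rough illustration (not our actual training pipeline), curves like these can be generated straight from a Keras training history; the `model` and data arrays below are placeholder names.

```python
# Sketch of producing accuracy-vs-epochs tradeoff curves from a Keras History;
# `model` is assumed to be compiled with an "accuracy" metric, and the data
# arrays are placeholders.
import matplotlib.pyplot as plt

def plot_accuracy_tradeoff(model, x_train, y_train, x_test, y_test, epochs=50):
    history = model.fit(x_train, y_train,
                        validation_data=(x_test, y_test),
                        epochs=epochs, verbose=0)
    plt.plot(history.history["accuracy"], label="training accuracy")
    plt.plot(history.history["val_accuracy"], label="testing accuracy")
    plt.xlabel("epochs")
    plt.ylabel("accuracy")
    plt.legend()
    plt.savefig("accuracy_vs_epochs.png")
```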

Additionally, now that the dynamic models are integrated into the webapp, I examined how well they perform, personally testing them at various angles and distances (within 2 feet) and with both hands to gauge their accuracy. I found that when a sign is done quickly (within one second), the prediction is inaccurate, but when it is done more slowly, the accuracy improves. This finding was also reflected in some of the results from the 2 users who tested the platform on Friday.

Finally, I have been working with my teammates on the final presentation: I updated our schedule and project management tasks, altered our Solution Approach diagram to account for the number of neural networks we now have, adjusted our user requirements based on changes made since the design presentation (i.e. our distance requirement was lowered and our model accuracy requirement was raised), adjusted the testing/verification charts, and included the tradeoff curves for testing and training accuracy vs. the number of epochs.

Our project overall seems to be on schedule, with a few caveats. We are ahead of schedule on integration, which we finished last week, so our initial plan of integrating until the very end of the semester no longer applies. However, our model accuracy is not yet where it needs to be for every subset of signs, and with only about a week left we may not get all of the models to our desired accuracy of 97%, which makes it feel like we are a little behind. Additionally, we held user tests this past week and only 2 users signed up (our overall goal is 10 users), so our testing is behind schedule.

As for next week, my main focuses will be completing more user tests, finalizing the tradeoff curves once we see whether the additional training data improves our model accuracies, and working on the final report, demo, and video.

Aishwarya’s Status Report for 4/23/22

This week, I completed integrating model execution with the randomized testing feature that Valeria created for the web app. The user proceeds through a set of mixed questions while the models execute on their inputs, so scores accrue in the background and are presented to the user at the end in a scoreboard format. I also resolved last week's bug where the stop action, triggered by the user or the timer, executed repeatedly and prevented the user from making further inputs. Now the user can make multiple attempts at a sign without this bug hindering them.
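As a rough illustration of the score-accrual idea (not the webapp's actual code), the bookkeeping amounts to something like the sketch below, with made-up names.

```python
# Illustrative sketch of background score accrual during a test session;
# the webapp's real implementation differs, and these names are hypothetical.
class ScoreBoard:
    def __init__(self):
        self.results = []  # (sign, correct) pairs in question order

    def record(self, sign, correct):
        # Called after a model finishes executing on the user's input.
        self.results.append((sign, correct))

    def summary(self):
        # Presented to the user at the end of the test.
        total = len(self.results)
        right = sum(1 for _, ok in self.results if ok)
        return {"score": f"{right}/{total}", "detail": self.results}
```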

I also gathered metrics on model training and testing accuracy vs. the number of training epochs. This data will be included in our final presentation next week, and it revealed that some of our models need additional data (created by us) for retraining to improve testing accuracy. Additionally, I conducted user tests with Valeria to obtain feedback about our platform so that we can improve it further before the final demo.

My progress is on schedule. The web app and the models are fully integrated. This next week I will focus on tuning the models and gathering more data (on model testing accuracy and the execution time needed to generate a prediction) for our documentation of results.
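One way to gather the execution-time metric is to average wall-clock time over repeated predictions; the sketch below assumes a loaded Keras `model` and a preprocessed input `sample` (hypothetical names, not our actual code).

```python
# Rough sketch of measuring per-prediction latency; `model` is assumed to be
# a loaded Keras model and `sample` one preprocessed input (hypothetical).
import time
import numpy as np

def time_prediction(model, sample, trials=100):
    batch = np.expand_dims(sample, axis=0)  # predict on a batch of one
    start = time.perf_counter()
    for _ in range(trials):
        model.predict(batch, verbose=0)
    return (time.perf_counter() - start) / trials  # avg seconds per prediction
```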

Valeria’s Status Report for 4/23/22

This week I finished the test module. Users can now pick the topics they want to be tested on and are given 10 random questions. I worked with Aishwarya on the integration, figuring out how to send and store whether each sign was correct in our test module. Apart from that, I finished the results page and can now show users the results of their test in a clear and intuitive way. I also helped create more training data for some of the machine learning models that are giving us accuracy trouble, and helped with the final presentation slides.
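The random question selection boils down to filtering by topic and sampling. Here is a minimal sketch with illustrative names (not our actual view code or schema):

```python
# Minimal sketch of the randomized test generation; `questions` is assumed to
# be a list of dicts with a "topic" key (illustrative, not our actual schema).
import random

def build_test(questions, chosen_topics, n=10):
    pool = [q for q in questions if q["topic"] in chosen_topics]
    return random.sample(pool, min(n, len(pool)))  # up to 10 random questions
```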

I am currently on schedule. Since I am giving the final presentation, I am going to spend some time this next week practicing for it. After that, my next goal is to work on the final poster and the final report, to get those finished before finals week starts.

Team Status Report for 4/23/22

The most significant risk that could currently jeopardize the success of our project is model accuracy. Over the past week, we have been looking at accuracy tradeoffs and have started conducting user tests, and we found that our models cannot accurately detect the dynamic signs when users perform them quickly. To fix this, we are considering recording training data with the signs performed faster, so the models are trained on quicker iterations of each sign. As a contingency plan, we will either tell users to sign slightly slower or keep the models as they are, since we are nearing the end of the semester.

There haven’t been any changes to our system design. As for our schedule, we are extending the user testing weeks all the way up to the demo, since not enough users signed up this past week. Additionally, we plan to collect survey results at the live demo to gather more user feedback for the final report. Also, because the webapp and ML models are now fully integrated, we are shortening the integration task on our schedule by two weeks.

Hinna’s Status Report for 4/16/22

This week, my main focus was user testing and also examining the accuracy of our static and dynamic models.

In regard to user testing, I made a Google Form survey that asks our testers to rate different usability features of the website as well as how helpful they felt it was in teaching them ASL. I also made a step-by-step guide for users to follow when we conduct the testing, which we will use to see how intuitive the steps are and to make sure testers try a variety of actions (e.g. each user signs both correctly and incorrectly on the platform to see the results). Finally, as a TA for the ASL StuCo this semester, I reached out to students who are either semi-experienced or experienced in ASL to participate in our tests. We will also reach out to a few people who are brand new to ASL to get a beginner's perspective on our platform.

As for the models, I have been trying different combinations of training epochs and prediction threshold values (the model only outputs a prediction when its confidence exceeds a cutoff, e.g. 90%) to find the settings that make each model most accurate. In these tests, I identified certain signs that consistently cause trouble across combinations, as well as environmental factors, such as contrasting backgrounds, that can influence the results. Based on this work and the feedback from our weekly meeting, I will continue trying these combinations more systematically, recording model accuracy against epochs and/or threshold values so that we can graph the tradeoffs in our system. The final accuracy data and graphs will be recorded at the end of next week, to account for any training changes we make this week for the signs with consistently inaccurate predictions.
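The threshold idea itself is simple to state in code. This sketch assumes a softmax probability vector `probs` and a parallel `labels` list (illustrative names only, not our model's actual interface):

```python
# Sketch of the prediction threshold: only report a sign when the top softmax
# probability clears the cutoff. `probs` and `labels` are illustrative names.
import numpy as np

def predict_with_threshold(probs, labels, threshold=0.90):
    best = int(np.argmax(probs))
    if probs[best] >= threshold:
        return labels[best]
    return None  # confidence too low: output no prediction
```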

Our project is on schedule at this point; however, our model accuracy is not yet at the 97% we set as a goal at the beginning of the semester. Since we planned to keep adjusting and tuning the models up to the very end of the semester, this is not too big of a deal, but we are going to shift our focus primarily to testing and the final presentation/demo/report. So, while we are on schedule, our final implementation may not be as robust as we had planned.

Next week, I will be conducting user tests along with my teammates, focusing on factors such as hand dominance, hand size, lighting, distance from the camera, and potentially contrasting backgrounds. I will also examine the dynamic models in more depth to identify signs with less successful detections. Additionally, I will record accuracy vs. threshold value and accuracy vs. training epochs, then use that information to make tradeoff curves that we can hopefully include in our final presentation.