Valeria’s Status Report for 4/30/22

This week I worked on practicing for our final presentation. Apart from that, I worked with Hinna and Aishwarya to finish up our poster and record our demo video. I have also been working on a profile page where users can see which signs they perform best and worst.

My progress is on schedule. For next week, my goal is to finish recording the audio for the demo video that explains the purpose of our project. Another goal for next week is to finish editing and adding information to the sections I was assigned for the final report.

Valeria’s Status Report for 4/23/22

This week I finished the test module. Users can now pick the topics they want to be tested on, and they are given 10 random questions drawn from those topics. I worked with Aishwarya on the integration piece: sending whether each sign was performed correctly into our test module and storing it there. Apart from that, I finished the results page and can now show users the results of their test in a clear and intuitive way. I also helped record more training data for some of the machine learning models we are having accuracy trouble with, and helped put together the final presentation slides.
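
As a rough illustration of that flow (not our exact code; the `Question` and `TestResult` models and their fields are placeholders I'm using for the example), the random-question selection and result storage in Django might look like:

```python
# views.py -- hypothetical sketch of the test-module flow
import random

from django.http import JsonResponse

from .models import Question, TestResult  # assumed app models


def start_test(request):
    # Topics the user checked on the test setup page
    topics = request.POST.getlist("topics")
    pool = list(Question.objects.filter(topic__in=topics))
    # Draw 10 distinct questions at random from the selected topics
    chosen = random.sample(pool, k=min(10, len(pool)))
    request.session["question_ids"] = [q.id for q in chosen]
    return JsonResponse({"questions": [q.prompt for q in chosen]})


def record_result(request):
    # Called once the ML model has judged the user's sign
    TestResult.objects.create(
        user=request.user,
        question_id=request.POST["question_id"],
        correct=request.POST["correct"] == "true",
    )
    return JsonResponse({"saved": True})
```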

I am currently on schedule. For next week, I am giving the final presentation, so I am going to spend some time this week practicing for it. After that, my next goal is to work on the final poster and the final report so they are finished before finals week starts.

Valeria’s Status Report for 4/16/22

This week I worked with Aishwarya to integrate the machine learning model with the web application and display the results on screen for the user. We accomplished this and have now moved on to refining how the results are shown to the user. Apart from that, I have also been working on a test module that lets users pick which kinds of signs they want to be tested on. As of now, I have the model and the HTML template done. I'm currently trying to figure out how to send the data from the HTML to views.py through a POST request. I have also added the rest of the tips for the signs, so those can now be viewed as well.
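
For context, the general Django pattern I'm working toward looks roughly like the sketch below (a minimal sketch only; the template name, URL name, and field names are placeholders rather than our actual code):

```python
# views.py -- minimal sketch of receiving form data via POST
from django.shortcuts import redirect, render


def test_setup(request):
    if request.method == "POST":
        # getlist() collects every checked <input name="topics"> checkbox
        selected = request.POST.getlist("topics")
        request.session["topics"] = selected
        return redirect("test")  # assumed URL name
    return render(request, "test_setup.html")
```

On the HTML side, the form just needs method="post", a {% csrf_token %} tag, and inputs whose name attributes match what the view reads.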

My progress is on schedule. For next week, I hope to finish the test module and figure out how to send that POST request data. Apart from that, my other goal is to get the slides done for the final presentation and to help Hinna and Aishwarya with whatever else we need to wrap up before the presentation.

Valeria’s Status Report for 4/10/22

This week I worked on showing the tips for how to make each sign, with the displayed tip rotating every 5 seconds. I also made the web application more cohesive by giving the course and module boxes the same color and font. Here is a video showing how the current web application looks. Apart from that, I recorded 15 videos for each dynamic sign to build up our training data. I also started looking into how to create an AJAX model to save the results of each test that a user takes.

Currently, my progress is on schedule with regard to the web application. However, I ran into a problem earlier this week when trying to test the model accuracy: to do so, I need to run the program locally, and TensorFlow does not play well with the Mac M1 chip. I searched Stack Overflow and other websites for possible fixes and spent the majority of my week on this, but none of the solutions worked for me, so my progress on testing the model accuracy is behind. To work around this, the team and I decided that I will use their computers to test the model accuracy when we meet in person, so that I can stay involved in the process.

For next week, I have three main goals that I want to accomplish. First, I want to finish creating the AJAX model for testing and figure out a way to send a random assortment of questions when a user wants to be tested on one topic, e.g. learning, alphabet, etc. Second, I want to change the video capture from using a Blob to using OpenCV. Third, I want to add the rest of the tips for the signs, e.g. conversation, learning, and numbers.
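
For the Blob-to-OpenCV switch, the capture loop might look something like this (an illustrative sketch only; it assumes the default webcam at 30 fps and that the frames then get handed to the model):

```python
# Hedged sketch: grabbing ~5 seconds of webcam frames with OpenCV
import cv2


def capture_clip(seconds=5, fps=30):
    cap = cv2.VideoCapture(0)  # default webcam
    frames = []
    for _ in range(seconds * fps):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames  # e.g. pass these to MediaPipe / the classifier
```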

Valeria’s Status Report for 4/2/22

This week I created and inserted the instructional videos for the signs. There was a minor setback, since we found that Chrome does not play .MOV files, so I had to convert all of the videos from .MOV to .MP4 for them to show up in Chrome. Apart from that, I am now saving the user's hand dominance after they register and log in. Originally, I thought I could determine the user's hand dominance from MediaPipe data. However, after discussing it further with the rest of the team, we concluded that having the user explicitly state their hand dominance the first time they visit the website would be easier. I wasn't able to do much else this week since I had a midterm exam for another class and I also contracted COVID.
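
For what it's worth, the conversion is easy to script; a one-off batch pass like the sketch below works (assuming ffmpeg is installed; the directory path is just illustrative):

```python
# One-off batch conversion of .MOV clips to .MP4 (requires ffmpeg on PATH)
import pathlib
import subprocess

for mov in pathlib.Path("static/videos").glob("*.MOV"):  # illustrative path
    mp4 = mov.with_suffix(".mp4")
    # H.264 video + AAC audio plays natively in Chrome
    subprocess.run(
        ["ffmpeg", "-i", str(mov), "-c:v", "libx264", "-c:a", "aac", str(mp4)],
        check=True,
    )
```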

Because I caught COVID, my progress is slightly behind what I anticipated. Originally, I had planned this week to figure out how to notify users whether they did a sign correctly, to help with integration, and to help test the model accuracy. To catch up on those, I am making the user UI less of a priority for now. For next week, my main priority is to test the model accuracy over the weekend and to continue helping with integration. If I'm able to catch up next week, I also hope to add all of the tips that Hinna made for users who are trying to make a sign.

Valeria’s Status Report for 3/26/22

This week I made the website automatically stop recording the video once 5 seconds have elapsed. I also connected all the pages together, so users can now move from page to page cohesively. Here is the link to watch a walkthrough of our website. This week I also recorded 10 videos for each of the dynamic signs, i.e. the conversation and learning categories. Furthermore, I researched how to send the Blob object that we create for the video to our machine learning model, to help with our integration stage.

From my research, one possibility is sending the Blob itself to the machine learning model and turning it into an object URL inside the model. Another idea we found is to automatically store the video locally and have the machine learning model access it from disk. While this would work, it would not be efficient enough for what we want to accomplish; still, given our time constraints, we realized it might be our fallback plan.
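
To make the first option more concrete, the receiving end in Django could look roughly like this (a hedged sketch; `classify_sign` is a stand-in for whatever entry point the model ends up exposing, and the front end is assumed to POST the Blob inside a FormData object):

```python
# views.py -- sketch of receiving the recorded Blob as a file upload
import tempfile

from django.http import JsonResponse

from .ml import classify_sign  # assumed wrapper around our model


def process_sign(request):
    video = request.FILES["video"]  # the Blob the front end uploaded
    # Write it to a temp file so the model can read it like a local video
    with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as f:
        for chunk in video.chunks():
            f.write(chunk)
    label = classify_sign(f.name)
    return JsonResponse({"prediction": label})
```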

As of right now, my progress is on schedule. For next week, I hope to get the integration between the machine learning model and the website working. I also hope to create another HTML template, with its associated AJAX actions, to calibrate the user's hands for MediaPipe and to get the user's preference in hand dominance. Apart from that, I want to get the instructional videos done for the alphabet page.

Valeria’s Status Report for 3/19/22

This week I was able to finish all of the HTML templates for the web application. Currently, only a couple of the URLs work for moving around the pages and checking the templates: only the alphabet page and the letter A's learn/test mode are linked with URLs. Furthermore, I have added real-time video feedback to the web page and let users download whatever video clip they record of themselves. The website starts capturing video once the user presses the "Start Recording" button; for now, the user needs to press the "Stop Recording" button once they finish the sign for the video to be saved. Here is the link to a PDF showing the HTML templates that we currently have. Apart from that, this week I have also been helping a little with the machine learning models by helping Aishwarya test them and figure out where they were going wrong. As for my testing database, I have added 10 more images for each of the signs that I have been in charge of for the past few weeks, leaving a total of 30 images for each of the signs N to Z and 5 to 9.

Currently, my progress is on schedule since I was able to catch up during spring break. My goal for next week is to link the rest of the remaining pages, i.e. numbers, conversation, and learning. I also hope to have the program automatically stop recording 5 seconds after the "Start Recording" button is pressed. Apart from that, I also hope to add 10 images for each of the new signs that I have been assigned, i.e. all of the conversational and learning dynamic signs.

Valeria’s Status Report for 2/26/22

This week I focused more on the web application since we are running slightly behind schedule on it. I looked into the differences between Material UI and Bootstrap to decide which of these front-end frameworks to use for the project. I ended up choosing Bootstrap because it is the one we, as a team, have the most experience with, and its components are easy to build. With that decided, I started working on the HTML templates for our web application. So far I have completed the home page and the course page; here is an image for reference. Most of the data in the HTML templates is dummy data that I plan to replace during the week of spring break to show our actual lesson plans. Apart from this, I have also been working on our design report. As a team, we chose to split up the report and assign parts to each other, so this week I worked on the design requirements and the system implementation for the web application. Lastly, I expanded our testing dataset for letters N-Z and numbers 5-9 by adding 15 images for each sign. I took these pictures under varying lighting conditions so that we can see whether our neural network still predicts the correct labels.

As of now, my progress is slightly behind schedule. I was hoping to have all of the templates ready this week so that when we come back from spring break we would have something to start with; however, I was only able to get one template done. Since I have four midterms to study for next week, I will not have much time to finish all of the templates, so I am going to continue working on them during the week of spring break. For next week, I hope to get one more HTML template done, preferably the training template, i.e. the page that shows our instructional video and real-time video feedback, and to finish our design report since that is a major part of our grade.

Valeria’s Status Report for 2/19/22

This week I worked on Figma frames to represent how our web app is going to look and to give us a clear idea of what to put in the HTML templates. I also created another GitHub repository to hold all of our images and created a folder for each sign. Since we needed to start building our testing database this week, I began taking pictures of the signs for letters N to Z and numbers 5 to 9. I took 5 pictures of each sign, and you can see a sample of what the pictures look like here. The main idea was to photograph each sign from different angles so that the neural network learns to recognize a sign at any angle. Apart from that, I looked into the possibility of training a neural network on an EC2 instance, since we found through research that training it on our own computers could make them crash. I found that it is possible, but we might need a GPU instance, which is something to consider. Lastly, I spent the majority of this week, along with Aishwarya and Hinna, working on the design presentation.

Currently, our progress is mostly on schedule. However, we are slightly behind on the web app since we prioritized machine learning this week. Because of that, I'm planning to work on the web app this coming week and not focus as much on machine learning, so that we have the HTML templates set up. Luckily, the Figma frames help immensely in deciding what elements to add to each page, so it shouldn't take me more than a week to finish. For next week, I hope to finish the HTML templates and have all the pages set up with minimal actions, like moving from the home page to a course module. I also hope to continue building the testing database for my assigned signs (N-Z, 5-9) with at least 10 pictures per sign. Apart from that, I will also be helping finish up our design paper.

Valeria’s Status Report for 2/12/22

This week I built the base of our web application. I set up a GitHub repository for the team to access the application, with all the Django files as well as a separate folder for our HTML templates. I also researched which camera would be best for our computer vision pipeline and our machine learning model. From what I found, the best one within our budget is the Logitech C920, since it records at 30 fps; that frame rate will help when we create the neural network for the moving signs on our platform. Apart from that, I have also been researching neural networks, like the rest of the team, and trying to decide which would be best for our project. From what I have found so far, using two different neural networks, one for moving signs and one for static signs, could help us in the long run.
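
To make that idea concrete, here is a purely illustrative sketch (assuming Keras and MediaPipe hand landmarks as input; the class counts and layer sizes are placeholders, not a final design) of what the two-network split could look like:

```python
# Illustrative only: separate classifiers for static vs. moving signs
from tensorflow import keras
from tensorflow.keras import layers

# Static signs: one hand-landmark vector in, one letter/number out
# (21 MediaPipe landmarks x 3 coordinates = 63 inputs)
static_model = keras.Sequential([
    layers.Input(shape=(63,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(24, activation="softmax"),  # placeholder class count
])

# Moving signs: a short sequence of landmark vectors in, one sign out
dynamic_model = keras.Sequential([
    layers.Input(shape=(30, 63)),  # ~30 frames per clip at 30 fps
    layers.LSTM(64),
    layers.Dense(10, activation="softmax"),  # placeholder class count
])
```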

Our project is currently on schedule. For next week, I hope to finish my research on neural networks and finalize the design of the machine learning part of our project. Furthermore, I hope to get the HTML templates set up, with a very basic layout of what the app is going to look like, and to start listing the functionality we might need from AJAX. Apart from that, I will also be helping finish our design presentation and paper, so I want to have the presentation slides done by next week.