This week I made the website automatically stop recording video after 5 seconds; a sketch of that logic is below. I also connected all the pages together, so the user can now move from page to page cohesively. Here is the link to watch a walkthrough of our website. In addition, I recorded 10 videos for each of the dynamic signs (e.g., those in the conversation and learning categories). Finally, I researched how we can send the Blob object we create for the video to our machine learning model to help with our integration stage.
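Here is a minimal sketch of the auto-stop behavior, assuming the browser's MediaRecorder API wired to the user's webcam. The function name, the webm format, and the promise wrapping are illustrative rather than our exact code:

```typescript
// Minimal sketch: record a 5-second clip from the webcam, then
// auto-stop and hand back the resulting Blob.
const RECORDING_MS = 5000;

async function recordClip(): Promise<Blob> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];

  // Collect encoded chunks as they become available.
  recorder.ondataavailable = (e: BlobEvent) => chunks.push(e.data);

  const done = new Promise<Blob>((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((t) => t.stop()); // release the camera
      resolve(new Blob(chunks, { type: "video/webm" }));
    };
  });

  recorder.start();
  setTimeout(() => recorder.stop(), RECORDING_MS); // auto-stop at 5 s
  return done;
}
```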
From my research, one option is to send the Blob itself to the machine learning model and have it converted into an object URL on the model's side; a sketch of this option is below. Another idea we found was to store the video locally and have the machine learning model read it from disk. While this would work, it would not be efficient enough for what we want to accomplish; however, given our time constraints, we are keeping it as a fallback plan.
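As a rough illustration of the first option, the recorded Blob could be POSTed to the model's backend with a standard fetch request. The /predict endpoint, the "video" field name, and the returned label are assumptions for the sketch, not our final API:

```typescript
// Hypothetical sketch: send the recorded clip to the model's backend.
async function sendToModel(clip: Blob): Promise<string> {
  const form = new FormData();
  form.append("video", clip, "sign.webm"); // field name is an assumption

  const res = await fetch("/predict", { method: "POST", body: form });
  if (!res.ok) throw new Error(`Model request failed: ${res.status}`);
  return res.json(); // assumed to return the predicted sign label
}
```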
As of right now, my progress is on schedule. Next week, I hope to get the integration between the machine learning model and the website working. I also hope to create another HTML template, with its associated AJAX actions, to calibrate the user's hands for MediaPipe and capture the user's hand-dominance preference (a rough sketch of that action is below). Apart from that, I want to finish the instructional videos for the alphabet page.
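Since this part is still only planned, here is a hypothetical sketch of what the hand-dominance AJAX action might look like; the /preferences route and the JSON payload shape are assumptions, not a finished design:

```typescript
// Hypothetical sketch: save the user's hand-dominance preference.
async function saveHandDominance(hand: "left" | "right"): Promise<void> {
  await fetch("/preferences", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ dominantHand: hand }), // payload shape is assumed
  });
}
```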