The goal of this past week has obviously been to work on our product to showcase at the interim demo. I’ve been polishing my parts to make sure that the CV algorithm can process videos and output bounding boxes for the people it detects, and that the wait time algorithm produces reasonable results.

I’ve also been putting my code on Replit, since we’ll be using Replit to run our integrated system. I’ve had a little trouble with it, though: Replit doesn’t support cv2, which is confusing because cv2 is one of the packages it provides, so I’ll use Google Colab for the interim demo instead. Another concern of mine is that I’m not sure how my code will run on the Jetson Nano. This is a concern right now because our Jetson Nano arrived late, and we had to replace it with another board after the original experienced technical issues. My guess is that there will be more adjustments to my code once we start integrating it onto the Jetson Nano. For now, I can add an NVIDIA Tesla K80 GPU to my Repl and run the code there to monitor the performance of our program.
Another update on integrating my code onto the Jetson Nano: our teammate resolved the issues on the hardware end and managed to clone the code from my Git repo. We should make even more progress after the interim demo, as we’ve figured out how to put these two components together.
I’ve been running the code on images and videos; the results from the Google Colab runs are available on GitHub and on Google Drive, here specifically, and here if you are curious about the full code and the results from extra runs.
I believe my individual progress is on schedule. I should be able to showcase the MVP of the software algorithms at the interim demo on Monday (04/03/2023). After the interim demo, I plan to keep optimizing the program’s performance in combination with the Jetson Nano.
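Since the post-demo plan is performance tuning, one thing worth measuring from the start is per-frame throughput, both on the K80 Repl and later on the Jetson Nano. A minimal sketch of that measurement, where `process_frame` is a placeholder for whatever detector we end up running:

```python
import time


def frames_per_second(process_frame, frames):
    """Time a per-frame processing function over a sequence of frames
    and return the average throughput in frames per second."""
    start = time.perf_counter()
    for frame in frames:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    # Guard against a zero-length timing window on trivial workloads.
    return len(frames) / elapsed if elapsed > 0 else float("inf")
```

Comparing this number before and after each optimization should make it easy to tell whether a change actually helps on the Nano.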