Aditi’s Status Update 12/3

In my last update I was having issues running the Django webserver. Since then I have debugged the issue and can now run the webserver, capture images, and run inference on them. That being said, due to implementation choices, the entire pipeline is not working yet. The CV and Web modules are being reworked to operate independently of each other so that this will no longer be an issue.

This week I measured some metrics on the Jetson, such as YOLOv5 inference time and image capture time. I have also been experimenting with various parameters on the Nano to see whether they affect inference time and image capture. This includes running inference on the CPU versus the GPU and capturing images of different dimensions. Interestingly, using larger dimensions did not change the inference time. I am still debugging, but I am also testing whether running inference with NVIDIA DeepStream will speed it up. I am planning to look into differences in inference time between batch processing and single-image inference as well.

I also spent time planning which tasks we needed to finish before the presentation and report, and assigning a task to each team member. In the week prior to Thanksgiving, the team had spent many hours restructuring our implementation to have a more modular code structure and a more cohesive story.
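As a rough illustration of the CPU vs. GPU timing comparison described above, here is a minimal sketch using the standard Ultralytics YOLOv5 torch.hub interface. The dummy frame and run counts are placeholders, not the actual capture code on our Nano:

```python
import time
import numpy as np
import torch

# Load a pretrained YOLOv5s model via torch.hub (downloads weights on first run).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Dummy 1280x720 frame standing in for a captured camera image.
img = np.zeros((720, 1280, 3), dtype=np.uint8)

def time_inference(model, img, device, runs=20):
    """Average single-image inference time on the given device."""
    model.to(device)
    model(img)  # warm-up so one-time setup (e.g. CUDA init) is not counted
    start = time.perf_counter()
    for _ in range(runs):
        model(img)
    return (time.perf_counter() - start) / runs

print(f"CPU: {time_inference(model, img, 'cpu') * 1000:.1f} ms per image")
if torch.cuda.is_available():
    print(f"GPU: {time_inference(model, img, 'cuda') * 1000:.1f} ms per image")
```

The same harness could be extended to the planned batch experiment by passing a list of frames (e.g. `model([img] * 8)`) and dividing the elapsed time by the batch size.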
