This week we all worked on the design review presentation, and I presented at the design review. With the introduction of obstacles, I modified the dataset so that all non-plastic-bottle items fall into a single “obstacle” class. I retrained the model on the new dataset annotations and got good results: the confusion matrix below shows that the model classifies bottles with 99% accuracy and obstacles with 97% accuracy.
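The relabeling itself boils down to rewriting the class IDs in the YOLO-format label files. Here is a minimal sketch of that step, assuming the labels live under dataset/labels and using hypothetical class IDs (0 = bottle, 1 = obstacle):

from pathlib import Path

# Hypothetical class IDs: 0 = plastic bottle (kept as-is), everything else -> obstacle.
BOTTLE_ID = 0
OBSTACLE_ID = 1
LABEL_DIR = Path("dataset/labels")  # assumed location of the YOLO-format label files

for label_file in LABEL_DIR.rglob("*.txt"):
    remapped = []
    for line in label_file.read_text().splitlines():
        if not line.strip():
            continue
        cls, *coords = line.split()
        # Collapse every non-bottle class into the single "obstacle" class.
        new_cls = BOTTLE_ID if int(cls) == BOTTLE_ID else OBSTACLE_ID
        remapped.append(" ".join([str(new_cls), *coords]))
    label_file.write_text("\n".join(remapped) + "\n")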

I tried changing hyperparameters, such as using a more complex YOLO model (YOLOv5m) and training for more epochs, to see if accuracy would increase, but the results stayed the same. Since the results on the test dataset were satisfactory, I concluded I would stop tuning the model and integrate it onto the Jetson to see how well it can perform real-time inference on our surroundings. If real-time inference does not perform well, I believe I will need to add a wider variety of images to the training dataset.
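For anyone curious, the comparison looks roughly like this. This is only a sketch, assuming a local clone of the ultralytics/yolov5 repo (its train.py exposes a run() helper that mirrors the CLI flags); bottles.yaml and the epoch counts here are placeholders, not our exact settings:

# Run from inside a clone of the ultralytics/yolov5 repo.
import train

# Baseline: smaller YOLOv5s backbone.
train.run(data="bottles.yaml", weights="yolov5s.pt", epochs=100, imgsz=640)

# Variant: larger YOLOv5m backbone and more epochs.
train.run(data="bottles.yaml", weights="yolov5m.pt", epochs=300, imgsz=640)

In our case both runs landed at essentially the same test accuracy, which is what led me to stop tuning and move on to on-device testing.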

I met up with Serena to integrate my trained model on the Jetson. We worked together to write a simple script that captures an image from the Realsense every couple of seconds and runs inference on it, so that we can see how well the model performs in the real-world environment. We didn’t get to test much, but we confirmed that real-time inference works, and we will do more thorough testing to see whether the model needs to be retrained.
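The script is essentially a capture-and-infer loop. A minimal sketch of the idea, assuming pyrealsense2 for the camera, the YOLOv5 torch.hub interface for inference, and a hypothetical best.pt checkpoint:

import time
import numpy as np
import pyrealsense2 as rs
import torch

# Load our custom-trained weights through the YOLOv5 hub interface
# ("best.pt" is an assumed filename for the trained checkpoint).
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

# Start a color stream from the RealSense camera.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        color_frame = frames.get_color_frame()
        if not color_frame:
            continue
        # Convert the frame to a numpy array and run inference on it.
        image = np.asanyarray(color_frame.get_data())
        results = model(np.ascontiguousarray(image[:, :, ::-1]))  # BGR -> RGB
        results.print()  # log detected classes and confidences
        time.sleep(2)  # grab a frame every couple of seconds
finally:
    pipeline.stop()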

Overall, I am on schedule. Next week I hope to test real-time inference more to see whether I need to expand our dataset. I will also be working with Serena to integrate my model’s inference with her depth-perception and object-tracking algorithms.

