This week Serena and I worked out the high-level software diagram for our system. We mapped out the sequence of calculations, from running inference with the machine learning model, to calculating the distance between the robot and the target with the LiDAR camera, to sending angle and distance commands. We also thought through the logical flow for the case where the robot does not detect a bottle.
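
To make the flow concrete, here is a minimal runnable sketch of one iteration of that detect → range → command sequence. The function names, the command format, and the camera field of view are all hypothetical placeholders, not our actual interfaces:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    x_center: float  # normalized [0, 1] horizontal center of the bounding box

CAMERA_HFOV_DEG = 60.0  # assumed horizontal field of view of the camera

def bearing_from_bbox(det: Detection) -> float:
    """Map the bbox center to an angle offset from the camera's optical axis."""
    return (det.x_center - 0.5) * CAMERA_HFOV_DEG

def control_step(det: Optional[Detection], distance_m: Optional[float]) -> str:
    """One iteration of the planned flow: detect -> range -> command."""
    if det is None:
        return "SCAN"  # no bottle detected: rotate and keep searching
    angle = bearing_from_bbox(det)
    return f"MOVE angle={angle:.1f}deg dist={distance_m:.2f}m"

print(control_step(None, None))            # -> SCAN
print(control_step(Detection(0.7), 1.25))  # -> MOVE angle=12.0deg dist=1.25m
```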

I researched open-source datasets to use for model training and validation. I found a suitable dataset of approximately 4000 images containing aluminum cans, glass bottles, plastic bottles, and milk cartons of various sizes, shapes, and orientations. The dataset already contains the annotations needed for YOLOv5 training and inference. I wrote a parser to strip out the annotations of all objects except plastic bottles, since we will only have the single plastic-bottle class. Additionally, the parser divides the dataset into 80% training and 20% validation, where 50% of the images in training and 50% of the images in validation contain a plastic bottle.
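
A sketch of that parsing and splitting logic is below. It assumes the standard YOLO label format (one `.txt` per image with `class x_center y_center width height` lines); the directory layout and the plastic-bottle class id are placeholders for whatever the real dataset uses. It keeps only plastic-bottle boxes (remapped to class 0), balances bottle and no-bottle images so each split is 50% bottles, and then splits each group 80/20:

```python
import random
import shutil
from pathlib import Path

SRC = Path("dataset")        # assumed layout: dataset/images/*.jpg, dataset/labels/*.txt
DST = Path("bottle_dataset")
BOTTLE_ID = 2                # assumed plastic-bottle class id in the source annotations
random.seed(0)

def filter_label(label_path: Path) -> list[str]:
    """Keep only plastic-bottle boxes, remapped to class 0 (our single class)."""
    kept = []
    for line in label_path.read_text().splitlines():
        if not line.strip():
            continue
        cls, *box = line.split()
        if int(cls) == BOTTLE_ID:
            kept.append(" ".join(["0"] + box))
    return kept

# Partition images by whether they still contain a bottle after filtering.
with_bottle, without_bottle = [], []
for img in sorted((SRC / "images").glob("*.jpg")):
    label = SRC / "labels" / (img.stem + ".txt")
    boxes = filter_label(label) if label.exists() else []
    (with_bottle if boxes else without_bottle).append((img, boxes))

# Truncate both groups to the same size so each split is 50% bottle images,
# then split each group 80% train / 20% validation.
n = min(len(with_bottle), len(without_bottle))
for group in (with_bottle, without_bottle):
    random.shuffle(group)
    del group[n:]
    cut = int(0.8 * n)
    for split, items in (("train", group[:cut]), ("val", group[cut:])):
        for img, boxes in items:
            (DST / "images" / split).mkdir(parents=True, exist_ok=True)
            (DST / "labels" / split).mkdir(parents=True, exist_ok=True)
            shutil.copy(img, DST / "images" / split / img.name)
            (DST / "labels" / split / (img.stem + ".txt")).write_text("\n".join(boxes))
```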

I believe I am on schedule. Next week I will start training on this initial dataset with YOLOv5 transfer learning, and I will tune hyperparameters or append to the dataset as needed to achieve over 90% validation accuracy.
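
For reference, the transfer-learning setup would look roughly like the data config below, pointing YOLOv5 at the split produced by the parser and starting from pretrained weights. The file paths, image size, batch size, and epoch count are placeholder values, not final choices:

```yaml
# bottle.yaml -- single-class data config for YOLOv5
# (paths match the parser sketch above; all values are assumptions)
path: bottle_dataset
train: images/train
val: images/val
nc: 1
names: ["plastic_bottle"]

# Training would use the standard YOLOv5 entry point, initialized from
# pretrained weights for transfer learning, e.g.:
#   python train.py --img 640 --batch 16 --epochs 100 --data bottle.yaml --weights yolov5s.pt
```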

