This week we all got together to build the structure on top of the iRobot, forming a box that houses the intake. Since our battery pack for the Jetson came in, I figured out how to SSH into the Jetson so it can be tested without being connected to the monitor in the lab. Serena and I tested the maximum distance at which inference could classify bottles in another, less cluttered area and further confirmed that the detection distance needed to be improved, as the robot could not see farther than 0.4 meters.

I researched methods to improve small (i.e., distant) object detection and decided that tiling would be the best way to proceed. This means chopping the original image into smaller, overlapping images, performing inference on each of those smaller images, and then merging the results back onto the original image. When looking up open-source tiling libraries, I found SAHI: Slicing Aided Hyper Inference (https://github.com/obss/sahi), which is compatible with YOLOv5. I experimented with SAHI and was able to get the inference to classify bottles 2 meters away; however, this came at the cost of a high inference time of 20 seconds per frame. I will work on lowering the time needed for inference next week, and I believe that tiling will work if I decrease the resolution of the original image to increase speed.
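For reference, below is a minimal sketch of how sliced inference with SAHI and a YOLOv5 model looks. The weights file, input image, slice size, and overlap ratios are placeholder values I chose for illustration rather than the exact settings from my experiments, and the class and method names follow SAHI's documented API, which may differ slightly between versions.

```python
# Minimal SAHI + YOLOv5 tiling sketch (paths and parameters are illustrative).
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Wrap a YOLOv5 checkpoint so SAHI can run it on each tile.
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5",
    model_path="yolov5s.pt",   # placeholder weights file
    confidence_threshold=0.4,
    device="cuda:0",           # Jetson GPU; use "cpu" if unavailable
)

# Slice the frame into overlapping 256x256 tiles, run inference on each tile,
# and merge the per-tile detections back into full-image coordinates.
result = get_sliced_prediction(
    "frame.jpg",               # placeholder input image
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# Each prediction carries a bounding box in original-image coordinates.
for pred in result.object_prediction_list:
    print(pred.category.name, pred.score.value, pred.bbox.to_xyxy())
```

The slice size and overlap control the speed/accuracy trade-off: fewer, larger tiles (or a downscaled input frame) mean fewer inference passes per frame, which is where I expect most of the speedup to come from.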

I believe that I am behind schedule and will work to speed up my progress. Next week, I hope to get inference to detect bottles at 2 meters at a reasonable speed and begin work on other software algorithms.

