This week, we worked as a team to get the robot structure built. Meghana and I ordered and picked up the rest of the wood from Home Depot, so we were able to begin constructing the frame on the robot. We cut the anchor piece that attaches to the top of the iRobot along with the two arms that will hold the intake mechanism, then screwed the outer structure onto the iRobot and verified that it is not too heavy for the iRobot to move. We also mounted the intake mechanism to the ends of the arms and ran the attached motor to confirm that the mounted system spins. It does, but we observed that the motor's maximum RPM is not enough for our needs.

Additionally, Mae and I took the robot out to Techspark to benchmark its bottle detection on surfaces with higher contrast against the bottles. Since Mae figured out how to SSH into the Jetson, development is now much more mobile, but we also discovered that the battery pack I ordered cannot supply consistent power to the Jetson, which causes it to shut down frequently; we will research further into why the battery malfunctions. During testing we found that the maximum detection distance is around 0.5 meters, so we will have to figure out how to double that range to 1 meter.

 

I discovered the motor's insufficient RPM earlier this week while testing it by driving it directly from a 12V battery: its maximum speed is only 105 RPM. As a result, I placed an order for a stronger motor and will see if a maximum of 900 RPM will do the job, which I am hoping it will. On the software side, I incorporated inference into the object tracking script by sampling a frame every 10 seconds and running inference on that saved frame. While this pauses the video stream, I believe running inference periodically will keep the tracked object valid and relevant, so the algorithm will not lose track of it as easily. I also wrote a basic script that checks when a tracked object's center point falls within a small margin of the frame's vertical center line, which is how I intend to implement the angle calculations for robot navigation.
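
To make the idea concrete, here is a minimal sketch of what the periodic re-inference loop plus the centering check could look like. It assumes a pretrained YOLOv5 model loaded through torch.hub and an OpenCV CSRT tracker; the model, tracker choice, and all parameter values are illustrative stand-ins, not our actual script.

```python
import time
import cv2
import torch

# Sketch of periodic re-inference during object tracking; the names and
# thresholds here are hypothetical stand-ins for our real script.
REINFER_PERIOD_S = 10.0   # refresh the detection every 10 seconds
CENTER_MARGIN_PX = 20     # tolerance for the "centered" check

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
cap = cv2.VideoCapture(0)
tracker, last_infer = None, 0.0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    if tracker is None or time.time() - last_infer >= REINFER_PERIOD_S:
        # Pause the stream and re-detect so the tracked box stays valid.
        results = model(frame[..., ::-1])  # model expects RGB, frame is BGR
        bottles = [d for d in results.xyxy[0].tolist()
                   if results.names[int(d[5])] == 'bottle']
        if bottles:
            x1, y1, x2, y2 = bottles[0][:4]  # take the first bottle for now
            tracker = cv2.TrackerCSRT_create()
            tracker.init(frame, (int(x1), int(y1),
                                 int(x2 - x1), int(y2 - y1)))
        last_infer = time.time()

    if tracker is not None:
        ok, (x, y, w, h) = tracker.update(frame)
        if ok:
            # Angle check: is the box's center within a small margin of
            # the frame's vertical center line?
            if abs((x + w / 2) - frame.shape[1] / 2) < CENTER_MARGIN_PX:
                print('target centered; no turn needed')
```

The trade-off is exactly the one noted above: each re-detection stalls the stream briefly, but it corrects tracker drift before the box wanders off the bottle.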

 

Next week I intend to test the efficacy of running inference during object tracking. I will also work on running inference once before tracking begins, selecting a bottle from the yolov5-detected bounding boxes, and initializing the tracker from that box, since as of now we still select the object to track manually at the start of the script. I believe the software side still needs some acceleration: through testing, Mae and I have discovered that the robot's range of vision is not sufficient, and we are thinking through ways to improve our training dataset or to change the robot's procedure so that it searches for any object within a 1 m distance and drives closer to a bottle until the detection algorithm can recognize it.
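
The procedure change is still just an idea, but roughly it could look like the sketch below. Everything here is a hypothetical placeholder: `nearest_object_within`, `bottle_detected`, `drive_forward`, and `rotate` stand in for our detection pipeline and the iRobot drive interface, and the step sizes are made up.

```python
from dataclasses import dataclass
from typing import Optional

DETECT_RANGE_M = 0.5   # current reliable bottle-detection range
SEARCH_RANGE_M = 1.0   # range we want the search to cover
STEP_M = 0.1           # approach in small increments

@dataclass
class Obstacle:
    distance: float    # meters to the nearest object in view

def nearest_object_within(limit_m: float) -> Optional[Obstacle]:
    raise NotImplementedError  # placeholder: generic obstacle sensing

def bottle_detected() -> bool:
    raise NotImplementedError  # placeholder: yolov5 bottle check

def drive_forward(meters: float) -> None:
    raise NotImplementedError  # placeholder: iRobot drive command

def rotate(degrees: float) -> None:
    raise NotImplementedError  # placeholder: iRobot turn command

def search_and_approach() -> None:
    while True:
        obj = nearest_object_within(SEARCH_RANGE_M)
        if obj is None:
            rotate(30)                      # nothing nearby; keep scanning
            continue
        while obj.distance > DETECT_RANGE_M:
            drive_forward(STEP_M)           # close the gap until the
            obj = nearest_object_within(SEARCH_RANGE_M)  # detector can see it
        if bottle_detected():
            return                          # confirmed bottle; hand off
        rotate(30)                          # not a bottle; resume searching
```

This would let the robot compensate for the 0.5 m detection limit by physically closing the distance, rather than relying on the detector alone to cover the full 1 m.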

