This week I used OpenCV’s object multitracking with our RealSense livestream to verify that it works, then integrated the multitracking and distance readings into our pipeline. The pipeline now consists of inference, angle calculation to center the robot on the bottle, a second inference pass to capture objects in the new field of view, and the multitracking algorithm. For each frame, we take the distance between the robot and the center of each bounding box returned by the multitracker; currently, I only check the target’s bounding box. If the distance between the robot and the target is below a threshold, the robot moves forward; otherwise, it stops.

I tested this logic and it seems to work, but additional testing is needed to verify it. Because inference takes so long and I am physically holding the camera to the robot, it is hard to tell whether the system is performing properly. If we cannot cut inference time down, I am considering using the RealSense for object detection instead of running inference a second time, to improve pipeline speed.
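For reference, here is a minimal sketch of the per-frame distance check described above. It assumes opencv-contrib-python’s legacy MultiTracker API (the exact module path varies by OpenCV version) and pyrealsense2; the Robot class, the initial box coordinates, and the 1 m threshold are all hypothetical placeholders, not our actual values.

```python
import numpy as np
import pyrealsense2 as rs
import cv2


class Robot:
    """Hypothetical stand-in for the real drive interface."""
    def move_forward(self):
        print("move forward")

    def stop(self):
        print("stop")


STOP_DIST_M = 1.0  # assumed threshold in meters; tune on the real robot

# Start color + depth streams and align depth to the color frame.
pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipe.start(cfg)
align = rs.align(rs.stream.color)

# Boxes (x, y, w, h) would come from the second inference pass; this one
# is a placeholder, and index 0 is assumed to be the target bottle.
initial_boxes = [(200, 150, 80, 160)]

frames = align.process(pipe.wait_for_frames())
color = np.asanyarray(frames.get_color_frame().get_data())

multitracker = cv2.legacy.MultiTracker_create()
for box in initial_boxes:
    multitracker.add(cv2.legacy.TrackerCSRT_create(), color, box)

robot = Robot()
try:
    while True:
        frames = align.process(pipe.wait_for_frames())
        depth = frames.get_depth_frame()
        color = np.asanyarray(frames.get_color_frame().get_data())

        ok, boxes = multitracker.update(color)
        if not ok:
            robot.stop()  # lost track: stop rather than drive blind
            continue

        # Only the target's box (index 0) is checked for now.
        x, y, w, h = boxes[0]
        cx, cy = int(x + w / 2), int(y + h / 2)
        dist_m = depth.get_distance(cx, cy)  # 0.0 means invalid depth

        if 0.0 < dist_m < STOP_DIST_M:
            robot.move_forward()
        else:
            robot.stop()
finally:
    pipe.stop()
```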

I believe I am on schedule. Next week, I hope to get CUDA working and will expand the software pipeline’s logic to account for obstacles.

