While trying to speed up inference, I realized that we are not actually using CUDA, because torch.cuda.is_available() returns False. I was unable to install a CUDA-enabled build of PyTorch on the Xavier through the terminal. After talking to Tamal, I learned that getting CUDA working on the Jetson Xavier requires the NVIDIA SDK Manager, which we do not have access to, so I am considering switching to the Jetson Nano. We have one now but don't have the cable needed to flash its SD card; I will flash the Nano once we get the cable.
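As a quick diagnostic, a check like the following (a sketch; the guard is just so it also runs where PyTorch is missing) shows whether the installed build can actually see the GPU:

```python
import importlib.util

# Guarded so the script still runs on a machine without PyTorch installed.
if importlib.util.find_spec("torch") is None:
    print("PyTorch is not installed")
else:
    import torch
    # This is the call that returned False on the Xavier.
    print("CUDA available:", torch.cuda.is_available())
    # A "+cpu" suffix in the version string indicates a CPU-only wheel.
    print("PyTorch build:", torch.__version__)
```

On Jetson boards, a CPU-only wheel installed via pip is a common reason this returns False even when the hardware has a GPU.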

Serena and I tested the lidar distances against the inference model, taking the distance to the center of each bounding box. To confirm we were measuring the correct point, I used OpenCV to draw circles on the image at the locations where distances were sampled and visually checked that they landed on the centers of the bounding boxes from inference.

Serena and I are currently working to integrate her centering algorithm, which lets the robot center itself on a bottle target, into the full pipeline. We will test on the robot, adjusting rotation speed based on how many pixels the target is from the center of view. Overall, I believe we are slightly behind schedule on the software. Next week I hope to get CUDA working and to work with Serena on integrating multi-object tracking.
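The pixel-based rotation logic can be sketched as simple proportional control. This is an illustration, not our actual code: the frame width, gain, and deadband values below are assumptions to be tuned on the robot.

```python
FRAME_WIDTH = 640            # assumed camera resolution
CENTER_X = FRAME_WIDTH // 2
GAIN = 0.005                 # rotation speed per pixel of offset (tuning assumption)
DEADBAND = 10                # pixels; treat the target as centered within this band

def rotation_speed(target_x: int) -> float:
    """Return a signed rotation speed from the target's x-coordinate.

    Positive means the target is right of center, so rotate clockwise;
    negative means rotate counterclockwise.
    """
    offset = target_x - CENTER_X
    if abs(offset) <= DEADBAND:
        return 0.0           # close enough to centered; stop rotating
    return GAIN * offset

# Example: a target 100 px right of center yields a positive speed.
print(rotation_speed(420))   # 0.5
```

Scaling the speed with the offset makes the robot turn quickly when the bottle is far off-center and slow down as it lines up, which reduces overshoot compared to a fixed rotation speed.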

