This week we focused on finalising the object detection algorithm and porting it into ROS for integration. We had originally run into an issue where the network we were using was too large, consuming so much power and memory on the Jetson that the frame rate was too low to be useful. Upon further research, we realised that neural networks on the Jetson are meant to be deployed with TensorRT rather than vanilla TensorFlow. Armed with this information, we went back to the Jetson development page to look at example networks built with TensorRT and, following the Jetson documentation, converted the object detection module to run on TensorRT. We then ran it on the Jetson Nano with the camera mounted on the car to check that the object detection algorithm was working. Gatorade bottles worked consistently: the algorithm could routinely detect and plan around them. I also worked on wrapping the object detection algorithm in a ROS node so it can integrate and communicate with the other nodes.
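As a rough illustration of the conversion step, the sketch below shows how a TensorFlow 2 SavedModel can be re-optimised with TF-TRT for the Jetson; the model paths, FP16 precision, and workspace size are assumptions for illustration, not our exact settings.

```python
# Minimal TF-TRT conversion sketch, assuming a TensorFlow 2 SavedModel;
# the directories, FP16 precision, and workspace size are assumptions.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16,   # FP16 cuts memory use on the Nano
    max_workspace_size_bytes=1 << 28,           # keep the build workspace modest (256 MB)
)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="detector_saved_model",  # assumed path to the original model
    conversion_params=params,
)
converter.convert()                    # replaces supported subgraphs with TensorRT engines
converter.save("detector_trt_fp16")    # optimised SavedModel loaded at inference time
```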
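The ROS wrapper follows the usual subscriber/publisher pattern: listen on the camera topic, run the detector on each frame, and publish the detections for the planning nodes. The topic names, the vision_msgs message type, and the run_detector placeholder below are illustrative assumptions rather than our final interface.

```python
#!/usr/bin/env python
# Sketch of the detector wrapped as a ROS node: subscribe to camera frames,
# run inference, and publish detections for the planning nodes.
# Topic names and run_detector() are hypothetical placeholders.
import rospy
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray


def run_detector(image_msg):
    # Placeholder for the TensorRT-optimised detector; returns no detections here.
    return Detection2DArray()


class ObjectDetectionNode(object):
    def __init__(self):
        self.pub = rospy.Publisher("/object_detection/detections",
                                   Detection2DArray, queue_size=1)
        self.sub = rospy.Subscriber("/camera/image_raw", Image,
                                    self.image_callback, queue_size=1)

    def image_callback(self, msg):
        detections = run_detector(msg)
        detections.header = msg.header   # keep the camera timestamp for downstream nodes
        self.pub.publish(detections)


if __name__ == "__main__":
    rospy.init_node("object_detection")
    ObjectDetectionNode()
    rospy.spin()
```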
In the next few days, we will begin real-time testing of the car's autonomous navigation capabilities, primarily its ability to navigate around obstacles, and we will also begin testing the following car's ability to track the lead car. The following car will rely on a map that the lead car sends in order to trace the lead car's path through the course.
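A minimal sketch of how the lead car could broadcast the path it has traced as a ROS message for the following car is shown below; the node name, topic, frame, and 5 Hz rate are assumptions for illustration.

```python
#!/usr/bin/env python
# Sketch of the lead car broadcasting its traced path so the following car
# can replay it; topic, frame, and rate are assumptions.
import rospy
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Path


def broadcast_path():
    rospy.init_node("lead_car_path_broadcaster")
    pub = rospy.Publisher("/lead_car/path", Path, queue_size=1)
    path = Path()
    path.header.frame_id = "map"
    rate = rospy.Rate(5)  # rebroadcast the growing path at 5 Hz (assumed)
    while not rospy.is_shutdown():
        pose = PoseStamped()
        pose.header.stamp = rospy.Time.now()
        pose.header.frame_id = "map"
        # pose.pose would be filled in from the lead car's localisation estimate
        path.poses.append(pose)
        path.header.stamp = pose.header.stamp
        pub.publish(path)
        rate.sleep()


if __name__ == "__main__":
    broadcast_path()
```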