Gaurav’s Status Report for 11/16/2024

This week, I got object detection working on the Raspberry Pi 5, along with transmission of the detection coordinates to an Arduino. I also worked with Bhavik to get object tracking running and to establish radio transmission between an Arduino attached to the Raspberry Pi 5 and an Arduino attached to a computer, simulating our ground station environment. Much of my time this week went into modifying the run script to transmit only human detections, and of those, only the human with the highest confidence. In the picture below, we temporarily switched the target class to a bottle just for that test, to verify that the drone controls respond to the target's position. Behind the bottle detection, you can see that the Raspberry Pi is also deciding where the drone should move based on the bottle's position and transmitting that decision to the Arduino. We know the link is working because the output shown in the image is the Arduino echoing back what it receives from the Raspberry Pi.
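To make the pipeline concrete, here is a minimal sketch of what the RPi-side sender could look like. The serial port, baud rate, message format, and thresholds below are illustrative assumptions, not our exact protocol:

```python
import serial

# Hypothetical USB serial link to the Arduino (port and baud are assumptions).
arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

def send_target(cx: float, cy: float, frame_w: int = 640, frame_h: int = 640) -> None:
    """Send the detected target's center and a coarse movement hint to the Arduino."""
    # Decide which way the drone should move to center the target in the frame.
    dx = "LEFT" if cx < frame_w / 3 else "RIGHT" if cx > 2 * frame_w / 3 else "CENTER"
    dy = "UP" if cy < frame_h / 3 else "DOWN" if cy > 2 * frame_h / 3 else "CENTER"
    # Newline-terminated ASCII message; the Arduino echoes back what it receives,
    # which is how we confirm the link end to end.
    arduino.write(f"{cx:.0f},{cy:.0f},{dx},{dy}\n".encode())
```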

When Bhavik and I first started working this week, we tried to compile our balloon detection model to run on the Hailo accelerator. We learned that this requires converting our ONNX file into Hailo's HEF format. Since the compiler does not run on an ARM machine, we spent a couple of days spinning up x86 AWS instances to compile the model. However, when we finally produced an HEF, the model still did not work with the example script, and we realized we would have to write our own app in TAPPAS, the framework Hailo provides to optimize inference on the accelerator. After some additional searching, we instead found a pre-trained, pre-optimized model that works for our purposes. All we had to do was modify the script to report information about people only, which was itself quite an obstacle. In the end, we achieved 30 FPS on a 640×640 image.
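The filtering logic itself is simple once the model output is parsed. Here is a rough sketch of what we wanted, assuming detections arrive as (label, confidence, bbox) tuples; the actual TAPPAS callback structures differ:

```python
PERSON_CLASS = "person"

def pick_best_person(detections):
    """From a list of (label, confidence, bbox) tuples, keep only people and
    return the single highest-confidence one, or None if no person is seen."""
    people = [d for d in detections if d[0] == PERSON_CLASS]
    return max(people, key=lambda d: d[1], default=None)
```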

I also worked with Ankit and Bhavik on tuning the PID controller in the drone cage in Scaife. Much of this work was headed by Ankit as we debugged different parts of the drone that were not working. We first tuned the proportional (P) term and then moved on to the derivative (D) term. Tuning D, however, exposed problems with the setup and the outdated hardware we were using, as well as parts shaking loose as we tested the drone. This is currently our biggest bottleneck.
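For context on what tuning P and then D means, here is a generic discrete PID update; this is purely illustrative and not the actual flight code:

```python
class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        """One control step: error is setpoint minus measurement, dt is seconds elapsed."""
        # The P term reacts to the current error; the D term damps oscillation,
        # which is why tuning D is what exposed vibration and loose-hardware issues.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```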

To verify my portion of the design, we have run a variety of tests and will run more once the system is mounted on the drone. Since the information pipeline now runs from the camera feed straight to the radio, we check that the coordinates the radio receives match where the person actually is in the frame. We also have access to the data at each stage of the communication interface (the program's stdout on the RPi, the serial monitor on the sending Arduino, and the serial monitor on the receiving Arduino) and check that the information is consistent across all three points. Going forward, we will test with the object in various other positions to make sure detection holds in those scenarios too, and we will test with multiple objects and with no objects in the frame to check that the arbiter behaves properly.
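Here is a hypothetical unit test for those arbiter cases, reusing the person-picking helper sketched earlier; the detection values are made up for illustration:

```python
def pick_best_person(detections):
    """Arbiter from the sketch above: keep only people, return the most confident."""
    people = [d for d in detections if d[0] == "person"]
    return max(people, key=lambda d: d[1], default=None)

def test_arbiter():
    # Multiple people in frame: the arbiter should pick the highest-confidence one.
    dets = [("person", 0.62, (10, 10, 50, 80)),
            ("person", 0.91, (300, 200, 60, 90)),
            ("bottle", 0.99, (100, 100, 20, 40))]
    assert pick_best_person(dets)[1] == 0.91
    # Non-person objects only: the arbiter should report nothing.
    assert pick_best_person([("bottle", 0.99, (0, 0, 10, 10))]) is None
    # Empty frame: also nothing.
    assert pick_best_person([]) is None

test_arbiter()
print("arbiter tests passed")
```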

[Image: object detection model detecting a bottle]
[Image: the same model detecting a human]
