Team Status Report for 12/07/2024

The most significant risks that could jeopardize the success of the project are the PID controller not working and various filtering issues. Currently, we are running into issues with the PID controller being unable to get the drone to hover exactly level; instead, it drifts off to the side. We are managing these risks by working on the drone itself as diligently as possible to showcase the controller working to some degree. We have enough to showcase at the final show, but our contingency plan is to make the drone manually controlled.

The major change that we have made to this system is using a manually controlled drone to showcase the PID controller. This change was necessary because we cannot immediately get the lawnmower operation working, even though we have the algorithm ready. Our updated schedule has everything done by Wednesday.
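
For context, the lawnmower operation is a simple back-and-forth (boustrophedon) sweep over the search area. The sketch below is only a minimal illustration of the idea; the area bounds, row spacing, and (x, y) waypoint format are assumptions for the example, not our actual path-planning code.

```python
# Minimal sketch of lawnmower (boustrophedon) waypoint generation.
# Bounds, spacing, and the (x, y) waypoint format are illustrative
# assumptions, not our actual implementation.

def lawnmower_waypoints(width, height, spacing):
    """Generate a back-and-forth sweep over a width x height area."""
    waypoints = []
    y, left_to_right = 0.0, True
    while y <= height:
        row = [(0.0, y), (width, y)]
        waypoints += row if left_to_right else row[::-1]
        left_to_right = not left_to_right
        y += spacing  # step to the next sweep row
    return waypoints

print(lawnmower_waypoints(10.0, 4.0, 2.0))
```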

Unit Tests for CV (an example sketch follows the list):
* YOLOv8n is able to get the camera feed
* YOLOv8n is able to detect any object (can be wrong)
* YOLOv8n is able to detect any object correctly
* YOLOv8n is able to detect the object we want to detect
* YOLOv8n is able to detect the object's image coordinates correctly
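
As an example of how a couple of these checks could be mechanized, here is a pytest-style sketch using the stock ultralytics YOLOv8n API; the test image path is a placeholder assumption, and our actual harness differs in the details.

```python
# Sketch of two of the CV unit tests above, using the ultralytics
# YOLOv8n API. The test image path is a placeholder assumption.
from ultralytics import YOLO

MODEL = YOLO("yolov8n.pt")
PERSON_CLASS_ID = 0  # "person" in the COCO class list

def test_detects_any_object():
    results = MODEL("test_images/person.jpg")  # placeholder image
    assert len(results[0].boxes) > 0, "expected at least one detection"

def test_detects_person_with_coordinates():
    results = MODEL("test_images/person.jpg")
    people = [b for b in results[0].boxes if int(b.cls) == PERSON_CLASS_ID]
    assert people, "expected a person detection"
    x1, y1, x2, y2 = people[0].xyxy[0].tolist()
    assert x1 < x2 and y1 < y2  # sanity-check the bbox coordinates
```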

Unit Tests for Tracking (a sketch of the underlying math follows the list):
* Tracking is able to receive image input
* Tracking is able to receive the image coordinates for the bounding box
* Tracking is able to parse the inputs into ints
* Tracking is able to output the distance vectors from the detected object
* Tracking is able to compute the direction
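
The math these tests exercise is small: parse the bounding box, measure its center's offset from the frame center, and map that offset to a direction. The sketch below illustrates it; the frame size, deadband, and string direction labels are assumptions for the example.

```python
# Sketch of the tracking math the tests above exercise. Frame size,
# deadband, and direction labels are illustrative assumptions.

FRAME_W, FRAME_H = 640, 640

def distance_vector(bbox):
    """bbox = (x1, y1, x2, y2) in pixels; offset from frame center."""
    x1, y1, x2, y2 = (int(v) for v in bbox)   # parse inputs into ints
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2   # bbox center
    return cx - FRAME_W // 2, cy - FRAME_H // 2

def direction(dx, dy, deadband=20):
    """Map the distance vector to a coarse movement direction."""
    horiz = "" if abs(dx) <= deadband else ("right" if dx > 0 else "left")
    vert = "" if abs(dy) <= deadband else ("down" if dy > 0 else "up")
    return (vert + " " + horiz).strip() or "hold"

dx, dy = distance_vector((300, 200, 420, 380))
print(dx, dy, direction(dx, dy))  # -> 40 -30 up right
```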

Unit Tests for Radio (a round-trip sketch follows the list):
* Radio is able to receive bytes
* Radio is able to send bytes
* Radio range testing
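
Since the RPi talks to the radio through an Arduino over serial, the send/receive byte tests reduce to a round trip. Here is a pyserial sketch of that check; the port name, baud rate, and the assumption that the Arduino echoes back what it receives are all illustrative.

```python
# Sketch of the radio send/receive byte test, assuming the Arduino
# driving the radio echoes received bytes back. The port name, baud
# rate, and echo behavior are illustrative assumptions.
import serial

def test_radio_roundtrip():
    with serial.Serial("/dev/ttyACM0", 115200, timeout=2) as link:
        payload = b"123,456\n"    # example coordinate packet
        link.write(payload)       # radio is able to send bytes
        echoed = link.readline()  # radio is able to receive bytes
        assert echoed == payload, f"got {echoed!r}, expected {payload!r}"
```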

Unit Tests for Drone IMU (a toy filter sketch follows the list):
* Drone IMU calibration
* Drone IMU Kalman filter outputs correct values
* Drone IMU Kalman filter with IMU readings causes no drift
* Drone IMU maintains correct readings after vigorous movements
* Drone controls operating at 200+ Hz
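
The "no drift" check conceptually amounts to feeding the filter noisy readings around a known truth and asserting that the estimate settles near that truth. The toy one-dimensional sketch below illustrates the idea; the scalar filter and all noise parameters are simplifications for illustration, not our actual IMU filter.

```python
# Toy 1-D Kalman filter illustrating the "no drift" check above: with
# a constant true angle and noisy readings, the estimate should settle
# near the truth. All parameters here are illustrative.
import random

def kalman_1d(readings, q=1e-4, r=0.1):
    x, p = 0.0, 1.0                  # state estimate and its variance
    for z in readings:
        p += q                       # predict: add process noise
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # correct toward the measurement
        p *= 1 - k
    return x

random.seed(0)
true_angle = 0.0                     # drone sitting level
readings = [true_angle + random.gauss(0, 0.2) for _ in range(500)]
estimate = kalman_1d(readings)
assert abs(estimate - true_angle) < 0.15, "filter is drifting"
print(f"estimate after 500 samples: {estimate:.4f}")
```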

Unit Tests for Drone PID (a generic sketch follows the list):
* Drone PID controller is able to get the set-point error
* Drone PID controller is able to utilize Kalman filter data to calculate D
* Drone PID controller is able to calculate the error required for the I value
* Drone PID controller shows the drone trying to compensate with only the P value
* Drone PID controller shows the drone's leveling gets dampened with the addition of the D value
* Drone PID controller shows the I value is able to take care of steady-state error
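
The progression in these tests (P alone, then damping with D, then I cleaning up steady-state error) follows the standard PID form. The sketch below is a generic illustration with made-up gains; per the tests above, our actual controller feeds the D term from the Kalman-filtered rate rather than by differencing the error, and the loop runs at 200+ Hz (dt of about 0.005 s).

```python
# Generic PID sketch mirroring the test progression above. Gains are
# made up; in the real controller the D term comes from the Kalman-
# filtered rate rather than differencing the error as done here.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement           # set-point error (P)
        self.integral += error * dt              # accumulated error (I)
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)  # rate (D)
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=1.2, ki=0.05, kd=0.3)               # illustrative gains
for angle in (5.0, 4.0, 2.5, 1.0, 0.2):          # drone leveling out
    print(pid.update(setpoint=0.0, measurement=angle, dt=0.005))
```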

Raspberry Pi enclosure
Drone midair
Drone Hover Test

Gaurav’s Status Report for 12/07/2024

This week, I worked on creating an enclosure for the Raspberry Pi to place it on the drone. I also helped Ankit and Bhavik tune the PID controller for the drone to get it hovering for our video. My part of the project is more or less done now, so I would say that my progress is on schedule. The project is due by next week, so the goal is completion.

Team Status Report for 11/30/2024

The most significant risk that could jeopardize the success of the project is getting the PID controller working. We recently had a major breakthrough in the calibration process, and we have two axes currently working with our setup. We hope to have a hovering drone soon. Although this is behind schedule, we have the path-planning code and the tracking code ready to use as soon as the drone is hovering. We are managing the risks by having absolutely everything else ready.

No major changes were made to the design of the system.


Gaurav’s Status Report for 11/30/2024

This week, I worked on creating the enclosure for the Raspberry Pi 5 so that we can drill it onto the drone to test the entire system. So far, I have found an STL file for a case online, and I will modify it to add a way to drill it onto the drone.

Our progress is slightly behind because the PID tuning took longer than expected. However, the rest of the parts are more or less ready to go as soon as we finish that. My personal progress is on schedule.

By next week, I hope to have the Raspberry Pi mounted on the drone and all the CV working.


As I was implementing our project, I found it necessary to learn how a PID controller works to better help my teammates get their parts working. I also had to learn how an AI model works to better understand the documentation behind the Hailo app and to figure out whether we needed to design our own Hailo application for our data set. To learn these tools, I watched a lot of videos and took light notes, which helped me stay concentrated on the videos themselves as they explained how the drone controller works. For the Hailo application, I simply sat and read the documentation.

Team Status Report for 11/16/2024

Currently, the biggest risk that could jeopardize the success of the project is the PID controller. All the AI inference and electrical hardware is able to accomplish the desired function; all that remains is to mount it on the drone. We are managing these risks by focusing solely on the controller while also investigating backups that may be easier. Gaurav, Bhavik, and Ankit are all going to work only on the PID controller from now on to try to get it ready in time for the final demo. Our contingency plan is to use the Pixhawk flight controller, which comes with all the necessary hardware on it. This should theoretically be easier to calibrate to get the drone to fly stably.

No changes were made to the hardware design as of now. If we ever have to use our contingency plan, we will have to change the requirements to incorporate the Pixhawk. For the software design, we did make a change to the model we were using. When trying to compile the balloon detection model for the Hailo-8L accelerator, we had trouble parsing through all the documentation. However, there was a pre-trained, pre-optimized model already available in the provided model zoo, which we decided to use instead. We are modifying the script used to run inference so that it only detects humans and only transmits the human with the highest inference confidence.
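
That modification boils down to filtering each frame's detections to the "person" class and keeping the single highest-confidence one. The sketch below illustrates that arbitration; the (class_id, confidence, bbox) detection format is an assumption for the example, not the script's actual data structures.

```python
# Sketch of the person-only, highest-confidence arbitration described
# above. The (class_id, confidence, bbox) tuple format is an
# illustrative assumption, not the script's actual data structures.

PERSON_CLASS_ID = 0  # "person" in the COCO class list

def pick_person(detections):
    """detections: list of (class_id, confidence, (x1, y1, x2, y2))."""
    people = [d for d in detections if d[0] == PERSON_CLASS_ID]
    if not people:
        return None                         # nothing to transmit
    return max(people, key=lambda d: d[1])  # highest confidence wins

frame = [(0, 0.62, (10, 20, 50, 90)),     # low-confidence person
         (39, 0.88, (5, 5, 20, 40)),      # a bottle; filtered out
         (0, 0.91, (100, 40, 160, 200))]  # highest-confidence person
print(pick_person(frame))  # -> (0, 0.91, (100, 40, 160, 200))
```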


same model detecting a human
object detection model detecting bottle
testing the drone motors

Gaurav’s Status Report for 11/16/2024

This week, I was able to get object detection working on the Raspberry Pi 5, as well as transmission of the coordinates to an Arduino. I also worked with Bhavik to get object tracking working and to get radio transmission between an Arduino attached to the Raspberry Pi 5 and an Arduino attached to a computer, to simulate our ground-station environment. Much of my time this week was spent modifying the run script to only transmit detections of humans, specifically the human with the highest confidence. In the picture below, we switched it to a bottle just for that test, to make sure that the drone controls were working with the position of the bottle. As you can see behind the detection of the bottle, the Raspberry Pi is also deciding where the drone should move based on the position of the bottle and is transmitting that to the Arduino. We know this is working because the output seen in the image is actually the Arduino transmitting what it is receiving from the Raspberry Pi.

When Bhavik and I first started working this week, we were trying to compile the balloon detection model we had to work with the Hailo accelerator. We noticed that, in order to compile the model for the Hailo accelerator, we had to convert our ONNX file into the HEF format. We spent a couple of days trying to get that to work on an ARM machine by spawning AWS instances of x86 machines to compile the model. However, when we finally compiled it, we noticed that the model wasn't working with the script Hailo provided. We eventually realized that we would have to write our own TAPPAS app, TAPPAS being a framework provided by Hailo to optimize inference for the AI accelerator. We then did some additional searching and found a pre-trained, pre-optimized model that would work for our purposes. All we had to do was modify the script to get information only about people, which in and of itself was quite an obstacle. However, we were able to achieve 30 FPS on a 640×640 image.

I also worked with Ankit and Bhavik on tuning the PID controller in the drone cage in Scaife. Much of this work was headed by Ankit as we debugged different parts of the drone that were not working. First we tuned the P portion, and then we moved on to D. However, D exposed problems with the setup and the outdated hardware we were using, as well as parts shaking loose as we tested the drone. This is currently the biggest bottleneck.

To verify my portion of the design, we have run a variety of tests and will run more once everything is mounted on the drone. For now, since we have set up the information pipeline from the camera feed straight to the radio, we are checking that the coordinates the radio receives are representative of where the person is in the frame. We also have access to the information at each stage of the communication interface (stdout of the program on the RPi, the serial monitor on the sending Arduino, and the serial monitor on the receiving Arduino) and are checking that the information is consistent across all those points. To run more tests, we will place the object in various other positions and make sure that it is detected in those scenarios too. We will also test with multiple objects and with no objects in the frame, checking that the arbiter works properly as well.
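
One way to mechanize that cross-stage check is to log the coordinate packet at each point in the pipeline and diff the logs. The sketch below assumes simple one-packet-per-line log files with hypothetical names; our monitors do not necessarily record output in exactly this form.

```python
# Sketch of the cross-stage consistency check: the same coordinate
# packet should appear at the RPi stdout, the sending Arduino, and
# the receiving Arduino. The one-packet-per-line log files and their
# names are hypothetical.

def read_packets(path):
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

stages = ["rpi_stdout.log", "arduino_tx.log", "arduino_rx.log"]
logs = [read_packets(p) for p in stages]

for i, packets in enumerate(zip(*logs)):
    assert len(set(packets)) == 1, f"mismatch at packet {i}: {packets}"
print("coordinates consistent across all stages")
```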

object detection model detecting bottle
same model detecting a human

Gaurav’s Status Report for 11/09/2024

This week, I was not able to accomplish as much as I would have liked because we ran into many issues trying to recompile the model to work with the Hailo accelerator. I was not able to recompile it on my laptop, so we had to try to compile it on Ankit's laptop, but that came with its own set of errors. Currently, we are stuck on the recompile step, but once that is done everything else should work.

Our progress is on schedule. We have talked to the professors about what we will have for our interim demo, and we should definitely have that ready in time. Our drone is very much on track and having the vision component working soon is good for our overall progress.

By next week, I hope to have the model recompiled for the Hailo accelerator and a chassis designed to put the Raspberry Pi on the drone. I also hope to have an idea of how to convert the drone battery power into a current/voltage that the Raspberry Pi can accept.

Team Status Report for 11/09/2024

The most significant risk that could impact the success of the project is not being able to tune the PID in time for the interim demo. Currently, we have the drone motors running and partially tuned. However, we still have to test the drone in the air and on the harness.

No major changes were made to the existing block diagram. We are all working on our individual parts and collaborating when necessary.

There is no updated schedule, everything is on track to finish in time.

Team Status Report for 11/02/2024

The most significant risks that could jeopardize the success of the project are getting the integration to work and the battery pack. These risks are being managed by finishing the other components early. We are about to start the integration step, and we will test the efficacy of the battery pack once we get the drone motors installed.

No additional changes were made to the system design.

Current drone status wiring

Video of drone turning on


Gaurav’s Status Report for 11/02/2024

This week, I ordered all the parts needed to start the Raspberry Pi transition, as well as flashed Linux onto the Raspberry Pi. Much of my time was spent reading about how to set up the AI model on the Raspberry Pi and about any peripheral hardware we would need to access the GPIO on the Raspberry Pi to communicate with the Arduino.

Our progress is on schedule. Bhavik and Ankit have done a lot of work getting the drone running, and setting up the model on the Raspberry Pi should be extremely easy. Our next major step is integration and testing.

By next week, I hope to have the model running and to figure out how to mount the RPi onto the drone.

Linux running on RPi