Ankit’s Status Report for 11/30/2024

Thanksgiving break was quite productive for me. I brought home all of our hardware so I could keep working on the project, and I made quite a bit of progress. I made two major changes to the software pipeline: 1) the complementary filter implementation we were using originally suffers from drift in the presence of high-frequency gyroscope noise (which is unavoidable on a drone), so I properly implemented a Kalman-filter-based approach that results in next to no drift in the IMU estimate, and 2) I realized that our original PID formulation, which minimized the error in the drone's angular position, is very unresponsive and not how it is done on professional drones. Those drones instead implement a rate controller, which minimizes the error in angular rate, so that is what I changed my PID implementation to. With this, our drone is now very stable in both roll and pitch, and we are close to attempting a hover test once we integrate the radio comms to allow for autonomous takeoff and landing.
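
The switch from an angle controller to a rate controller is conceptually small: the PID now acts on the error between a commanded angular rate and the gyro's measured rate, rather than on the angle estimate. Below is a minimal single-axis sketch of that idea in Python; the gain values, loop period, and integral clamp are illustrative placeholders, not our actual firmware (which runs on the Arduino).

    class RatePID:
        """Single-axis rate controller: drives angular-rate error to zero."""
        def __init__(self, kp, ki, kd, i_limit=100.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.i_limit = i_limit            # clamp the integral to avoid wind-up
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, rate_setpoint, gyro_rate, dt):
            error = rate_setpoint - gyro_rate                      # deg/s
            self.integral = max(-self.i_limit,
                                min(self.i_limit, self.integral + error * dt))
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Example: hold zero roll rate on a 250 Hz loop (placeholder gains).
    # roll_pid = RatePID(kp=1.2, ki=0.05, kd=0.02)
    # correction = roll_pid.update(0.0, measured_roll_rate_dps, dt=0.004)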

We still have about two weeks to demo day and while this is definitely crunch time, I think the immense progress made over the break will help us finish this project.

Bhavik's Status Report for 11/30/2024

This week was mostly spent planning and ordering the remaining components for the project (the GPS module, power distribution, and the Raspberry Pi mount). I also cleaned up the code from the demo and prepared it for the final demo. I have also spent some time making sure that the radio communication works bi-directionally. For the next phase of testing we need to send signals from the base station to the drone, so this step has to work. Next week we will continue to integrate our system and begin testing the drone in flight.

Additionally, a majority of the time was spent creating the final presentation slides. I created a full outline of the presentation, and I will continue to work on the slides by rerunning some demo scripts to capture more media. Since I will be the one presenting this time, I have also begun practicing and gathering the required information.

Throughout this project, there were a lot of new tools and skills I had to learn in order to accomplish my tasks. Working with many different hardware systems, I had to learn a lot about efficient methods of communicating between systems, the various communication protocols, and how to interface with third-party tools. I also had to learn a lot about controls (PID specifically). Some of my learning strategies were watching tutorials on related topics, looking at other projects' documentation, and talking with friends who were knowledgeable about the specific topic. I also learned a lot by trial and error: using the knowledge I gained from reading documentation and watching videos, I would implement basic functionality, learn from the mistakes made there, and retry. This has helped me take theoretical knowledge and apply it quickly.

Bhavik's Status Report for 11/16/2024

This week I focused my efforts on finalizing the CV pipeline and helping with drone controls.

At the beginning of the week I worked closely with Gaurav to get a model working on the Raspberry Pi. We got the model working, but the frame rate was really low because inference wasn't being accelerated. We therefore spent time looking through the documentation to figure out how to compile the model for the accelerator. We compiled the model multiple times in various configurations but ran into various PC-related issues (e.g., incompatible architecture or insufficient hardware resources such as RAM). We resolved this by configuring an AWS EC2 instance and setting up additional memory. Once we did this, we were able to successfully compile the model after a lot of effort.

Once we put it on the Raspberry Pi, however, the script wasn't able to use the model or detect anything. We believe this is a compiler issue with the Hailo software, as it is very new and doesn't fully support custom models very easily. We tested this theory by compiling a base YOLOv8n model and uploading it to the Raspberry Pi, and that seems to have worked. We have therefore decided to move forward with objects that the base YOLO model was already trained on. Another reason for the change is that we realized the drone propellers produce a lot of downwash, which would easily push a balloon around and cause the detection and tracking algorithms to fail.
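
For reference, this is roughly how the base YOLOv8n output can be sanity-checked off-board with the ultralytics package; this is not the Hailo pipeline that actually runs on the Pi, and the image filename is just a placeholder.

    from ultralytics import YOLO

    # Base COCO-trained model: "person" is class 0 and "bottle" is class 39.
    model = YOLO("yolov8n.pt")

    results = model("test_frame.jpg")          # placeholder image
    for box in results[0].boxes:
        name = model.names[int(box.cls)]
        if name in ("person", "bottle"):
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            print(f"{name}: conf={float(box.conf):.2f} "
                  f"bbox=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")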

On the drone PID controls side, I worked closely with Ankit to set up a drone testing rig. We tried multiple rigs and finally decided to use a tripod attachment with the drone mounted on top. This has worked fairly well, and we have been tuning with it. While tuning, we found a lot of hardware inconsistencies, which seem to be the major risk moving forward: the PID seems to work well one day, and the next day we seem to have gone backwards. We have spent multiple days trying to limit these hardware issues to ensure that the drone PID controls work well. There is still a lot of testing to do to make sure the drone is reliable.

In regards to testing, we will gather the specific metrics that relate to our use-case requirements. For example, for the computer vision model we will measure whether it accurately detects the object across frames; this gives us an accuracy number we can compare against the design requirements. For tracking, we will check whether the camera pipeline provides correct directions for which way the drone should move. If the algorithm keeps the object centered in the frame, we know we have met the design requirement.
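
A small sketch of how those two checks could be computed from logged per-frame results; the record format, frame size, and the 10% centering tolerance are assumptions for illustration, not our finalized test plan.

    def evaluate(frames, frame_w=640, frame_h=640, center_tol=0.10):
        """frames: list of dicts like
        {"object_present": bool, "detected": bool, "center": (x, y) or None}."""
        labeled = [f for f in frames if f["object_present"]]
        detected = [f for f in labeled if f["detected"]]
        detection_rate = len(detected) / max(1, len(labeled))

        centered = 0
        for f in detected:
            x, y = f["center"]
            if (abs(x - frame_w / 2) <= center_tol * frame_w and
                    abs(y - frame_h / 2) <= center_tol * frame_h):
                centered += 1
        centering_rate = centered / max(1, len(detected))
        return detection_rate, centering_rate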

Team Status Report for 11/16/2024

Currently, the biggest risk that could jeopardize the success of the project is the PID controller. The AI inference and electrical hardware are able to accomplish the desired functions; all that remains is to mount them on the drone. We are managing this risk by focusing solely on the controller while also investigating backups that may be easier. Gaurav, Bhavik, and Ankit are all going to work only on the PID controller from now on to get it ready in time for the final demo. Our contingency plan is to use the Pixhawk flight controller, which comes with all the necessary hardware on it and should theoretically be easier to calibrate to get the drone to fly stably.

No changes were made to the hardware design as of now. If we ever have to fall back on our contingency plan, we will have to change the requirements to incorporate the Pixhawk. For the software design, we did change the model we are using. When trying to compile the balloon detection model for the Hailo-8L accelerator, we had trouble parsing through all the documentation. However, there was a pre-trained, pre-optimized model already available in the provided model zoo, which we decided to use instead. We are modifying the script used to run the inference to only detect humans and to transmit only the human with the highest inference confidence.


[Image: same model detecting a human]
[Image: object detection model detecting a bottle]
[Image: testing the drone motors]

Gaurav’s Status Report for 11/16/2024

This week, I was able to get object detection working on the Raspberry Pi 5 as well as transmission of the coordinates to an Arduino. I also worked with Bhavik to get object tracking working and to get radio transmission between an Arduino attached to the Raspberry Pi 5 and an Arduino attached to a computer, to simulate our ground-station environment. Much of my time this week was spent modifying the run script to transmit only detections of humans, specifically the human with the highest confidence. In the picture below, we switched the target to a bottle just for that test, to make sure the drone controls were working with the position of the bottle. As you can see behind the detection of the bottle, the Raspberry Pi is also deciding where the drone should move based on the bottle's position and is transmitting that to the Arduino. We know this is working because the output seen in the image is the Arduino echoing what it receives from the Raspberry Pi.
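
Conceptually, the Pi-side logic boils down to picking the highest-confidence detection of the target class, turning its position in the frame into a movement command, and writing that command to the Arduino over serial. A rough sketch of that step follows; the serial port, baud rate, message format, and dead-band are illustrative assumptions, not the exact run script.

    import serial

    ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)   # placeholder port/baud
    FRAME_W, FRAME_H, DEADBAND = 640, 640, 60                # pixels

    def pick_target(detections, target="person"):
        """detections: list of (class_name, confidence, (x1, y1, x2, y2))."""
        hits = [d for d in detections if d[0] == target]
        return max(hits, key=lambda d: d[1]) if hits else None

    def direction_command(det):
        _, _, (x1, y1, x2, y2) = det
        dx = (x1 + x2) / 2 - FRAME_W / 2
        dy = (y1 + y2) / 2 - FRAME_H / 2
        horiz = "C" if abs(dx) < DEADBAND else ("R" if dx > 0 else "L")
        vert = "C" if abs(dy) < DEADBAND else ("D" if dy > 0 else "U")
        return f"{horiz}{vert}\n"

    def transmit(detections):
        best = pick_target(detections)
        if best is not None:
            ser.write(direction_command(best).encode())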

When Bhavik and I first started working this week, we were trying to compile our balloon detection model to work with the Hailo accelerator. We noticed that in order to do so, we had to compile the ONNX file we had into the HEF format. We spent a couple of days trying to get that to work, first on an ARM machine and then by spawning x86 AWS instances to compile the model. However, when we finally compiled it, the model wasn't working with the script Hailo provides. We eventually realized that we would have to write our own TAPPAS app (TAPPAS is a framework provided by Hailo to optimize inference on the accelerator). We then did some additional searching and found a pre-trained, pre-optimized model that would work for our purposes. All we had to do was modify the script to report only information about people, which in and of itself was quite an obstacle. However, we were able to achieve 30 FPS on 640×640 images.

I also worked with Ankit and Bhavik on tuning the PID controller in the drone cage in Scaife. Much of this work was led by Ankit as we debugged different parts of the drone that were not working. First we tuned the P term, and then we moved on to D. However, the D term exposed problems with the setup and the outdated hardware we were using, as well as parts shaking loose as we tested the drone. This is the biggest bottleneck currently.

In order to verify my portion of the design, we have run a variety of tests and will run more once everything is mounted on the drone. For now, since we have set up the information pipeline from the camera feed straight to the radio, we are checking that the coordinates the radio receives are representative of where the person is in the frame. We also have access to the information at each stage of the communication interface (stdout of the program on the RPi, the serial monitor on the sending Arduino, and the serial monitor on the receiving Arduino) and are checking that the information is consistent across all of those points. To run more tests, we will place the object in various other positions and make sure it is detected in those scenarios too. We will also test with multiple objects and with no objects in the frame, and check that the arbiter works properly as well.

[Image: object detection model detecting a bottle]
[Image: the same model detecting a human]

Bhavik's Status Report for 11/09/2024

This week, I spent a lot of time getting the drone ready for PID tuning and setting up the various components required to relay coordinates back to the base. I began the week by attaching all the motors to the drone, testing them to ensure they all worked, and setting up the polarity so the drone could properly take off.

However, as discovered last week, our propellers don't fit properly onto the motors. We spent some time talking with TechSpark experts to figure out a method to drill out the propellers, but after many trials and various techniques we couldn't find a way that worked. We therefore decided to change our motors instead so that the propellers are compatible. Once we changed the motors, we again had to test them all and set their polarity correctly. We found that the required voltage for these motors is lower than that of the ones we were previously using, so we realized we needed to order a different battery. To continue testing while the batteries were being ordered, we set software constraints to ensure the motors operated safely.
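
One simple way to implement such a constraint is to cap the commanded throttle so the motors are never driven near full power from the old battery. A sketch of that idea; the cap value and the 1000–2000 µs ESC pulse range are illustrative, not our exact numbers.

    PWM_MIN, PWM_MAX = 1000, 2000     # typical ESC pulse widths in microseconds
    SAFE_THROTTLE_CAP = 0.40          # placeholder cap while on the old battery

    def throttle_to_pulse(throttle):
        """Map a 0.0-1.0 throttle command to an ESC pulse width, clamped to the cap."""
        throttle = max(0.0, min(SAFE_THROTTLE_CAP, throttle))
        return int(PWM_MIN + throttle * (PWM_MAX - PWM_MIN))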

On the other hand, I spent some time setting up the radio link for the drone. This required installing the correct firmware on the radios, setting up their configurations, and interfacing with them to read data. Once these aspects were figured out, we were able to send data between the radios. The next step was to set things up such that one radio is connected to the Arduino board and one to my laptop (the base station). I wrote code so that the Arduino can talk to the radio and read its values. Once this was set up, I tested the link by sending messages from each side to the other, and the radios worked as expected.
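
On the base-station side, the bidirectional check is essentially a write followed by a read on the serial port that the radio (or the Arduino bridging it) shows up on. A minimal pyserial sketch of that loop; the port name, baud rate, and message contents are placeholders.

    import time
    import serial

    radio = serial.Serial("/dev/ttyUSB0", 57600, timeout=1)   # placeholder port/baud

    for i in range(10):
        radio.write(f"PING {i}\n".encode())                        # base station -> drone
        reply = radio.readline().decode(errors="ignore").strip()   # drone -> base station
        print(f"sent PING {i}, received: {reply!r}")
        time.sleep(1.0)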

The next steps are interfacing with the GPS so its values can be transmitted over the radio, and connecting the computer vision output to the Arduino as well.


Gaurav’s Status Report for 11/09/2024

This week, I was not able to accomplish as much as I would have liked because we ran into many issues trying to recompile the model to work with the Hailo accelerator. I was not able to recompile it on my laptop, so we tried to compile it on Ankit's laptop, but that came with its own set of errors. Currently we are stuck on the recompile step, but once that is done everything else should work.

Our progress is on schedule. We have talked to the professors about what we will have for our interim demo, and we should definitely have that ready in time. Our drone is very much on track and having the vision component working soon is good for our overall progress.

By next week, I hope to have the model recompiled for the Hailo accelerator and a chassis designed to mount the Raspberry Pi on the drone. I also hope to have an idea of how to convert the drone battery power into a current and voltage that the Raspberry Pi can accept.

Ankit’s Status Report for 11/09/2024

This past week was very productive. We started off facing an issue where one of the motors did not seem to be spinning. After some diagnosis we realized that this motor was probably dead from an earlier test in which it had smoked up, so we had to replace it. Then, I changed the formulation of our state estimation: Kalman filters require a prediction step that is very hard to encode for quadcopter dynamics, so I switched from the Kalman filter to a complementary filter, a measurement-only approach that combines gyro and accelerometer readings through what is essentially a weighted sum. After some quick tuning, this worked very well. Finally, I added a safety mechanism: if the throttle stick is set to zero, the motors stop spinning regardless of the PID outputs. This is necessary to ensure our safety during PID testing.
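
For reference, the complementary filter really is just a couple of lines per axis: integrate the gyro for the short term and pull toward the accelerometer's angle for the long term, blended by a single gain. A one-axis sketch follows; the 0.98 blend factor is a typical textbook value, not necessarily the gain we tuned.

    import math

    def accel_roll_deg(ay, az):
        """Roll angle (degrees) implied by the accelerometer alone."""
        return math.degrees(math.atan2(ay, az))

    def complementary_update(angle_deg, gyro_rate_dps, ay, az, dt, alpha=0.98):
        """Blend short-term gyro integration with the long-term accelerometer angle."""
        return alpha * (angle_deg + gyro_rate_dps * dt) + (1 - alpha) * accel_roll_deg(ay, az)

    # e.g. roll = complementary_update(roll, gyro_x_dps, ay, az, dt=0.004)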

This week will finally be all PID tuning. We will set up a mechanism by which the drone hangs from a string, fully unsupported, turn on the drone, and hopefully see it balance as we tune the PID gains. I am a little worried about how little time we have to tune both the pitch and roll gains, considering our interim demo is next week, but I do think it is possible.

I believe I am a bit behind schedule, as I was hoping to get to PID tuning last week but couldn't because of the motor issue and the reformulation of the complementary filter. That being said, I have set aside this entire week to work on PID tuning for the drone, so I think if I put my head down I can make it work.

Team Status Report for 11/09/2024

The most significant risk that could impact the success of the project is not being able to tune the PID in time for the interim demo. Currently we have the drone motors running and the PID partially tuned, but we still have to test the drone on the harness and in the air.

No major changes were made to the existing block diagram. We are all working on our individual parts and collaborating when necessary.

There is no updated schedule, everything is on track to finish in time.

Ankit’s Status Report for 11/02/2024

This past week was quite productive. I spent the week building out the electronics on the frame and getting the motors tested. I started by writing some code to control the ESCs via Arduino PWM and then created a small test setup to verify that the motor speed could be varied. Then, I got our transmitter to talk to the Arduino through a receiver; while our drone is meant to be autonomous, this setup will be used to test our PID manually. Finally, Bhavik and I soldered the ESCs to the power distribution board built into the frame and tested all four motors together. The entire setup works, and we now have a drone with motors that spin.
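
The motor test itself just needs a way to sweep the throttle up and down and confirm the ESC output follows. Below is one hypothetical way to drive that from a laptop, with the Arduino converting a "T<percent>" message into the corresponding 1000–2000 µs pulse; the port, baud rate, and message format are assumptions, not our exact test code.

    import time
    import serial

    ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)   # placeholder port/baud

    def set_throttle(percent):
        """Ask the Arduino to output the matching ESC pulse (1000-2000 us)."""
        percent = max(0, min(100, percent))
        ser.write(f"T{percent}\n".encode())

    # Ramp up to 30% and back down to verify the motor speed can be varied.
    for p in list(range(0, 31, 5)) + list(range(30, -1, -5)):
        set_throttle(p)
        time.sleep(0.5)
    set_throttle(0)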

This upcoming week, I will spend time actually tuning the PID and merging in the Kalman filter state estimation. The goal is to tune the PIDs for roll and pitch to the point where we can hold a stable hover, with throttle controlled by the transmitter. This is an ambitious goal, but we should be able to accomplish it.

After the intense progress of this week, I believe I am on track.