Bhavik's Status Report for 10/19/2024

The majority of the week went into working on the design review report. For this report, I worked on various sections of the paper, including the introduction, use case, architecture, and more. I spent quite some time with my team to understand the feedback received from the design presentation and discussed whether any changes were required to the original plan. We incorporated the necessary feedback and wrote up a comprehensive design report.

I also put some time into looking at interfacing with the XBee radios to begin communication with the drone. In order to test any of the code I wrote, I need the hardware first, so I placed orders for two radio devices (one for the drone and one for the base station – my laptop). I am on schedule but will need to put in some extra effort next week to make sure my code is working as expected. Additionally, our previous order has come in, and I was able to get a small start on building the actual drone frame. This work will continue next week with the team to finish building the frame and attaching the motors for testing any initial code.

For the next week, Gaurav and I will work closely to get the vision system working on the Kria. We will upload the model weights to the Kria and hopefully get it running. We will also build out the frame as a team so we can get started on testing if possible.

Ankit’s Status Report for 10/05/2024

This past week, I continued work on tuning the process and measurement noise for the Kalman Filter and on interfacing with the motors. Specifically, this week I looked into the PPM interface and realized that it is quite similar to a PWM implementation. I found some open source code online that generates the PPM signal and used an oscilloscope to verify that the pulses generated make sense. I wasn't able to test with our actual drone ESCs and motors because the parts have been delayed in arriving, but once those come this week I should be able to test.
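To sanity-check my understanding of the protocol, it helps to lay out one frame in time. Below is a rough Python sketch (not our Teensy firmware) of how channel values map to the pulses and gaps of a single PPM frame; the 8-channel layout, 300 µs separator pulse, 1000-2000 µs channel range, and 22.5 ms frame length are typical RC values I am assuming rather than our final configuration.

# Rough sketch of how an 8-channel PPM frame is laid out in time.
# Values are in microseconds; these are typical RC numbers, not our final config.
# (Some receivers use inverted polarity; the separator level varies by hardware.)

FRAME_LEN_US = 22500   # total frame length (~22.5 ms is common)
PULSE_US = 300         # fixed-width separator pulse
NUM_CHANNELS = 8

def build_ppm_frame(channels_us):
    """Return a list of (level, duration_us) pairs for one PPM frame.

    Each channel value (1000-2000 us) is encoded as the time from the start
    of one separator pulse to the start of the next.
    """
    assert len(channels_us) == NUM_CHANNELS
    frame = []
    used = 0
    for ch in channels_us:
        frame.append(("high", PULSE_US))           # separator pulse
        frame.append(("low", ch - PULSE_US))       # gap encodes the channel value
        used += ch
    frame.append(("high", PULSE_US))               # final separator
    frame.append(("low", FRAME_LEN_US - used - PULSE_US))  # sync gap pads out the frame
    return frame

if __name__ == "__main__":
    # e.g. throttle low, all other channels centered
    print(build_ppm_frame([1000, 1500, 1500, 1500, 1500, 1500, 1500, 1500]))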

I will admit I am a bit behind right now. I'm having some trouble understanding how to tune the process and measurement noise for our Kalman Filter. Measurement noise, I understand, can be empirically calculated through data analysis of our IMU measurements, but process noise is something I'm still struggling to figure out how to tune. Additionally, our parts not arriving this week has really set us behind. We are hoping to get parts next week.
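On the measurement-noise side, at least, the calculation is straightforward: log the IMU while it sits perfectly still and take the sample covariance of the readings. A minimal Python sketch of that calculation is below, assuming the samples are logged to a CSV; the file name and column names are placeholders.

# Sketch: estimate measurement noise covariance R from a log of the IMU sitting still.
# Assumes a CSV with columns ax, ay, az, gx, gy, gz (file name/format are placeholders).
import numpy as np

def estimate_measurement_noise(csv_path="imu_stationary_log.csv"):
    data = np.genfromtxt(csv_path, delimiter=",", names=True)
    cols = ["ax", "ay", "az", "gx", "gy", "gz"]
    samples = np.column_stack([data[c] for c in cols])
    # With the IMU stationary, deviation from the mean is (approximately) sensor noise,
    # so the sample covariance is a reasonable starting point for R.
    return np.cov(samples, rowvar=False)

if __name__ == "__main__":
    R = estimate_measurement_noise()
    print(np.diag(R))  # per-axis noise variances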

This week I will focus on hardware testing. We got confirmation from the ECE receiving desk that the parts should arrive this week, so I am hoping to actually do the testing. If we can get the Kalman Filtering working with drone motor control by the end of Fall Break, I'm confident we will be back on schedule.

Team Status Report for 10/05/2024

Currently, the most significant risk that could jeopardize the success of the project is integration. We each have our own parts of the project that we need to work on individually, so getting them working together will be our biggest struggle. We also need to make sure we keep to each other's schedules, for example that Ankit can complete the drone by the time that Bhavik has completed training the model and I have finished setting up the vision model on the KRIA. To manage these risks, we are keeping each other accountable and up to date so that we can all finish on time.

No changes were made to the design of the system. Our progress is currently on schedule. We are currently trying to finish our individual components for full integration sometime soon.

By next week, we hope to have the drone partially assembled, assuming the parts get here in time, and to have the balloon detection algorithm working on the KRIA in some capacity. We may have to alter the model to account for the distance from the target, but for now we want the KRIA to simply detect the target object.

Gaurav’s Status Report for 10/05/2024

This week, I created an example Vivado project and got the complete workflow working. I also started on getting the Vision AI working, and I plan to finish that by the end of this weekend. I also worked on the Design presentation with Bhavik and Ankit and helped finalize the design and order all the parts.

I am slightly behind schedule because I was hoping to have more of the toolchain working on the KRIA itself. However, I do not have a good way to connect to the KRIA until the parts come in, and they have not arrived yet. Once the adapter comes, I will be able to flash the SD card with the Linux image and test the vision AI on the board itself.

Next week, I hope to have the camera connected to the KRIA and a basic vision model working. I also want to have tried using Bhavik’s parameters to see if that can identify the target (which is a balloon).

Bhavik’s Status Report for 10/05/2024

At the beginning of the week, I spent time with my team to finalize our design presentation and make any changes required for the presentation. Once we completed the work required for the design presentation, I began working on our path planning algorithm. I wrote up pseudocode for our lawn-mowing algorithm and the various states we will have in our drone. The next step is to write the code out in Arduino. In order to do this, I need to first get a good understanding of the hardware parts we have and understand how to interface with them. I began by looking into how to interface with our radio and wrote up basic test benches in Arduino that I can use to verify my understanding and set up the radio correctly once it arrives.
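To make the path-planning pseudocode a bit more concrete, here is a small Python sketch of the lawn-mowing sweep I have in mind (the real version will be Arduino code on the drone); the search-area dimensions, row spacing, and local x/y frame used here are placeholder assumptions.

# Sketch of lawn-mower (back-and-forth) waypoint generation over a rectangular search area.
# Dimensions, spacing, and the local x/y frame are placeholder assumptions.

def lawnmower_waypoints(width_m, height_m, row_spacing_m):
    """Return a list of (x, y) waypoints sweeping back and forth across the area."""
    waypoints = []
    y = 0.0
    going_right = True
    while y <= height_m:
        if going_right:
            waypoints.append((0.0, y))
            waypoints.append((width_m, y))
        else:
            waypoints.append((width_m, y))
            waypoints.append((0.0, y))
        going_right = not going_right
        y += row_spacing_m
    return waypoints

if __name__ == "__main__":
    # e.g. a 40 m x 30 m area with 5 m between passes
    for wp in lawnmower_waypoints(40.0, 30.0, 5.0):
        print(wp)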

I also began looking into how the altimeter and the GPS will send signals to the Arduino board. For path planning, we need to make use of these signals to determine the drone's current position and its next step. I wrote up some basic test benches to verify these components once they arrive in our order.
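As a worked example of turning a GPS fix into a local position for path planning, the sketch below uses a flat-earth approximation around a home coordinate; this is written in Python for clarity, and the home point and the exact conversion we end up using on the Arduino are still to be decided.

# Sketch: convert a GPS fix to local x/y meters relative to a "home" point using a
# flat-earth approximation (reasonable over the small areas we plan to search).
import math

EARTH_RADIUS_M = 6371000.0

def gps_to_local_xy(lat_deg, lon_deg, home_lat_deg, home_lon_deg):
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    home_lat = math.radians(home_lat_deg)
    home_lon = math.radians(home_lon_deg)
    x = EARTH_RADIUS_M * (lon - home_lon) * math.cos(home_lat)  # east, meters
    y = EARTH_RADIUS_M * (lat - home_lat)                        # north, meters
    return x, y

if __name__ == "__main__":
    # Example with placeholder coordinates slightly north-east of a home point
    print(gps_to_local_xy(40.4435, -79.9425, 40.4430, -79.9430))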

On the computer vision front of the project, we ordered the various parts required for testing. Once I get a hold of a testing camera and the testing balloons, I plan to set up a testing structure to measure the accuracy of the trained model and verify that it can detect the balloon at up to 20 ft. I will do this by mounting the camera on a stand and using a tape measure to walk back 20 ft away from the camera. Then, we can position the balloon in various parts of the frame to make sure the model is able to detect it. We can also place other random objects in view to make sure the model correctly ignores them.

Gaurav’s Status Report for 09/28/24

This week I was able to flash PetaLinux onto the KRIA, as well as read through more of Varun's documentation on how to set up Vitis AI on the KRIA. I also met with Ankit and Bhavik to finalize the entire design and get ready for part ordering. This weekend, I will order a KRIA-compatible camera and set up a basic vision model, as well as order other hardware to connect to the KRIA. Much of the work I did this week was reading documentation, understanding the tools available to me, and checking that I can connect to the KRIA.

My personal progress is on schedule. I was hoping to get more of the Vitis AI setup working; however, I knew that it might take longer than expected and planned my schedule accordingly.

By next week, I hope to have a basic vision model running on the KRIA and have it connected to the camera or have the camera on the way.

Team Status Report for 09/28/2024

Gaurav did Part A, Ankit did Part B and Bhavik did Part C

Currently, the most significant risk for the team is the viability of our Kalman filter approach to getting Euler angle measurements that don't drift. In order for our PID to work, we need reliable 3-DOF measurements, and if we cannot correct for the natural integration drift of our gyro, we will not be able to get a stable drone. We believe we have found good resources online that walk through the process of calibrating the noise in a Kalman filter, and they use the same cheap IMU we are using, so we are confident we can solve this. Additionally, we are currently worried about getting high-fidelity measurements from our GPS, as this depends on weather conditions outside, but there are ways to mitigate this, such as setting up RTK base stations. We will have to see empirically whether this is necessary based on outdoor testing.

We additionally decided to descope a lot of the custom components we were going to build. Instead of designing and 3D printing our own chassis, we will buy an off-the-shelf carbon fiber frame. This will save us prototyping time and will also be a lot more durable than any 3D-printed solution. Additionally, we decided to scrap the custom PCB in favor of a breadboard solution, which should save us PCB prototyping time.

Part A:

Our product will meet a search and rescue operation need. This has extremely large impacts on safety and welfare. In terms of safety, this product will help search and rescue ships canvass a large area in much less time. This means that search and rescue teams can deliver supplies and medical aid much faster and save lives. It will also help satisfy people's basic needs: in lower-income areas that may not have access to robust search and rescue equipment, this drone will help teams maximize the resources they do have by locating the target much faster and more cost-effectively.

Part B:

With regard to social factors, our low-cost solution increases safety regardless of a region's income level. By focusing on a cost-effective solution, we are enabling even impoverished regions that border large bodies of water to provide search and rescue. Communities are responsible for the safety of their people, and our product enables them to provide that safety regardless of the resources that exist in the region.

Part C:

Our project addresses economic factors because we aim to reduce the overall cost and resources required to perform search and rescue operations. In traditional search and rescue missions, large teams, expensive equipment like helicopters, and significant fuel consumption all contribute to a high operational cost. By introducing an autonomous drone-based solution, we can significantly reduce these costs. Our autonomous drones will allow rescue missions to involve fewer people and reduce the need for equipment such as helicopters. Additionally, the drone's design will be open source, so it can be easily produced and used by any team. Since the drone is also cheaper compared to traditional solutions, it is easier for teams in lower-income areas to utilize.

Ankit’s Status Report for 09/28/2024

This week, I worked with my teammates to finalize most of the mechanical and electrical design of our quadcopter. We decided to slightly scale back the scope of our design: instead of 3D printing a frame, we will buy one, and instead of designing our own PCB, we will use a breadboard. Power distribution will happen through an off-the-shelf PCB. I also continued to refine the Kalman Filter we are using in order to provide more stable 3-DOF measurements and began to interface with the ESCs + motors to prove that we can control the motor speed through the Teensy.

I think I am a bit behind schedule. I was hoping to have proven that we can control the speed of our motors and receive high-fidelity measurements from our GPS this week, but because of other work and issues with connecting to our GPS, I was not able to hit this target. As a result, this upcoming week, I hope to finalize our plan for high-fidelity control of our drone motors, get GPS measurements streaming to our Teensy, and get the radio receiver working as well. This will be a long endeavor, as I will have to learn about the PPM protocol and figure out how to format those packets from the Teensy.

Bhavik’s Status Report for 09/28/2024

During this week I spent time learning about computer vision, more specifically object detection. I took time to understand how Yolov8 works and how we can utilize it for our project. For the MVP we will be using a balloon as the object to find. Therefore, I began by identifying a good dataset for training our Yolov8 model. I had to look through various datasets and identify which ones have a diverse set of images and large amounts of training data. Once I found the dataset, I realized that Yolov8 doesn't support the labeling format used by the dataset (VIA format). Thus, I researched methods to convert from VIA format to COCO format. To verify the correctness of the dataset, I wrote a script to visualize the dataset after converting it to COCO format.
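The conversion script is essentially a reshaping of the annotation JSON. The sketch below shows the core idea, assuming a VIA 2.x style polygon export (keys like "regions", "shape_attributes", and "all_points_x/y"); the file names, category name, and exact keys may differ from the dataset's actual export, and image width/height would still need to be filled in from the images themselves.

# Sketch of the VIA -> COCO conversion under the assumptions above; paths are placeholders.
import json

def via_to_coco(via_json_path, out_path, category_name="balloon"):
    with open(via_json_path) as f:
        via = json.load(f)

    images, annotations = [], []
    ann_id = 1
    for img_id, entry in enumerate(via.values(), start=1):
        images.append({"id": img_id, "file_name": entry["filename"]})
        regions = entry["regions"]
        if isinstance(regions, dict):        # older VIA exports store regions as a dict
            regions = regions.values()
        for region in regions:
            xs = region["shape_attributes"]["all_points_x"]
            ys = region["shape_attributes"]["all_points_y"]
            x, y = min(xs), min(ys)
            w, h = max(xs) - x, max(ys) - y
            annotations.append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": 1,
                "segmentation": [[v for pt in zip(xs, ys) for v in pt]],
                "bbox": [x, y, w, h],        # COCO bboxes are [x, y, width, height]
                "area": w * h,
                "iscrowd": 0,
            })
            ann_id += 1

    coco = {"images": images, "annotations": annotations,
            "categories": [{"id": 1, "name": category_name}]}
    with open(out_path, "w") as f:
        json.dump(coco, f)

via_to_coco("via_export.json", "balloons_coco.json")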

I then trained a model for 10 epochs to verify it was functioning as expected. I observed the loss and the testing images and noticed the trend seemed correct. Then, I trained a model for 50 epochs. Once trained, I visualized the model's output on testing images and noticed that the bounding boxes were not always good.
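For reference, a minimal sketch of how a training run like this is kicked off with the ultralytics package is below; the dataset YAML path and test image path are placeholders, not our actual file names.

# Minimal sketch of a Yolov8 training run plus a quick visual check with the ultralytics package.
# The dataset YAML path and test image path are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                     # start from pretrained nano weights
model.train(data="balloons.yaml", epochs=50, imgsz=640)

# Run the trained model on a held-out image and save the annotated output for inspection.
results = model.predict("test_images/balloon_01.jpg", save=True)
print(results[0].boxes.xyxy)                   # predicted bounding boxes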

Going back to the dataset, I realized that it was made for image segmentation and contained segmentation masks instead of bounding boxes. Therefore, I wrote another script to convert the segmentation masks to bounding box labels. I retrained the model yet again for 50 epochs with the converted dataset. Once tested, I saw good results. The next step is to write a program to take in live video and test the model in the real world.
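The mask-to-box conversion mentioned above is a small amount of code: take the min and max of the polygon's x and y coordinates. A sketch of the core step, assuming COCO-style flat polygon lists:

# Sketch: derive a bounding box from a polygon segmentation label.
# COCO-style polygons are flat [x1, y1, x2, y2, ...] lists; the bbox is [x, y, width, height].
def polygon_to_bbox(polygon):
    xs = polygon[0::2]
    ys = polygon[1::2]
    x_min, y_min = min(xs), min(ys)
    return [x_min, y_min, max(xs) - x_min, max(ys) - y_min]

print(polygon_to_bbox([10, 20, 50, 25, 45, 70, 12, 60]))  # -> [10, 20, 40, 50]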

Example of Yolov8 Nano model output (noise introduced to image to make it harder for detection):

Ankit’s Status Report for 09/21/2024

This week I worked with Bhavik to get the MPU6050 up and running and investigate some basic Kalman Filter logic to fuse its gyroscope and accelerometer data together to get precise 3-DOF measurements for the drone. I started by soldering header pins to the MPU6050, then wired it up according to a schematic we found in a SparkFun guide. Then, we loaded some basic example code that showed us how to interface with the device via I2C and poll raw angular velocity and linear acceleration data. We could then do some basic trig to figure out the Euler angles using the linear acceleration data and some basic integration to convert the gyro data into Euler angles. We plan to continue work on a Kalman Filter based approach to fuse these measurements together, along with a dynamics model of the quadcopter, in order to get precise, non-drifting measurements of the Euler angles of our drone at all times. I also worked with Bhavik and Gaurav to finalize a parts list for the drone and place some preliminary orders for the devices we will need (sensors, Teensy, drone motors + ESCs, etc.).
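For reference, the math behind the two angle estimates mentioned above is roughly the following; this is a Python sketch of the logic rather than the Teensy code, and the axis conventions and units are assumptions.

# Sketch of the two Euler-angle estimates we get from the MPU6050, written in Python for
# clarity (the real code runs on the Teensy). Axis conventions and units are assumptions.
import math

def accel_to_roll_pitch(ax, ay, az):
    """Roll/pitch (radians) from the gravity vector measured by the accelerometer."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

def integrate_gyro(angle, rate, dt):
    """One integration step of a gyro rate (rad/s) over dt seconds; this drifts over time."""
    return angle + rate * dt

# Example: accel says we're slightly tilted; gyro integrates a small rotation over 10 ms
print(accel_to_roll_pitch(0.0, 0.5, 9.6))
print(integrate_gyro(0.10, 0.02, 0.01))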

I am currently on schedule and will be focusing on figuring out more of the Kalman Filter and tuning the process and measurement noise in order to get very accurate readings of our Euler angles.
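To keep the structure straight before that tuning is done, here is a minimal single-angle Kalman filter sketch of the fusion (gyro rate drives the prediction, accelerometer angle provides the correction); the Q and R values are placeholders that the tuning is meant to replace, and the real filter will run on the Teensy.

# Minimal single-angle Kalman filter sketch: predict with the gyro rate, correct with the
# accelerometer angle. The q and r values below are placeholders that tuning will replace.
class AngleKalman:
    def __init__(self, q=0.001, r=0.03):
        self.angle = 0.0   # estimated angle (rad)
        self.p = 1.0       # estimate variance
        self.q = q         # process noise variance (gyro / model uncertainty)
        self.r = r         # measurement noise variance (accelerometer angle)

    def predict(self, gyro_rate, dt):
        self.angle += gyro_rate * dt
        self.p += self.q

    def update(self, accel_angle):
        k = self.p / (self.p + self.r)            # Kalman gain
        self.angle += k * (accel_angle - self.angle)
        self.p *= (1.0 - k)
        return self.angle

kf = AngleKalman()
kf.predict(gyro_rate=0.02, dt=0.01)
print(kf.update(accel_angle=0.05))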