Gaurav’s Status Report for 09/28/24

This week I was able to flash PetaLinux onto the KRIA, as well as read through more of Varun’s documentation on how to set up Vitis AI on the KRIA. I met with Ankit and Bhavik to finalize the entire design and get ready for part ordering. This weekend, I will order a KRIA-compatible camera and other hardware to connect to the KRIA, and set up a basic vision model. Much of the work I did this week was reading documentation, understanding the tools available to me, and checking that I can connect to the KRIA.

My personal progress is on schedule. I was hoping to get more of the Vitis AI setup working; however, I knew it might take longer than expected and planned my schedule accordingly.

By next week, I hope to have a basic vision model running on the KRIA and have it connected to the camera or have the camera on the way.

Team Status Report for 09/28/2024

Gaurav did Part A, Ankit did Part B, and Bhavik did Part C.

Currently, the most significant risk for the team is the viability of our Kalman filter approach to getting Euler angle measurements that don’t drift. In order for our PID to work, we need reliable 3-DOF measurements; if we cannot correct for the natural integration drift of our gyro, we will not be able to get a stable drone. We believe we have found good resources online that walk us through calibrating the noise parameters of our Kalman filter, and they use the same inexpensive IMU we are using, so we are confident we can solve this. Additionally, we are worried about getting high-fidelity measurements from our GPS, as this depends on outdoor weather conditions, but there are ways to mitigate this, such as setting up RTK base stations. We will have to determine empirically whether this is necessary based on outdoor testing.
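
As a sketch of the kind of filter we are planning, here is a minimal 1-D Kalman filter that fuses the gyro rate (prediction step) with an accelerometer-derived angle (correction step) while estimating the gyro bias. This assumes the common angle-plus-gyro-bias state formulation; the noise constants are placeholders we would calibrate for our IMU, not final values.

```python
class AngleKalman:
    """1-D Kalman filter: state is (angle, gyro bias). The gyro rate drives
    the prediction; the accelerometer angle corrects it, so the estimate
    does not drift the way raw gyro integration does.
    q_angle, q_bias, r_measure are assumed noise parameters to be tuned."""

    def __init__(self, q_angle=0.001, q_bias=0.003, r_measure=0.03):
        self.q_angle, self.q_bias, self.r_measure = q_angle, q_bias, r_measure
        self.angle = 0.0   # fused angle estimate (deg)
        self.bias = 0.0    # estimated gyro bias (deg/s)
        self.P = [[0.0, 0.0], [0.0, 0.0]]  # error covariance

    def update(self, accel_angle, gyro_rate, dt):
        # Predict: integrate the bias-corrected gyro rate.
        rate = gyro_rate - self.bias
        self.angle += dt * rate
        # Project the error covariance forward.
        self.P[0][0] += dt * (dt * self.P[1][1] - self.P[0][1]
                              - self.P[1][0] + self.q_angle)
        self.P[0][1] -= dt * self.P[1][1]
        self.P[1][0] -= dt * self.P[1][1]
        self.P[1][1] += self.q_bias * dt
        # Correct with the accelerometer angle measurement.
        S = self.P[0][0] + self.r_measure
        K0, K1 = self.P[0][0] / S, self.P[1][0] / S
        y = accel_angle - self.angle   # innovation
        self.angle += K0 * y
        self.bias += K1 * y
        P00, P01 = self.P[0][0], self.P[0][1]
        self.P[0][0] -= K0 * P00
        self.P[0][1] -= K0 * P01
        self.P[1][0] -= K1 * P00
        self.P[1][1] -= K1 * P01
        return self.angle
```

Fed a constant accelerometer angle and a biased gyro reading, the filter should converge to the true angle while its bias estimate absorbs the gyro offset, which is exactly the drift correction we need.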

We additionally decided to descope many of the custom components we were going to build. Instead of designing and 3D printing our own chassis, we will buy an off-the-shelf carbon fiber frame. This will save prototyping time and be far more durable than any 3D-printed solution. We also decided to scrap the custom PCB in favor of a breadboard, which should save us PCB prototyping time.

Part A:

Our product will meet a search and rescue operational need, which has a large impact on safety and welfare. In terms of safety, this product will help search and rescue ships canvass a large area much faster, meaning teams can deliver supplies and medical aid sooner and save lives. It will also address basic needs: in lower-income areas that may not have access to robust search and rescue equipment, this drone will help teams maximize the resources they do have by locating the target faster and more cost-effectively.

Part B:

With regard to social factors, our low-cost solution increases safety regardless of a region’s income level. By focusing on a cost-effective solution, we enable even impoverished regions that border large bodies of water to provide search and rescue. Communities are responsible for the safety of their people, and our product enables them to provide that safety regardless of the resources that exist in the region.

Part C:

Our project addresses economic factors because we aim to reduce the overall cost and resources required to perform search and rescue operations. In traditional search and rescue missions, large teams, expensive equipment like helicopters, and significant fuel consumption all contribute to high operational cost. By introducing an autonomous drone-based solution, we can significantly reduce these costs: our autonomous drones will allow rescue missions to involve fewer people and reduce the need for equipment such as helicopters. Additionally, the production and distribution of the drone will be open source and can be easily used by any team. Since the drone is also cheaper than traditional drones, it is easier for teams in lower-income areas to adopt.

Ankit’s Status Report for 09/28/2024

This week, I worked with my teammates to finalize most of the mechanical and electrical design of our quadcopter. We decided to scale back the scope of our design slightly: instead of 3D printing a frame, we will buy one, and we are no longer designing our own PCB, instead using a breadboard. Power distribution will happen through an off-the-shelf PCB. I also continued to refine the Kalman filter we are using to provide more stable 3-DOF measurements, and began interfacing with the ESCs and motors to prove that we can control motor speed through the Teensy.

I think I am a bit behind schedule. I was hoping to have proven that we can control the speed of our motors and receive high-fidelity measurements from our GPS this week, but because of other work and issues connecting to our GPS, I was not able to hit this target. This upcoming week, I hope to finalize our plan for high-fidelity control of the drone motors, get GPS measurements streaming to our Teensy, and get the radio receiver working as well. This will be a long endeavor, as I will have to learn about the PPM protocol and figure out how to format those packets from the Teensy.
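
As a first pass at understanding PPM, here is a rough sketch of the frame timing I expect to generate. This assumes the common RC convention (each channel encoded as a 1000–2000 µs spacing between short marker pulses, with a long sync gap padding the frame to a fixed period); the 22.5 ms frame length and 300 µs pulse width are typical assumed values, not measured from our receiver.

```python
FRAME_US = 22500   # total frame period in microseconds (assumed typical value)
PULSE_US = 300     # fixed marker pulse width (assumed)

def ppm_frame(channels_us):
    """Return one PPM frame as a list of (level, duration_us) segments.
    channels_us: per-channel values, each nominally 1000-2000 microseconds.
    The channel value is the rising-edge-to-rising-edge spacing, i.e.
    marker pulse plus the low gap that follows it."""
    segments = []
    used = 0
    for ch in channels_us:
        segments.append(("high", PULSE_US))
        segments.append(("low", ch - PULSE_US))  # gap encodes the value
        used += ch
    segments.append(("high", PULSE_US))          # pulse opening the sync gap
    segments.append(("low", FRAME_US - used - PULSE_US))
    return segments
```

On the Teensy the same arithmetic would drive a timer toggling an output pin; this Python version is just to check that the per-channel spacings and the sync gap add up to the frame period.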

Bhavik’s Status Report for 09/28/2024

During this week I spent time learning about computer vision, specifically object detection. I took time to understand how YOLOv8 works and how we can utilize it for our project. For the MVP we will use a balloon as the object to find. I therefore began by identifying a good dataset for training our YOLOv8 model, looking through various datasets to find ones with a diverse set of images and a large amount of training data. Once I found the dataset, I realized that YOLOv8 doesn’t support the labeling format used by the dataset (VIA format), so I researched methods to convert VIA format to COCO format. To verify the correctness of the dataset, I wrote a script to visualize it after converting it to COCO format.
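
The core of that conversion looks roughly like the sketch below: walk the VIA export, emit COCO `images`, `annotations`, and `categories` entries, and carry the polygon points over as COCO segmentation lists. The field names (`filename`, `regions`, `shape_attributes`, `all_points_x`/`all_points_y`) follow the common VIA export layout, but exact keys vary by VIA version, so this is an assumed structure rather than the final script.

```python
def via_to_coco(via_json, category_name="balloon"):
    """Convert a VIA-style annotation dict to a minimal COCO-style dict.
    Assumes each VIA entry has a 'filename' and a list of 'regions' whose
    'shape_attributes' hold polygon points ('all_points_x'/'all_points_y')."""
    coco = {"images": [], "annotations": [],
            "categories": [{"id": 1, "name": category_name}]}
    ann_id = 1
    for img_id, entry in enumerate(via_json.values(), start=1):
        coco["images"].append({"id": img_id, "file_name": entry["filename"]})
        for region in entry.get("regions", []):
            sa = region["shape_attributes"]
            xs, ys = sa["all_points_x"], sa["all_points_y"]
            # COCO stores polygons flattened as [x1, y1, x2, y2, ...].
            poly = [coord for point in zip(xs, ys) for coord in point]
            coco["annotations"].append({
                "id": ann_id, "image_id": img_id, "category_id": 1,
                "segmentation": [poly],
            })
            ann_id += 1
    return coco
```

A visualization script can then load the resulting dict and draw each polygon over its image to spot-check the conversion.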

I then trained a model for 10 epochs to verify it was functioning as expected. I observed the loss and the test images and noticed the trend seemed correct. Then I trained a model for 50 epochs. Once trained, I visualized the model’s output on test images and noticed that the bounding boxes were not always good.

Going back to the dataset, I realized that it was made for image segmentation and contained segmentation masks instead of bounding boxes. Therefore, I wrote another script to convert the segmentation masks to bounding box labels and retrained the model for 50 epochs with the converted dataset. Once tested, I saw good results. The next step is to write a program to take in live video and test the model in the real world.
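
The mask-to-box conversion reduces to taking the extremes of each polygon’s points. A minimal sketch, assuming the flattened COCO polygon layout and COCO’s `[x, y, width, height]` box convention:

```python
def poly_to_bbox(poly):
    """Convert a flattened COCO segmentation polygon [x1, y1, x2, y2, ...]
    into a COCO-style bbox [x, y, width, height] by taking the min/max of
    the polygon's x and y coordinates."""
    xs, ys = poly[0::2], poly[1::2]   # even indices are x, odd are y
    x_min, y_min = min(xs), min(ys)
    return [x_min, y_min, max(xs) - x_min, max(ys) - y_min]
```

Running this over every annotation in the converted dataset yields the bounding box labels YOLOv8 expects for detection training.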

Example of YOLOv8 Nano model output (noise introduced to the image to make detection harder):

Ankit’s Status Report for 09/21/2024

This week I worked with Bhavik to get the MPU6050 up and running and to investigate some basic Kalman filter logic to fuse its gyroscope and accelerometer data into precise 3-DOF measurements for the drone. I started by soldering header pins to the MPU6050, then wired it up according to a schematic we found in a SparkFun guide. Then we loaded some basic example code that showed us how to interface with the device via I2C and poll raw angular velocity and linear acceleration data. From there, we could do some basic trigonometry to compute the Euler angles from the linear acceleration data, and some basic integration to convert the gyro data into Euler angles. We plan to continue work on a Kalman filter based approach to fuse these measurements together, along with a dynamics model of the quadcopter, to get precise, non-drifting measurements of the drone’s Euler angles at all times. I also worked with Bhavik and Gaurav to finalize a parts list for the drone and place some preliminary orders for the devices we will need (sensors, Teensy, drone motors + ESCs, etc.).
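
The trig and integration steps above can be sketched as follows. This uses the standard gravity-reference formulas for roll and pitch and assumes a particular axis convention, so signs may need flipping to match our actual mounting; the naive gyro integration shown is exactly the part that drifts and motivates the Kalman fusion.

```python
import math

def accel_to_angles(ax, ay, az):
    """Roll and pitch in degrees from linear acceleration (in g's),
    using gravity as the reference. Axis convention is assumed:
    z up when level, x forward, y left; yaw is unobservable from accel."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

def integrate_gyro(angle_deg, rate_dps, dt):
    """Naive gyro integration: angle += rate * dt. Any bias in rate_dps
    accumulates without bound, which is the drift the filter corrects."""
    return angle_deg + rate_dps * dt
```

A level IMU (acceleration of 1 g straight down the z axis) should read zero roll and pitch, and rolling onto its side should read 90 degrees, which gives a quick sanity check before wiring in the real sensor data.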

I am currently on schedule and will focus on working out more of the Kalman filter and tuning the process and measurement noise in order to get very accurate readings of our Euler angles.

Team Status Report for 09/21/2024

Currently, the most significant risks that could jeopardize the success of the project are scheduling and training/calibration time. With scheduling, there are many deliverables and many components we need to ensure work together; if we have underestimated the development time, this could definitely jeopardize the project. The other risk is training/calibration time, which includes training the model and calibrating the PID controller until it is stable and functions properly. If we are not able to calibrate it properly, it could put the rest of the project at risk. We are managing these risks by doing the components that could take the most time first and by working together to deal with hurdles and obstacles. Since the calibration is unique to our drone, there is no way around it; we have to make sure this part functions properly. For the AI training, our contingency plan is to use an even simpler target that will be easier to spot.

There were some major changes to the design of the system this week. We switched from the KV260 to the KR260 board. This change was necessary because another team needed the vision board; it actually reduces our cost, as we no longer have to buy the accessory kit for the vision board. Another change is using a pre-made drone chassis instead of 3D printing one. The chassis we are buying has much of the power distribution and design done for us already, with a strong carbon fiber frame. This will be more durable and solve the power distribution problems we would have run into. It will incur a cost of 60 dollars, but since this is a necessary cost, it will be offset by savings in other areas. The final change is to our use case: we are narrowing it down to flying over a body of water to detect people lost at sea, which will make our drone much more effective.

No updated schedule required, everything is on track.

Gaurav Savant’s Status Report for 09/21/2024

Accomplishments:

This week, I gained some familiarity with the Kria platform and read documentation related to the Kria boards and the vision platform. I also met with Ankit and Bhavik multiple times to discuss our design and work through any changes needed to make our MVP more realistic. Earlier in the week, I spent time building the slide deck for the proposal presentation and practicing the presentation.

I am currently on schedule. Since we have decided against the custom PCB and custom drone frame, my main job is getting the PyTorch model working on the Kria robotics board, which I will definitely be able to complete in time, especially given the abundance of documentation online for the Kria KR260.

In the next week, I hope to have the Vitis AI platform set up and a better understanding of how to use the Vitis platform with the Kria by reading through Varun’s documentation. I also want to flash the Linux image onto the Kria and test running a simple program on the ARM chip.

Bhavik’s Status Report for 09/21/2024

At the beginning of this week, I spent a lot of time helping prepare the proposal presentation slides and meeting with the team to finalize the idea and use cases. I worked with Gaurav to create slides such as the problem statement, solution, technical challenges, and testing.

After the presentation, I reviewed the TA and professor feedback with the team to clarify our use case and make additional modifications. I also wrote a Jupyter notebook that will train our YOLO computer vision model, and tested the code by running a sample training run with 5 epochs. I also met with my team once again to write some code for the IMU and start reading data from it. We were able to see readings from the gyroscope and accelerometer.

My progress is on schedule. For next week, I hope to complete the design presentation with my team and also work with Gaurav to help flash the FPGA and hopefully load a CNN model onto it.