Ankit’s Status Report for 10/27/2024

This past week I focused on reformulating the Kalman filter. As I stated last week, our original formulation of the Kalman filter was entirely wrong, so I spent significant time this week rederiving the dynamics equations for the quadcopter and understanding how to properly incorporate the gyro and accelerometer measurement updates into the state estimate. This led to a very productive week, but I still need to figure out how to properly tune the process and measurement noise matrices to ensure that the state estimate does not lag the true state.
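To make the tuning problem concrete, here is a minimal linear Kalman filter sketch (not our actual quadcopter model; the state, matrices, and rate are placeholder values) showing where the process noise Q and measurement noise R enter. Roughly speaking, raising Q relative to R makes the filter trust new measurements more, which reduces lag at the cost of a noisier estimate.

```python
# Minimal linear Kalman filter sketch in NumPy. The two-element state and the
# numbers below are illustrative placeholders, not our drone's formulation.
import numpy as np

dt = 0.01                       # assumed 100 Hz filter rate
F = np.array([[1.0, dt],        # state transition for [position, velocity]
              [0.0, 1.0]])
H = np.array([[0.0, 1.0]])      # pretend we only measure velocity
Q = np.diag([1e-4, 1e-2])       # process noise: how much we trust the model
R = np.array([[1e-1]])          # measurement noise: how much we trust the sensor

def kf_step(x, P, z):
    # Predict: propagate the state and inflate the covariance by Q
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with measurement z, weighted by R
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x, P = np.zeros((2, 1)), np.eye(2)
x, P = kf_step(x, P, z=np.array([[0.3]]))
```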

This upcoming week I will focus heavily on tuning the Kalman filter. Additionally, now that Bhavik and Gaurav were able to assemble the frame, I will be able to test my code for controlling the drone motors; this is boilerplate code I have lying around from a previous project. That should allow us to dive into PID tuning, which is the next major step of our project and something that needs to happen quickly so we can get to actual flight testing.
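For reference, the loops we will be tuning have roughly the following shape. This is a generic discrete PID sketch with made-up gains, not the boilerplate motor code mentioned above or our final controller.

```python
# Generic discrete PID controller sketch; the gains, limits, and 250 Hz loop
# rate below are placeholders, not the values we will tune onto the drone.
class PID:
    def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp so the output maps cleanly onto a motor command range
        return max(self.out_min, min(self.out_max, out))

# Example: a roll-rate loop at 250 Hz with hypothetical gains
roll_pid = PID(kp=0.8, ki=0.05, kd=0.02)
command = roll_pid.update(setpoint=0.0, measurement=0.1, dt=1 / 250)
```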

 

Bhavik's Status Report for 10/26/2024

At the beginning of the week I tested the YOLO balloon detection at long range (20 ft and beyond). At a distance of 20 ft I ran over 50 trials and found that the balloon was detected 98% of the time, with consistently high confidence scores (>80%). I noticed that if I reduced the confidence threshold there were some false positives. I also tested the model against other objects to try to break it, placing distractors such as small balls, phones, and hoodies in the scene; the model proved robust and was not affected by the distractions. Overall, I'm very confident in the model's ability to work.
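For context, each range trial boils down to running inference with a confidence threshold and logging what comes back. The sketch below assumes the Ultralytics YOLO API; the weights file, image name, and 0.8 threshold are placeholders.

```python
# Sketch of a single detection trial, assuming the Ultralytics YOLO package.
# "balloon.pt" and "range_test_20ft.jpg" are hypothetical file names.
from ultralytics import YOLO

model = YOLO("balloon.pt")                         # fine-tuned balloon weights
results = model("range_test_20ft.jpg", conf=0.8)   # higher conf suppresses false positives

for box in results[0].boxes:
    name = results[0].names[int(box.cls)]
    print(f"{name}: conf={float(box.conf):.2f}, box={box.xyxy.tolist()}")
```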

Additionally, this week I worked closely with Gaurav to get the Kria set up for the vision model. We began by setting up the Vitis AI tools and attempted to load the YOLO model onto it. To do that, we first exported the model into ONNX format and then wrote a script to quantize it (from float to int8). Using the quantized model, we tried to upload it to the Kria following the "quick start" guide. However, we realized we would have to write our own script using the Kria's Xilinx libraries to analyze the model and ensure it is compatible with the DPU. Once we wrote the script and ran it on the Kria, we kept getting errors that the hardware is not compatible. We tried multiple different approaches, such as using their ONNX and PyTorch libraries, to get Vitis AI to accept the model. After discussing with Varun (FPGA TA), we were recommended to look into using an AI-accelerated Raspberry Pi 5.
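As a rough illustration of the export-then-quantize flow described above (not the exact Vitis AI tooling we ran, which uses AMD's own quantizer and DPU compiler), the float-to-int8 step looks something like this, here shown with PyTorch's ONNX exporter and ONNX Runtime's quantizer on a stand-in model.

```python
# Generic export-then-quantize sketch. A torchvision model stands in for our
# YOLO network; the real Kria flow goes through the Vitis AI quantizer instead.
import torch
import torchvision
from onnxruntime.quantization import quantize_dynamic, QuantType

model = torchvision.models.mobilenet_v2(weights=None).eval()  # stand-in model
dummy = torch.zeros(1, 3, 224, 224)                           # assumed input shape

# Step 1: export the float model to ONNX
torch.onnx.export(model, dummy, "model_fp32.onnx", opset_version=13)

# Step 2: quantize the weights from float32 to int8
quantize_dynamic("model_fp32.onnx", "model_int8.onnx", weight_type=QuantType.QInt8)
```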

We looked into the pros and cons of the new hardware and found that it would be much more efficient and would simplify our issues. We will work toward resolving this in the next week and getting the model running on the new hardware.

Team Status Report for 10/27/2024

The most significant risks that could jeopardize the success of the project are setting up the Raspberry Pi 5 and getting the drone flying with the weight of the components we have on it. Currently, we are putting a significant load onto the drone, and we want to get at least 10 minutes of flight time. At peak power we may be able to get about 10 minutes, but we are running tests soon to see how much that changes for a drone that does not move much while still carrying a full load.

We have made some changes to the design this week. We realized that the FPGA acceleration is not worth the time and effort of setting up the Kria, on top of the development time needed to mount the Kria on the drone and keep using it. I talked to Varun and he told me about an AI-accelerated Raspberry Pi 5 that we are planning to switch to. As mentioned in Gaurav's status report, the Kria can only manage about 1 TOPS (trillion operations per second), while the AI kit with the Raspberry Pi 5 can do about 13 or 26 TOPS depending on the hat we buy. The increased performance and reduced form factor make the Raspberry Pi the ideal platform for our use going forward. This will cost an additional $100, which we accounted for in our budget by not allocating more than we need. Since we are also getting the Raspberry Pi 5 itself from inventory, the extra cost of this addition is minimal.

We also made some changes to our battery requirements. As mentioned above, we were not sure how long the drone could fly on a 2200 mAh battery when drawing 40 A from it. We will run tests soon to see how much that changes when the motors are not running at peak power but are still carrying a full load. We have some batteries we can use for these tests, and we will order the batteries we actually need afterward, which is how we will mitigate this cost: we are only buying the battery required for our use case and using recycled parts for the initial battery tests.

Although we have made some changes, we are still on schedule and no changes have been made to our timeline.

 

Gaurav’s Status Report for 10/26/2024

This week, I repeatedly met with Bhavik and Ankit to decide how to move forward. There were a couple of things we realized we wanted to reevaluate:

  1. We realized that our battery may not be large enough. We had chosen a 2200 mAh Li-Po battery, but since we need it to supply 40 A, that would mean we only have about 5 minutes of flight time (see the rough estimate after this list). So we have decided to resize our battery to get a longer flight time.
  2. We realized that our FPGA acceleration was not worth it.
    (Performance figures taken from https://www.raspberrypi.com/products/ai-kit/ and https://xilinx.github.io/kria-apps-docs/kv260/2022.1/build/html/docs/nlp-smartvision/docs/hw_arch_accel_nlp.html)

    As the specifications from the sources above show, the Raspberry Pi with AI acceleration offers a significant performance boost in a smaller form factor, since it comes as a hat for the Raspberry Pi 5. As such, it makes sense to switch to this instead of the Kria acceleration.
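A back-of-the-envelope check of the numbers in item 1, assuming the pack's full rated capacity is usable: at a constant 40 A the pack lasts closer to 3 minutes, so the roughly 5-minute figure corresponds to an average draw somewhat below the peak.

```python
# Rough ideal-case endurance estimate for a 2200 mAh pack; the 25 A "average"
# draw is an assumption, not a measured value.
capacity_ah = 2.2
for avg_current_a in (40, 25):
    minutes = capacity_ah / avg_current_a * 60
    print(f"{avg_current_a} A average draw -> ~{minutes:.1f} min")
# 40 A -> ~3.3 min, 25 A -> ~5.3 min
```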

We are very much on schedule. Since the issues with the Kria were causing some of our schedule to be pushed back, changing to the Raspberry Pi will put us ahead of schedule because we are all much more familiar with the platform.

By next week, I hope to have helped Bhavik run the CNN on the Raspberry Pi as we wait for the AI kit to arrive. I will also help Ankit run some of the controls on the drone to check that it can stabilize itself.

Ankit’s Status Report for 10/20/2024

The week before Fall Break, I spent quite a bit of time on the design report. This included writing a lot about the system architecture as well as some of the principal design choices we made regarding the hardware and software of the drone. Additionally, I realized there was a huge problem in the formulation of the Kalman filter: in our state update step we were double-integrating acceleration to recover position, which is especially prone to drift and generally not recommended. As a result, I changed the formulation of the Kalman filter to include the linear and angular velocities in our state, which should make our state estimation much more stable.
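To sketch the change for a single axis (illustrative matrices only, not our exact formulation): position and velocity both live in the state, and the accelerometer reading enters as an input to the prediction rather than being double-integrated inside the state.

```python
# One-axis sketch of the revised state: estimate [position, velocity] and feed
# the measured acceleration in as a control input. Values are placeholders.
import numpy as np

dt = 0.01
F = np.array([[1.0, dt],          # p' = p + v*dt
              [0.0, 1.0]])        # v' = v
B = np.array([[0.5 * dt ** 2],    # measured acceleration enters here...
              [dt]])              # ...instead of being integrated twice in the state

x = np.array([[0.0], [0.0]])      # [position, velocity]
a_meas = 0.2                      # example accelerometer reading (m/s^2)
x = F @ x + B * a_meas            # one prediction step
```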

I am currently behind because of this additional work that was needed for the Kalman filter, but this sort of learning and iteration was expected given that we are designing our own control system and state estimation. I'm hoping to finish up all the Kalman filter work this week and then move on to final integration on the drone, specifically tuning the PID loops. Now that we have hardware in hand, this sort of functional testing is possible, and we will be focusing on it over the next several weeks until the demo in November.

Team Status Report for 10/20/24

Gaurav did Part A, Ankit did Part B, Bhavik did Part C

The most significant risk that could jeopardize the success of the project is scheduling. Making the first demo is definitely a time crunch, and we have a lot of work left to do. We are managing this risk by keeping each other accountable to make sure we stay on track. Our contingency plan for the demo is to show just the object detection working and the drone flying, and to put those two subsystems together for the next demo.

There have not been any changes to the design of the system. Although we have lots of work to do, we should still be able to meet the deadlines we have set for ourselves to make the first demo.

Part A:

The search and rescue drone addresses a critical global need: speeding up disaster search and rescue. For our target audience, search and rescue teams, time is of the essence, and we can greatly reduce the time spent searching for a target by introducing our drone into the search process. By using object detection and keeping the processing power of the Kria entirely on the drone, we achieve rapid, autonomous detection of stranded individuals and maximize the area covered in a given period of time.

With respect to users who do not necessarily have a technical background, our solution will start searching as soon as the drone is turned on. It will run its start sequence, rise into the air, and begin the lawnmower search. This makes our product extremely useful for non-specialized teams or SAR teams in remote regions who do not have access to highly specialized equipment or technically proficient staff. In this way, it scales to meet the global demand for effective, autonomous emergency response systems and greatly improves disaster preparedness around the world.

Part B:

From a cultural perspective, it is the goal of governments (both federal and local) to provide safety for their residents. For coastal communities, this involves providing search and rescue for any people lost at sea or on another body of water. However, a common theme across search and rescue, especially over bodies of water, is that it is an expensive proposition. By creating this low-cost SAR drone, we are hoping to enable communities, regardless of income level, to provide safety for their residents.

The other cultural aspect of our project is that we want to maintain privacy, so all computations related to person detection are done onboard the drone. This means no camera footage is saved or offloaded, which allows us to maintain the privacy of any people detected at sea and of any rescue crews that might show up later.

Part C:

The autonomous SAR drone aims to help find missing people in an ocean environment, and as such, it is important that we minimize the disturbance caused to marine life and the ecosystem during operations. We have designed the drone so that it doesn't have to interact with the ocean environment at all. Given that all the necessary components are onboard, the drone simply has to fly above the ocean without having to physically land or make contact with the water. This allows us to avoid interfering with aquatic animals and the ocean ecosystem.

Furthermore, in order to reduce disturbances, we have designed the drone to have a small mechanical footprint, with a frame made from carbon fiber. This ensures that the drone is very light and can make more efficient use of the battery power on board. Additionally, we have selected efficient yet quiet drone motors so that noise pollution is reduced, helping us avoid disrupting aquatic life. To power the drone, we have elected to use a rechargeable battery, which allows us to reduce waste and reuse the same hardware across multiple flights.

 

Gaurav’s Status Report for 10/20/2024

These last two weeks, I was able to accomplish many of my tasks. The first was the design report: I worked extensively with my teammates to complete it and meet all the requirements. I was also able to get much of my own work done. I finished the basic setup and started work on the DPU setup for the Kria. I also set up Vitis AI on my laptop and have been working with the AMD contact, Varun, and Bhavik to run our own PyTorch model on my laptop. In addition, I worked on flashing PetaLinux onto the Kria so I can connect my laptop to the Kria and upload any necessary files. This is partially working, but I am having some issues getting the internet connection to work.

(Figures: basic block diagram setup; Vitis AI running on my laptop.)

My progress is on schedule. I was able to set up the architecture on the FPGA that can accept the object detection model, so if I can resolve the issues I'm having with the Kria and Vitis AI, we can actually test the model. I just have to get the internet running on the Kria so that it can download the necessary packages and try running Bhavik's PyTorch model in Vitis AI.

By next week, I hope to have the Kria board running with internet access and the necessary Vitis AI files on it. I also hope to have the DPU architecture flashed onto the FPGA fabric so that the Kria is ready to run tests.

 

 

Bhavik's Status Report for 10/19/2024

The majority of the week went into working on the design review report. For this report, I worked on various sections of the paper, including the introduction, use case, architecture, and more. I spent quite some time with my team going over the feedback we received from the design presentation and discussing whether any changes were required to the original plan. We incorporated the necessary feedback and wrote up a comprehensive design report.

I also put some time into interfacing with the XBee radios to begin communication with the drone. In order to test any of the code I wrote, I need the hardware first, so I placed orders for two radio devices (one for the drone and one for the base station, my laptop). I am on schedule but will need to put in some extra effort next week to make sure my code works as expected. Additionally, our previous order has come in, and I was able to get a small start on building the actual drone frame. This work will continue next week with the team so we can finish building the frame and attach the motors for testing any initial code.
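Once the radios arrive, a first test from the base-station side could look something like the sketch below, assuming the XBees run in transparent serial mode over a USB adapter; the port name, baud rate, and message format are placeholders.

```python
# Minimal base-station test over an XBee in transparent serial mode (pyserial).
# "/dev/ttyUSB0", the 9600 baud rate, and the PING message are placeholders.
import serial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as xbee:
    xbee.write(b"PING\n")        # send a test message toward the drone-side radio
    reply = xbee.readline()      # expect the drone side to echo something back
    print("got:", reply)
```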

For the next week, Gaurav and I will work closely to get the vision system working on the Kria. We will upload the model weights to the Kria and hopefully get it running. We will also build out the frame as a team to get started on testing if possible.

Ankit’s Status Report for 10/05/2024

This past week, I continued work on tuning the process and measurement noise for the Kalman filter and on interfacing with the motors. Specifically, this week I looked into the PPM interface and realized that it is quite similar to a PWM implementation. I found some open-source code online that generates the PPM signal and used an oscilloscope to verify that the pulses generated make sense. I wasn't able to test with our actual drone ESCs and motors because the parts have been delayed, but once they arrive this week I should be able to test.
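Conceptually, a PPM frame packs several channels onto one line by encoding each channel as the time between short fixed-width pulses, with a long sync gap padding the frame to a constant period; that structure is what I was checking on the oscilloscope. The sketch below just builds the timing list with typical hobby-RC numbers, which are assumptions rather than our exact configuration.

```python
# Conceptual PPM frame builder: each channel value (1000-2000 us, like servo
# PWM) becomes a pulse plus a gap, and a sync gap fills out the fixed frame.
PULSE_US = 300       # assumed fixed pulse width
FRAME_US = 22_500    # assumed total frame period

def ppm_frame(channels_us):
    segments, used = [], 0
    for ch in channels_us:
        segments.append(("pulse", PULSE_US))
        segments.append(("gap", ch - PULSE_US))  # pulse + gap = channel value
        used += ch
    segments.append(("pulse", PULSE_US))                   # end-of-frame pulse
    segments.append(("gap", FRAME_US - used - PULSE_US))   # sync gap pads the frame
    return segments

print(ppm_frame([1500, 1500, 1000, 1500, 1000, 1000]))     # 6 example channels
```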

I will admit I am a bit behind right now. I'm having some trouble understanding how to tune the process and measurement noise for our Kalman filter. The measurement noise, I understand, can be calculated empirically through analysis of our IMU measurements, but the process noise is something I'm still struggling to figure out how to tune. Additionally, our parts not arriving this week has really set us back. We are hoping to get the parts next week.
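For the measurement noise, the empirical approach mentioned above amounts to logging the IMU while the drone sits still and taking the sample covariance; the file name and column layout below are placeholders for however we end up logging the data.

```python
# Estimate the measurement noise covariance R from a static IMU log.
# "imu_static_log.csv" and its [ax, ay, az, gx, gy, gz] columns are assumptions.
import numpy as np

samples = np.loadtxt("imu_static_log.csv", delimiter=",")
R = np.cov(samples, rowvar=False)   # sample covariance across the log
print(np.diag(R))                   # per-axis variances to plug into the filter
```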

This week I will focus on hardware testing. We got confirmation from the ECE receiving desk that the parts should arrive this week, so I am hoping to actually do the testing. If we can get the Kalman filter working with drone motor control by the end of Fall Break, I'm confident we will be back on schedule.

Team Status Report for 10/05/2024

Currently, the most significant risk that could jeopardize the success of the project is integration. We each have our own parts of the project to work on individually, so getting everything working together will be our biggest struggle. We also need to make sure we keep to each other's schedules, for example that Ankit can complete the drone by the time Bhavik has finished training the model and I have finished setting up the vision model on the Kria. To manage these risks, we are keeping each other accountable and up to date so that we can all finish on time.

No changes were made to the design of the system. Our progress is currently on schedule. We are currently trying to finish our individual components for full integration sometime soon.

By next week, we hope to have the drone partially assembled, assuming the parts get here in time, and to have the balloon detection algorithm working on the Kria in some capacity. We may have to alter the model to account for the distance from the target, but for now we want the Kria to simply detect the target object.