Siddesh’s Status Report- 5/8/21

This week, we focused on filming our video and polishing the project as much as we could. I met with Vedant and Alvin in the lab, and we gathered a lot of footage of the project in action, in addition to collecting logs of all the ROS communications for future processing in simulation. We planned out the length of the video and how we would break down each component of the design. Finally, I recorded my segments at home.

Alvin’s Status Report – 5/8/21

This week, I worked with the team to collect final test metrics and videos for our final presentation and demo video. This involved flying the drone outside and collecting its pose data and camera video so that we could test the motion planner offline and evaluate the performance of the target detection.

This upcoming final week, I'll finish compiling the video demonstration results and record my segment for the demo video.

Vedant's Status Report – 5/8/21

This week, I worked with the team to make the final video and poster. I also reviewed material for the final presentation.

Next week, I plan to refine the video and poster and prepare for the poster session.

Siddesh’s Status Report- 5/1/21

This week, I worked with the team on the final stages of integration. At the start of the week, I helped Alvin with the motion planner. I downloaded the Embotech Forces Pro license and software for the Jetson TX1 and tried to debug issues with the Forces Pro solver not respecting constraints and outputting motion plans with odd quirks (a reluctance to move backwards or to yaw beyond a certain angle). We tried simplifying the model and switching to scipy.optimize.minimize, but the simplified models were very inaccurate and the minimize method took upwards of 17.5 seconds to solve for a motion plan. Eventually, we decided to switch to the preliminary motion planner I had come up with two weeks ago, which only changes the drone's x and y position while keeping its yaw and elevation fixed. This seemed to work well in simulation and, more importantly, the drone's motion was relatively stable, easing some of our safety concerns.
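
As a rough illustration of that simplified planner, the sketch below computes the next x/y setpoint that keeps the drone at a fixed standoff distance from the target while capping per-cycle motion. The function and parameter names are hypothetical stand-ins rather than our exact code, and yaw and altitude are assumed to be held constant elsewhere.

```python
import numpy as np

def plan_xy_step(drone_xy, target_xy, standoff=3.0, max_step=0.5):
    """Return the next x/y setpoint (inputs are numpy arrays of shape (2,)).

    The setpoint lies on the line from the drone to the target, `standoff`
    meters short of the target, and the drone moves at most `max_step`
    meters per planning cycle. Yaw and altitude are held fixed elsewhere.
    """
    to_target = target_xy - drone_xy
    dist = np.linalg.norm(to_target)
    if dist < 1e-6:
        return drone_xy.copy()
    # Desired position sits `standoff` meters short of the target.
    desired = target_xy - (to_target / dist) * standoff
    step = desired - drone_xy
    step_norm = np.linalg.norm(step)
    if step_norm > max_step:
        step = step / step_norm * max_step
    return drone_xy + step
```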

I also worked with the rest of the team in lab on the local position issues. I collected local position data for a variety of test paths in order to understand how the drone's position coordinates mapped to real-world coordinates, and I helped with the camera calibration to debug issues in the conversion from pixel coordinates to real-world distances. In the process, I helped the team reflash the drone's flight controller with the Ardupilot firmware in an effort to make the position data more accurate by adding an optical flow camera. However, we eventually encountered issues with the GPS fix in Ardupilot and decided to switch back to Pixhawk.
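
For reference, the conversion we were debugging follows the standard pinhole camera model. The sketch below assumes a downward-facing camera at a known altitude above flat ground; the intrinsics here are placeholders for calibrated values, not our actual numbers.

```python
def pixel_to_ground_offset(u, v, altitude, fx, fy, cx, cy):
    """Back-project a pixel (u, v) to a ground-plane offset in meters,
    assuming a downward-facing camera at height `altitude` above flat
    ground and pinhole intrinsics (fx, fy, cx, cy) from calibration."""
    x = (u - cx) * altitude / fx   # offset along the image x-axis, in meters
    y = (v - cy) * altitude / fy   # offset along the image y-axis, in meters
    return x, y
```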

The rest of the time spent with the team in lab involved debugging an assortment of miscellaneous flight controller issues, such as a loss of binding with the radio transmitter and a refusal to switch to auto mode mid-flight. Eventually, despite the turbulent weather, we got the drone airborne and hovering in place, but we decided against running our full motion planning stack because of the drone's erratic behavior. Instead, we focused our energies on getting as many metrics as we could from our actual setup (target precision and recall, video FPS, streaming latency) and getting the rest of the metrics from simulation.

Team Status Report- 5/1/21

This week, we aimed to complete our integration and run our full stack successfully on the drone. However, several issues presented themselves along the way. While we could issue positional commands to move the drone to certain positions, we needed to understand how its own internal local position coordinates mapped to real-world coordinates. After conducting tests moving the drone across different axes and sending the local position estimates through the Rpi to the TX1, we noticed some startling issues. First, even while stationary, the position estimates would constantly drift within +/- 3 m in the x and y directions. Second, the initial x, y, and z estimates seemed to be essentially random: the flight controller is supposed to initialize (x, y, z) to (0, 0, 0) at the location where it is turned on, yet each of these coordinates was initialized to anything between -2 and 2 meters.
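
To quantify this drift, a small ROS node along the lines below can log the flight controller's local position while the drone sits still and report the spread afterwards. This is only a sketch: it assumes MAVROS-style PoseStamped messages, and the topic name is an assumption rather than our exact configuration.

```python
import rospy
from geometry_msgs.msg import PoseStamped

xs, ys = [], []

def pose_cb(msg):
    # Record the flight controller's local position estimate.
    xs.append(msg.pose.position.x)
    ys.append(msg.pose.position.y)

rospy.init_node("drift_logger")
rospy.Subscriber("/mavros/local_position/pose", PoseStamped, pose_cb)
rospy.spin()

# After shutdown (Ctrl-C), report how far the "stationary" estimate wandered.
if xs:
    print("x drift range: %.2f m" % (max(xs) - min(xs)))
    print("y drift range: %.2f m" % (max(ys) - min(ys)))
```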

We tried to combat these inaccuracies by downloading a different firmware for the drone's flight controller. Our current Pixhawk firmware relied solely on GPS and IMU data to estimate local position, whereas the Ardupilot firmware allowed us to configure an optical flow camera for more accurate local position estimates. An added benefit was that, without the need for GPS, we could even test the drone indoors. This was especially important since the weather this week was very rainy, and even when the skies were clear there was a considerable amount of wind that could have been interfering with the position estimates. Unfortunately, there were many bugs with the Ardupilot firmware, namely that the GPS reported a 3D fix error when arming despite the fact that we had disabled the GPS requirement for arming. After troubleshooting this issue, we eventually decided to switch back to the Pixhawk firmware and see if we could fly despite the inaccurate position estimates. In doing so, the radio transmitter for manual override somehow lost its binding with the drone's flight controller, but we managed to address that issue as well.

In addition to the local position issues, the other major problem we needed to debug this week was our motion planner. The core issue was the tradeoff between the planner's speed and its accuracy. Using scipy.optimize.minimize resulted in very accurate motion plans, but the planner would take upwards of 18 seconds to solve for a plan. By reducing the number of iterations and relaxing constraints, we could bring this down to 3 seconds (with much less accurate plans), but that was still too much lag to accurately follow an object. Another approach we took was acquiring a license for Embotech Forces Pro, a cutting-edge solver library advertised for its speed. While Forces Pro would solve for a motion plan in less than 0.15 seconds using the same constraints and objective function, its results were less than ideal: for some reason, the resulting plans were reluctant to move backwards or to yaw beyond +/- 40 degrees. Eventually, however, we were able to create a reliable motion planner by reducing the complexity of the problem and reverting to the simple model we made a couple of weeks ago. This model keeps the yaw and elevation of the drone fixed and only changes the x and y position. The results of this motion planning and full tests of our drone in simulation can be found in the "recorded data" folder in our Google Drive folder.
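
To make the speed/accuracy tradeoff concrete, the slow-but-accurate approach was conceptually an optimization like the sketch below, which uses scipy.optimize.minimize to pick a short horizon of waypoints that trade tracking error against motion effort under a per-step displacement limit. This is a simplified, hypothetical reconstruction; our real formulation also handled yaw, elevation, and the drone dynamics, which is exactly why it was so slow.

```python
import numpy as np
from scipy.optimize import minimize

def solve_plan(start_xy, target_xy, horizon=10, max_step=0.5):
    """Solve for `horizon` x/y waypoints (inputs are numpy arrays of shape (2,))
    that approach the target while penalizing large per-step motion.
    Illustrative formulation only."""
    def cost(z):
        pts = z.reshape(horizon, 2)
        # Stay close to the target over the whole horizon...
        tracking = np.sum(np.linalg.norm(pts - target_xy, axis=1))
        # ...while not commanding large jumps between waypoints.
        steps = np.diff(np.vstack([start_xy, pts]), axis=0)
        effort = np.sum(np.linalg.norm(steps, axis=1) ** 2)
        return tracking + 0.1 * effort

    def step_limits(z):
        pts = z.reshape(horizon, 2)
        steps = np.diff(np.vstack([start_xy, pts]), axis=0)
        # Non-negative when every per-cycle step is within the limit.
        return max_step - np.linalg.norm(steps, axis=1)

    z0 = np.tile(start_xy, horizon)
    res = minimize(cost, z0, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": step_limits}])
    return res.x.reshape(horizon, 2)
```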

Unfortunately, despite the success in motion planning and finalizing a working solution in simulation, we were not able to execute the same solution on the actual drone. We tried pressing forward despite the inaccuracies in local position, but ran into safety concerns. In one simple test, we manually guided the drone to cruising altitude, then switched off manual control and sent the drone a command to hold its pose. Rather than holding the pose, however, the drone would swing wildly around in a circle: because its local position estimate drifted while it was standing still, the drone thought it was moving and overcompensated to get back. Because of this, we landed the drone out of concern for safety and decided to use simulation data to measure our tracking accuracy. For measuring target detection precision and recall, however, we used data collected from the drone's camera while we controlled it manually.

Vedant’s Status Report 5/1/21

This week, I worked with Sid and Alvin on integrating all the systems together and debugging the issues that came with integration. We faced many problems, such as the drone losing communication with the remote controller and the drone radio, and flight commands issued through code breaking down even though we had successfully tested them before. Debugging and collecting the data needed for our requirements was everything I worked on this week. After taking our aerial shots, I also helped write the code that calculates the false positives and false negatives of the object detection.
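
The false positive/negative counting is conceptually straightforward; a sketch of the kind of bookkeeping involved, assuming detections are matched to hand-labeled boxes by IoU (the helper names and threshold here are hypothetical, not our exact script), looks like this:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def precision_recall(detections_per_frame, labels_per_frame, iou_thresh=0.5):
    """Count true/false positives and false negatives across frames."""
    tp = fp = fn = 0
    for dets, labels in zip(detections_per_frame, labels_per_frame):
        matched = set()
        for det in dets:
            hits = [i for i, lab in enumerate(labels)
                    if i not in matched and iou(det, lab) >= iou_thresh]
            if hits:
                matched.add(hits[0])
                tp += 1
            else:
                fp += 1
        fn += len(labels) - len(matched)
    precision = tp / float(tp + fp) if tp + fp else 0.0
    recall = tp / float(tp + fn) if tp + fn else 0.0
    return precision, recall
```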

Next week, I am swamped with class work, but I will most likely spend a little time trying to debug why the flight commands sent through code are not working.

Alvin’s Status Update – 5/1/21

This week, I worked with Sid and Vedant on integrating everything together and debugging issues with drone communications. We faced a lot of strange setbacks. In the previous week, we were able to easily control the drone with the remote control and even command it via code. This week, both of these features broke down: remote control of the drone suddenly stopped working, and the drone ignored the flight commands we sent through code.

Besides debugging, Sid and I set up a thread-based motion planner to account for slow planning times. Our drone requires motion commands at 2 Hz, or one message every 0.5 seconds. Our motion plan originally took upwards of 17 seconds to solve, and after various tricks, such as reducing the planning horizon and dropping floating-point precision from 64-bit to 16-bit, we achieved planning on the order of 0.7 seconds. With our thread-based plan generation, the main planner streams a previously generated trajectory at the required rate while a separate thread solves for a new trajectory.
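
A minimal sketch of that threading pattern is below; names like solve_trajectory, publish_setpoint, and get_state are placeholders for our actual optimizer, ROS publishing code, and state estimate, not real APIs.

```python
import threading
import time

class ThreadedPlanner:
    """Stream setpoints from the last finished plan at 2 Hz while a
    background thread solves for the next plan."""

    def __init__(self, solve_trajectory, publish_setpoint):
        self.solve_trajectory = solve_trajectory  # slow optimizer (~0.7 s per call)
        self.publish_setpoint = publish_setpoint  # sends one command to the drone
        self.lock = threading.Lock()
        self.plan = []    # setpoints from the most recent solve
        self.index = 0    # next setpoint to stream

    def _replan_loop(self, get_state):
        while True:
            # Slow call runs off the main loop; get_state() returns the
            # latest drone/target state for the solver.
            new_plan = self.solve_trajectory(get_state())
            with self.lock:
                self.plan, self.index = new_plan, 0

    def run(self, get_state):
        threading.Thread(target=self._replan_loop, args=(get_state,),
                         daemon=True).start()
        while True:
            with self.lock:
                if self.plan:
                    # Keep streaming the most recent plan, holding the last
                    # setpoint if the solver hasn't finished a new one yet.
                    setpoint = self.plan[min(self.index, len(self.plan) - 1)]
                    self.index += 1
                    self.publish_setpoint(setpoint)
            time.sleep(0.5)  # 2 Hz command rate required by the drone
```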

Overall, the plans from these flat system dynamics still produced somewhat shaky behavior, and we eventually used Sid's more stable planner over x/y space.

For the upcoming week, I am completely booked with other classes, so we may end up sticking with what we have: all functional systems in simulation, but only manual flight on the physical drone. If we get lucky and the drone accepts our autonomous control, maybe we’ll have a full product.