Team Status Report - 5/1/21

This week we aimed to complete our integration and run our full stack successfully on the drone. However, several issues presented themselves along the way. First, while we could issue positional commands to move the drone to certain positions, we needed to understand how its internal local position coordinates mapped to real-world coordinates. After conducting tests moving the drone across different axes and sending the local position estimates through the RPi to the TX1, we noticed some startling issues. Even while stationary, the positional estimates would constantly drift within +/- 3 m in the x and y directions. Additionally, the initial x, y, and z estimates seemed to be completely random. The flight controller is supposed to initialize (x, y, z) to (0, 0, 0) at the location where it is powered on, yet each coordinate was initialized to anything between -2 and 2 meters.
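For context, the kind of stationary drift test we ran looks roughly like the sketch below, which samples LOCAL_POSITION_NED messages over MAVLink using pymavlink. The connection string, baud rate, and one-minute sampling window are assumptions for illustration, not our exact setup.

```python
# Hypothetical drift-logging sketch using pymavlink (not our exact test script).
# Assumes the flight controller is reachable over a serial link from the RPi.
import time
from pymavlink import mavutil

# Connection string and baud rate are assumptions; adjust for your setup.
master = mavutil.mavlink_connection('/dev/ttyACM0', baud=57600)
master.wait_heartbeat()

positions = []
start = time.time()
while time.time() - start < 60:  # sample for one minute while stationary
    msg = master.recv_match(type='LOCAL_POSITION_NED', blocking=True, timeout=5)
    if msg is not None:
        positions.append((msg.x, msg.y, msg.z))

# With the drone sitting still, any spread here is pure estimator drift.
xs, ys, zs = zip(*positions)
print(f"x drift: {max(xs) - min(xs):.2f} m, y drift: {max(ys) - min(ys):.2f} m")
print(f"initial estimate: ({xs[0]:.2f}, {ys[0]:.2f}, {zs[0]:.2f})")
```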

We tried to combat these inaccuracies by flashing different firmware onto the drone's flight controller. Our current Pixhawk firmware relied solely on GPS and IMU data to estimate local position. The ArduPilot firmware, on the other hand, allowed us to configure an optical flow camera for more accurate local position estimates. An added benefit was that, with no need for GPS, we could even test the drone indoors. This was especially important since the weather this week was very rainy, and even when the skies were clear, there was considerable wind that could have been interfering with the position estimates. Unfortunately, we ran into several bugs with the ArduPilot firmware, most notably that the GPS reported a 3D fix error when arming even though we had disabled the GPS requirement for arming. After troubleshooting this issue, we eventually decided to switch back to the Pixhawk firmware and see if we could fly despite the inaccurate position estimates. In the process, the radio transmitter for manual override somehow lost its binding with the drone's flight controller, but we managed to address that issue as well.
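For reference, the ArduPilot-side configuration amounts to a handful of parameter writes, which can be issued over MAVLink as sketched below. The specific parameter names and values (FLOW_TYPE, EK3_SRC1_VELXY, ARMING_CHECK) follow ArduPilot's optical-flow documentation, but the right values depend on the firmware version and sensor, so treat them as illustrative rather than a working recipe.

```python
# Hedged sketch of the ArduPilot parameter changes we attempted, via pymavlink.
# Parameter names/values are assumptions based on ArduPilot docs; verify
# against your firmware version before flying.
from pymavlink import mavutil

master = mavutil.mavlink_connection('/dev/ttyACM0', baud=57600)
master.wait_heartbeat()

def set_param(name, value):
    master.mav.param_set_send(
        master.target_system, master.target_component,
        name.encode('utf-8'), value,
        mavutil.mavlink.MAV_PARAM_TYPE_REAL32)

set_param('FLOW_TYPE', 5)       # optical flow sensor over MAVLink (assumed type)
set_param('EK3_SRC1_VELXY', 5)  # fuse optical-flow velocity in the EKF
set_param('ARMING_CHECK', 0)    # disable pre-arm checks, incl. the GPS 3D fix
```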

In addition to the local position issues, the other major problem we needed to debug this week was our motion planner. The core issue was the tradeoff between the planner's speed and its accuracy. Using scipy.optimize.minimize resulted in very accurate motion plans, but the planner would take upwards of 18 seconds to solve for one. By reducing the number of iterations and relaxing constraints, we brought this down to 3 seconds (with much less accurate plans), but that was still too much lag to accurately follow an object. Another approach we took was acquiring a license for Embotech FORCES Pro, a cutting-edge solver library advertised for its speed. While FORCES Pro would solve for a motion plan in under 0.15 seconds using the same constraints and objective function, its results were less than ideal: for reasons we could not pin down, the solver was reluctant to move backwards or to yaw beyond +/- 40 degrees. Eventually, however, we were able to create a reliable motion planner by reducing the complexity of the problem and reverting to a simpler model we had built a couple of weeks earlier. This model keeps the drone's yaw and elevation fixed, changing only its x and y position. The results of this motion planning and full tests of our drone in simulation can be found in the "recorded data" folder in our Google Drive folder.
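To illustrate the simplified formulation, here is a minimal sketch of an x/y-only planner built on scipy.optimize.minimize. The horizon length, cost weights, and velocity bound are made-up stand-ins for our actual parameters, and the maxiter cap is the knob that trades accuracy for speed as described above.

```python
# Minimal sketch of an x/y-only motion planner (illustrative, not our exact
# formulation): optimize a short horizon of velocity commands to close in on
# a target while yaw and altitude stay fixed.
import numpy as np
from scipy.optimize import minimize

H = 10                          # planning horizon (steps); assumed value
DT = 0.1                        # timestep (s); assumed value
V_MAX = 2.0                     # assumed max speed (m/s)
target = np.array([5.0, 3.0])   # target's current x, y (example input)

def cost(u):
    # u holds H velocity commands (vx, vy); integrate to get positions.
    vel = u.reshape(H, 2)
    pos = np.cumsum(vel * DT, axis=0)
    # Penalize distance to the target at each step, most heavily at the end.
    weights = np.linspace(0.1, 1.0, H)
    return np.sum(weights * np.linalg.norm(pos - target, axis=1))

u0 = np.zeros(2 * H)
bounds = [(-V_MAX, V_MAX)] * (2 * H)
# Capping iterations is the speed/accuracy tradeoff discussed above.
res = minimize(cost, u0, bounds=bounds, method='SLSQP',
               options={'maxiter': 30})
plan = res.x.reshape(H, 2)      # H velocity commands to execute
```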

Unfortunately, despite the success in motion planning and finalizing a working solution in simulation, we were not able to execute the same solution on the actual drone. We tried pressing forward despite the inaccuracies in local position, but ran into safety concerns. In one simple test, we manually guided the drone to cruising altitude, then switched off manual control and sent the drone a signal to hold its pose. Rather than holding the pose, however, the drone would swing wildly around in a circle. Because the local position estimate drifted while the drone was stationary, the drone believed it was moving and overcompensated to get back. Out of concern for safety, we landed the drone and decided to use simulation data to measure our tracking accuracy. For measuring target detection precision and recall, however, we used data collected from the drone's camera while flying it manually.
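The overcompensation effect is easy to reproduce in a toy simulation. The sketch below (purely illustrative, not our flight code, with an assumed gain and drift magnitude) applies a proportional position-hold controller to a physically stationary drone whose position estimate drifts as a random walk; the controller steadily "corrects" the drone away from the hold point.

```python
# Toy illustration of why estimate drift breaks position hold: the drone is
# stationary, but a random-walk bias in its position estimate makes the
# controller chase phantom errors.
import numpy as np

rng = np.random.default_rng(0)
KP = 1.0                  # proportional gain on position error (assumed)
DT = 0.1                  # control timestep (s); assumed value
true_pos = np.zeros(2)    # drone starts exactly at the hold point
bias = np.zeros(2)        # drifting error in the local position estimate

for step in range(200):
    bias += rng.normal(0, 0.05, 2)       # estimate drifts a few meters over time
    estimated_pos = true_pos + bias
    velocity_cmd = -KP * estimated_pos   # controller "corrects" the phantom error
    true_pos += velocity_cmd * DT        # drone actually moves off the hold point
    if step % 50 == 0:
        print(f"step {step}: true offset {np.linalg.norm(true_pos):.2f} m")
```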
