Siddesh’s Status Report – 3/13/21

This week I first sent test videos to Vedant so he could test his target identification from roughly 20′ of elevation under a variety of lighting conditions and target motion patterns. I then spent the rest of my effort on target state estimation. The purpose of target state estimation is to model the target’s current motion and predict their future path. We can’t just identify the center of our target every frame and tell the drone “go to this position”. For one, the computer vision is not guaranteed to be perfect, and the detected center may not be exactly at the target’s true center. Being “reactive” like this would likely create very jerky motion. More importantly, the drone’s flight controller needs to plan out movement at least a second or more into the future, so it can’t simply receive a frame of data and react instantaneously. Thus, the goal is to smooth out the frame-by-frame target data output by the CV algorithm and build a model of the user’s current motion that lets us predict their path at least a short while into the future.

In order to do this, I first had to create a test environment where I could generate “random motion” and simulate the input that would be provided to the state estimator. To do this, I generated a random polynomial to model the target’s x and y coordinates as a function of time. Then, at a specified fps, I sampled the target’s current coordinates and added some Gaussian noise to mimic inaccuracies in the target detection before sending these samples to the state estimator.
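For anyone curious, here is a minimal sketch of what that test harness looks like. This is my own reconstruction rather than the exact project code, and the polynomial degree, noise scale, and frame rate are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random cubic coefficients for x(t) and y(t) (degree and scale are assumptions).
coeffs_x = rng.uniform(-1.0, 1.0, size=4)
coeffs_y = rng.uniform(-1.0, 1.0, size=4)

def true_position(t):
    """Evaluate the polynomial motion model at time t."""
    return np.polyval(coeffs_x, t), np.polyval(coeffs_y, t)

def noisy_samples(fps=30, duration=5.0, sigma=0.5):
    """Sample the target at `fps` and add Gaussian noise to mimic CV error."""
    for t in np.arange(0.0, duration, 1.0 / fps):
        x, y = true_position(t)
        yield t, x + rng.normal(0, sigma), y + rng.normal(0, sigma)
```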

For the state estimator, I implemented a Kalman filter from scratch where at each point in time we model six quantities: the target’s x/y position, x/y velocity, and x/y acceleration. Every time the estimator receives a new “sample”, it probabilistically updates these six quantities within its current model of the target’s motion. I then simulated how a drone would move based on the x/y velocities of the estimator’s modeled motion. For some reason I can’t upload .mp4 files to WordPress, but an example of this simulation can be found in StateEstimation1.mp4 in the Google Drive folder (the black points are the target’s actual motion, the red points are the noisy samples that are sent to the estimator, and the green points are how the drone would move based on the estimator output).
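To make the six-quantity model concrete, here is a hedged sketch of a standard constant-acceleration Kalman filter over the state vector [x, y, vx, vy, ax, ay]. The noise covariances and time step are placeholder values, not the tuned parameters from my actual implementation:

```python
import numpy as np

dt = 1.0 / 30.0  # assumed fixed frame interval

# State transition: position integrates velocity, velocity integrates acceleration.
F = np.eye(6)
F[0, 2] = F[1, 3] = dt
F[2, 4] = F[3, 5] = dt
F[0, 4] = F[1, 5] = 0.5 * dt**2

H = np.zeros((2, 6)); H[0, 0] = H[1, 1] = 1.0  # we only observe x/y position
Q = np.eye(6) * 1e-3                            # process noise (placeholder)
R = np.eye(2) * 0.5**2                          # measurement noise (CV error)

x = np.zeros(6)        # state estimate
P = np.eye(6) * 10.0   # state covariance

def kalman_step(z):
    """Predict one frame ahead, then update with measurement z = [mx, my]."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x
```

The simulated drone then just reads vx and vy out of the returned state each frame, which is what produces the smooth green path in the video.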

The Kalman filter approach successfully smoothed out the noise from the red points and enabled the drone to follow the target smoothly and relatively accurately. The next step was to simulate this with a more “human” motion rather than a contrived polynomial. The “human” motion has more gradual changes in acceleration and obeys a speed limit (10 mph). In addition, I let the target wander around indefinitely and centered the screen on the drone’s position to get an idea of how a video feed coming from the drone would look. An example of this is in StateEstimation2.mp4.
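A rough sketch of the “human” motion generator is below: acceleration drifts as a random walk (so changes are gradual) and speed is clamped at 10 mph (~4.47 m/s). The step sizes and noise scales here are my assumptions for illustration, not the exact values used in the simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
SPEED_LIMIT = 4.47  # 10 mph in m/s

def human_walk(fps=30, duration=30.0):
    """Yield target positions with smoothly varying acceleration."""
    dt = 1.0 / fps
    pos = np.zeros(2)
    vel = np.zeros(2)
    acc = np.zeros(2)
    for _ in range(int(duration * fps)):
        acc += rng.normal(0, 0.2, size=2)  # acceleration drifts gradually
        vel += acc * dt
        speed = np.linalg.norm(vel)
        if speed > SPEED_LIMIT:            # enforce the 10 mph cap
            vel *= SPEED_LIMIT / speed
        pos += vel * dt
        yield pos.copy()
```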

After tweaking the parameters, the results seemed very encouraging. The drone was able to follow a smooth motion plan despite the noisy input data, and the target generally stayed very close to the center of the frame in the simulation. For next week, I plan to make a minor change to the estimator so it can receive asynchronous samples (rather than samples at a fixed interval). In addition, I plan to test the state estimator on Vedant’s target identification results from the real-life video. Moreover, while I wasn’t able to design a mount to attach the camera and RPi to the drone (since the camera hadn’t shipped yet), I aim to get that finished next week.
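The asynchronous-sample change should be small: instead of baking a fixed dt into the transition matrix, the filter would rebuild it from the actual elapsed time between samples. Something along these lines (again, a sketch of the planned change, not finished code):

```python
import numpy as np

def transition_matrix(dt):
    """Constant-acceleration transition for state [x, y, vx, vy, ax, ay]."""
    F = np.eye(6)
    F[0, 2] = F[1, 3] = dt
    F[2, 4] = F[3, 5] = dt
    F[0, 4] = F[1, 5] = 0.5 * dt**2
    return F
```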
