This past Thursday we had a simulation meeting in which Hana showed us what she had. Since she was having trouble with calculating the predicted landing location of the ball, I took over the simulation.


I was able to make a simulation with basic functionality. The simulation shows the ball landing at its actual landing location and the robot moving to the predicted landing location. Once the ball has landed and the robot has reached the predicted landing location, the robot travels back to its starting location. The robot determines whether or not it caught the ball; if it did, the ball is shown being carried by the robot on the way back to the starting location. The robot's catching range, a 2 m radius, is drawn as a circle around it. Additionally, the distances, the sizes of the objects, and the speeds of the ball and robot are all drawn to scale.
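The catch check reduces to a simple distance test against the 2 m radius. Here is a minimal sketch of that idea; the function and variable names are my own for illustration, not taken from the actual simulation code:

```python
import math

CATCH_RADIUS_M = 2.0  # the robot's catching range, drawn as a circle in the simulation

def caught(robot_xy, landing_xy, catch_radius=CATCH_RADIUS_M):
    """Return True if the actual landing point falls inside the robot's catch circle."""
    dx = landing_xy[0] - robot_xy[0]
    dy = landing_xy[1] - robot_xy[1]
    return math.hypot(dx, dy) <= catch_radius

# Robot waiting at the predicted spot; ball actually lands 1.5 m away
print(caught((0.0, 0.0), (1.5, 0.0)))  # True
```

A miss is just the same test failing, e.g. when the ball lands 2.5 m from the robot.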


I started from scratch by deriving my own calculations to predict the ball's landing location using the SUVAT equations and trigonometry. The predicted landing location is calculated from Vx, Vy, and Vz at the ball's release, the height of the ball when it is released, and the horizontal angle at which the ball is thrown. I started by calculating the predicted landing location in 2D using Vz, Vy, and the release height. I then added the third dimension by introducing Vx into my calculations. Finally, I added the horizontal angle component by rotating the predicted landing location around the user by this angle (Z-axis: vertical axis; Y-axis: horizontal axis (forward/backward); X-axis: horizontal axis (left/right)).
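The steps above can be sketched in code: solve the vertical SUVAT equation for flight time, multiply out the horizontal velocities, then rotate around the vertical axis. This is a sketch of the approach, not the actual code; the function and parameter names are mine, and I assume upward Vz is positive and g = 9.81 m/s².

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def predicted_landing(vx, vy, vz, height, angle_deg):
    """Predict the ball's landing point (x, y) on the ground.

    Axes follow the post's convention: Z vertical, Y forward/backward,
    X left/right. angle_deg is the horizontal throw angle; the landing
    point is rotated around the thrower (the Z-axis) by this angle.
    """
    # Flight time from the vertical SUVAT equation:
    #   0 = height + vz*t - 0.5*G*t^2   (positive root)
    t = (vz + math.sqrt(vz * vz + 2.0 * G * height)) / G

    # Horizontal displacement before the rotation is applied
    x = vx * t
    y = vy * t

    # Rotate the landing point around the vertical (Z) axis
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```

For example, a ball released 4.905 m up with Vy = 10 m/s and no vertical velocity stays airborne for exactly 1 s and lands 10 m straight ahead.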


Most of the bugs I ran into came from predicting the landing location of the ball and from having the robot travel in different directions. Their source was the inverted screen axes: the y-axis of the screen increases downward (as in tkinter), which made sign mistakes easy.


I have made a lot of progress on the simulation.

Here is the code I have written in the last few days:  https://github.com/DanBarychev/MPU9250-3D-Kalman-Filter/blob/master/POV.pde

Here are some recordings of the simulation: https://drive.google.com/open?id=1G5XynpOfx8swegMj2C3eaOucQ7vf324s

One of the recordings shows the robot catching the ball and bringing it back to the robot's starting location. The other shows the robot missing the ball when it is thrown out of range.


Moreover, I received the wires I ordered and they work. I am now able to consistently collect IMU data.


Because of all the time I spent on the simulation, my progress with the 3D positioning was delayed. However, it was crucial to get a basic version of the simulation working. Tomorrow, I will try to draw the tracings of the AHRS algorithm at different time steps in different colors so we can confirm that the most accurate tracings are the most recent ones. I will then work with Dan to improve the 3D positioning and to detect the ball throw using the data from the second IMU. I will work with Dan over Zoom as much as possible, since I do not have the soldering equipment needed to use the second IMU. Additionally, I will continue working on the simulation by adding details and images to make it look better.
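Coloring the tracings by age could be as simple as mapping each time step to a color, with old traces fading toward grey and recent ones ramping toward red. This is a hypothetical helper sketching one possible mapping, not something from the actual code:

```python
def trace_color(step, total_steps):
    """Map a time step to an RGB triple: oldest traces grey, newest ones red.

    Hypothetical helper for coloring AHRS position tracings by age so the
    most recent (presumably most accurate) ones stand out.
    """
    frac = step / max(total_steps - 1, 1)  # 0.0 = oldest, 1.0 = newest
    red = int(100 + 155 * frac)            # red ramps up over time
    grey = int(200 * (1.0 - frac))         # grey component fades out
    return (red, grey, grey)
```

Each tracing would then be drawn with `trace_color(i, n)`, so the progression from grey to red makes drift in the older estimates visible at a glance.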

