This past week, we’ve made some progress on our project and considered the ethical ramifications of our failure cases. We achieved much more accurate 3D trajectory tracking; however, we’ve come to realize that we may need more than one pre-throw to accurately capture the motion and overrule garbage data. We also figured out how to make two IMUs addressable at once on our setup, and worked further on refining the simulation, which suffered from input under-sensitivity and some miscalculations.
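One way to use several pre-throws to overrule garbage data is to compare each pre-throw's landing estimate against the per-axis median of all of them and discard outliers. This is only a sketch of that idea, not our actual pipeline; the sample numbers and the deviation threshold are made up for illustration:

```python
import statistics

def filter_prethrows(prethrows, max_dev=0.5):
    """Keep pre-throw landing estimates close to the per-axis median.

    prethrows: list of (x, y, z) landing estimates, one per pre-throw.
    max_dev: maximum allowed distance (m) from the median estimate
             (hypothetical threshold; would need tuning on real data).
    """
    med = tuple(statistics.median(axis) for axis in zip(*prethrows))

    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, med)) ** 0.5

    return [p for p in prethrows if dist(p) <= max_dev]

# Three consistent pre-throws plus one garbage reading:
estimates = [(2.0, 1.0, 0.0), (2.1, 0.9, 0.0), (1.9, 1.1, 0.0), (9.0, -4.0, 0.0)]
good = filter_prethrows(estimates)  # the (9.0, -4.0, 0.0) reading is dropped
```

With only one pre-throw there is no redundancy to vote against a bad reading, which is why a single throw can't distinguish garbage data from a genuine motion.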

From our ethics discussions, we realized that we had some potential safety issues and could inadvertently marginalize certain groups. One thing we hadn’t really thought about was how our project, originally geared towards children, would handle abuse or damage. It’s very likely that a child would throw something much larger than a small ball at our project, and the resulting damage could injure the child if the device malfunctioned. Furthermore, this project excludes people with certain disabilities (especially those unable to move their arms and throw an object), so in future iterations we’d like to consider an additional component that helps facilitate the throw.

This week, Luca was able to build a simulation with basic functionality. The simulation shows the ball landing at the actual landing location and the robot moving to the predicted landing location. Once the ball has landed and the robot has reached the predicted landing location, the robot travels back to its starting location. The robot can determine whether or not it caught the ball; if it did, the ball is shown being carried by the robot on the way back to the starting location. The robot’s 2 m catching range is drawn as a circle around it. Additionally, the distances, the object sizes, and the speeds of the ball and robot are all drawn to scale.
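The catch decision described above reduces to a distance check: the ball counts as caught if its actual landing point falls inside the robot's 2 m circle. A minimal sketch of that check (the positions are invented for illustration; the actual simulation is written in Processing):

```python
import math

CATCH_RADIUS_M = 2.0  # the robot's catching range, as in the simulation

def ball_caught(ball_landing, robot_pos, radius=CATCH_RADIUS_M):
    """Return True if the ball lands within the robot's catch circle.

    ball_landing, robot_pos: (x, y) ground positions in metres.
    """
    return math.dist(ball_landing, robot_pos) <= radius

# Robot reaches (3.0, 4.0); ball actually lands at (4.5, 4.0):
print(ball_caught((4.5, 4.0), (3.0, 4.0)))  # 1.5 m away -> True
print(ball_caught((8.0, 4.0), (3.0, 4.0)))  # 5.0 m away -> False
```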

Here is the code he has written in the last few days:  https://github.com/DanBarychev/MPU9250-3D-Kalman-Filter/blob/master/POV.pde

Here are some recordings of the simulation: https://drive.google.com/open?id=1G5XynpOfx8swegMj2C3eaOucQ7vf324s

One of the recordings shows the robot catching the ball and bringing it back to the robot’s starting location. The other shows the robot missing the ball when it is thrown out of range.

Dan spent a considerable amount of time trying to solve issues related to adding the second IMU and was eventually able to resolve them with Jiaqi’s help. He also soldered together all of the IMUs, wires, and devices to create a useful container that will go on top of the glove.
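For context on the second-IMU fix: the MPU9250 picks its I2C address from its AD0 pin, 0x68 with AD0 low and 0x69 with AD0 high, so tying AD0 high on one unit lets two IMUs share the same bus. A tiny sketch of that address selection (the actual bus reads are hardware-specific and omitted here):

```python
MPU9250_BASE_ADDR = 0x68  # I2C address with the AD0 pin tied low

def mpu9250_addr(ad0_high):
    """I2C address of an MPU9250 given its AD0 pin level.

    Tying AD0 high on the second IMU moves it to 0x69, so both
    devices can sit on one I2C bus without an address conflict.
    """
    return MPU9250_BASE_ADDR | (1 if ad0_high else 0)

imu_a = mpu9250_addr(ad0_high=False)  # 0x68
imu_b = mpu9250_addr(ad0_high=True)   # 0x69
```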

Hana realized that there were some errors in the calculation of the ball trajectory and landing location and has been working to remedy them. Another difficulty is that we currently have nothing to verify the results against, so we won’t know how accurate the simulation is until we can compare the predicted landing location with the final landing location measured on the real, in-person grid.
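Hana's exact trajectory math isn't reproduced here, but the simplest drag-free model predicts the landing point by solving z(t) = z0 + vz·t − g·t²/2 = 0 for the positive root and propagating the horizontal velocity over that time. A sketch under those assumptions (no drag or spin, flat floor; the sample release state is invented):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_point(pos0, vel0):
    """Predict the (x, y) landing point of a ball in free flight.

    pos0: (x, y, z) release position in metres, z measured up from the floor.
    vel0: (vx, vy, vz) release velocity in m/s.
    Solves z0 + vz*t - G*t^2/2 = 0 for the positive root, then
    advances the horizontal position by that flight time.
    """
    x0, y0, z0 = pos0
    vx, vy, vz = vel0
    t = (vz + math.sqrt(vz * vz + 2.0 * G * z0)) / G  # time until z = 0
    return (x0 + vx * t, y0 + vy * t)

# Ball released 1 m above the floor, moving 3 m/s forward and 2 m/s up:
x, y = landing_point((0.0, 0.0, 1.0), (3.0, 0.0, 2.0))  # roughly (2.1, 0.0)
```

Checking a hand-computable case like this against the simulation output is one cheap sanity check available before the real grid measurements come in.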

