Status Report for 11/9
For the camera pipeline on Jimmy’s end, a lot of progress was made this week on a proof-of-concept implementation of the Kalman filter, which can now project estimates of where the ball is flying (a demo can be seen in the individual report). A few issues did come up in the process: accuracy, some oscillation behaviour in the estimates, and the fact that the filter still needs to be extended to full X, Y, Z coordinates rather than the X, Z implementation it uses right now. Overall, though, this is strong evidence that a Kalman filter will give us a better estimate than the physics model alone. One pivot that was made was scrapping the YOLO model, as it was too resource intensive and would lower the FPS of the camera video output, in favour of a filter / colour tracking approach (discussed more in the individual report). There are some drawbacks to this approach: we will need to fine-tune the HSV colour ranges used for detection depending on the lighting conditions and the camera’s colour response. Additionally, since we have pivoted to the Raspberry Pi, Jimmy and Gordon will spend some time in the following week making sure that the bring-up of the Raspberry Pi goes as smoothly as possible and that the camera pipeline, object detection, and Kalman filter are all functional on the RPi.
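To make the X, Z to X, Y, Z extension concrete, here is a minimal sketch of how the colour-tracking measurement could feed a 3-D constant-velocity Kalman filter with a gravity term. The HSV range, frame rate, axis convention (gravity along -Y), and noise matrices are placeholder assumptions rather than the values in our actual pipeline, and the step of converting the pixel centroid plus OAK-D depth into metric X, Y, Z is omitted here.

    import cv2
    import numpy as np

    # Placeholder HSV range for an orange ball; must be re-tuned for the actual
    # lighting conditions and camera colour response.
    LOWER_HSV = np.array([5, 120, 120])
    UPPER_HSV = np.array([20, 255, 255])

    def detect_ball_pixel(frame_bgr):
        """Return the (u, v) pixel centroid of the largest in-range blob, or None."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] == 0:
            return None
        return m["m10"] / m["m00"], m["m01"] / m["m00"]

    class BallKalman3D:
        """Constant-velocity Kalman filter over [x, y, z, vx, vy, vz] with gravity."""

        def __init__(self, dt=1.0 / 30.0, g=9.81):
            self.dt, self.g = dt, g
            self.F = np.eye(6)
            self.F[0, 3] = self.F[1, 4] = self.F[2, 5] = dt    # position += velocity * dt
            self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we only measure position
            self.Q = np.eye(6) * 1e-3                          # process noise (to tune)
            self.R = np.eye(3) * 5e-3                          # measurement noise (to tune)
            self.x = np.zeros(6)                               # state estimate
            self.P = np.eye(6)                                 # state covariance

        def predict(self):
            self.x = self.F @ self.x
            self.x[1] -= 0.5 * self.g * self.dt ** 2           # gravity on Y position
            self.x[4] -= self.g * self.dt                      # gravity on Y velocity
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:3]                                  # predicted ball position

        def update(self, z_xyz):
            z = np.asarray(z_xyz, dtype=float)
            y = z - self.H @ self.x                            # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(6) - K @ self.H) @ self.P

Treating gravity as a known input rather than adding acceleration to the state keeps the filter small, and the Q and R matrices are likely the first knobs to tune against the oscillation we are seeing in the estimates.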
After consulting with Tamal and Nathan and receiving their approval to switch from the KRIA to the Raspberry Pi (RPI), we were able to place an order for an RPI5 and officially move all of the processing onto the RPI. We also repartitioned our roles, since without the KRIA Gordon has less hardware integration work. Gordon will still be handling the connections from the OAK-D to the RPI and from the RPI to the robot, but now has a little more liberty to help Jimmy or Josiah with other tasks.
For Gordon’s RPI setup and other hardware integration, we received the RPI and were able to start the setup immediately. Gordon’s personal status report goes into a lot more detail, but essentially we were able to set up the RPI successfully after getting through a few unexpected bumps, and can now SSH into the Pi directly from anyone’s laptop. Here are the steps to SSH:
- Connect the Pi to CMU-DEVICE
- Connect the laptop to CMU-SECURE
- From a terminal on the laptop, SSH in with the Pi’s username and enter the password (example below)
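For reference, the command takes the usual form below; the username and the Pi’s hostname/IP are placeholders and the real values are not reproduced here.

    ssh <pi-username>@<pi-hostname-or-ip>
    # enter the Pi's password when prompted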
With Jimmy’s help, we also used Conda to set up the DepthAI library and the other dependencies needed to run the code on the Pi. For next steps, Gordon will do some research into how the code runs and look into how we can transfer data from the camera output into the robot system; specifically, finding a way to get rid of the Arduino that Josiah is currently using to test the robot, since cutting out that middleman would save on latency.
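As a quick smoke test that the DepthAI install and the OAK-D work on the Pi, something along the lines of the standard DepthAI preview pipeline should be enough; the preview size, stream name, and queue settings below are placeholder choices, not values from our actual code.

    import depthai as dai

    # Minimal pipeline: colour camera preview streamed out over XLink.
    pipeline = dai.Pipeline()
    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(640, 400)      # placeholder preview size
    cam.setInterleaved(False)
    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("rgb")         # placeholder stream name
    cam.preview.link(xout.input)

    # Connect to the OAK-D and pull a single frame to confirm the setup works.
    with dai.Device(pipeline) as device:
        q = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
        frame = q.get().getCvFrame()  # BGR numpy frame; needs opencv-python installed
        print("Got frame:", frame.shape)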
On Josiah’s end, the construction of the robot is practically complete. Controlling the stepper motors is possible with GRBL firmware loaded on the Arduino and Universal G-Code Sender hosted on a laptop. It’s promising to see it all put together at last, despite some frustrating circuit-level debugging. It turns out that the motor drivers mounted on the CNC shield are highly sensitive to electrical interference, and just a light touch can grind proper motor rotation to a halt. Additionally, a second cup mount was designed and printed to accommodate a belt bearing that stuck out and made the previous cup mount incompatible with the system. Future work is further familiarizing ourselves with G-code and figuring out the “inverse kinematics.” I place this term in quotes because the task is far simpler than in more sophisticated robotics applications.
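Once the kinematics are worked out, issuing a move programmatically is just a matter of streaming G-code to GRBL over serial, which is the same thing Universal G-Code Sender does interactively. The sketch below is illustrative only: the serial port, feed rate, baud rate (GRBL’s common default of 115200), and the assumption of a simple Cartesian-style mapping from target cup position to machine coordinates are all placeholders, not decisions we have made.

    import time
    import serial  # pyserial

    def cup_target_to_gcode(x_mm, y_mm, feed_mm_min=3000):
        # For a plain Cartesian gantry the "inverse kinematics" is close to a
        # pass-through; a belt/CoreXY-style layout would need a linear remap here.
        return f"G1 X{x_mm:.2f} Y{y_mm:.2f} F{feed_mm_min}\n"

    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as ser:
        ser.write(b"\r\n\r\n")       # wake up GRBL
        time.sleep(2)
        ser.reset_input_buffer()
        ser.write(cup_target_to_gcode(120.0, 80.0).encode())
        print(ser.readline().decode().strip())  # expect "ok" from GRBL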