Team Status Report 12/07

This week (and the last) was all hands on deck for integration. We focused on mapping out the system and setting up the coordinate grid, implemented the translation from camera coordinates to real-world distances, and moved our system into the 1200 wing for testing. Our backdrop arrived, and we set it up in the back of 1207. Because the system needs to capture the full arc of the throw, we found a camera configuration that keeps the robot, the backdrop, and the entire arc in frame.
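
As a rough sketch of the camera-to-real-world translation, the idea is to map the backdrop's pixel extent onto its known physical dimensions. All the constants below are illustrative, not our actual calibration values:

```python
import numpy as np

# Illustrative constants -- the real values come from measuring the backdrop
# and where its corners land in the camera frame.
BACKDROP_WIDTH_CM = 200.0    # physical width of the backdrop
BACKDROP_HEIGHT_CM = 150.0   # physical height of the backdrop
PX_TOP_LEFT = np.array([104.0, 62.0])        # backdrop corners in pixels
PX_BOTTOM_RIGHT = np.array([1180.0, 870.0])

def pixel_to_world(px: np.ndarray) -> np.ndarray:
    """Map a pixel coordinate to centimeters on the backdrop plane."""
    span_px = PX_BOTTOM_RIGHT - PX_TOP_LEFT
    frac = (px - PX_TOP_LEFT) / span_px          # 0..1 across the backdrop
    x_cm = frac[0] * BACKDROP_WIDTH_CM
    y_cm = (1.0 - frac[1]) * BACKDROP_HEIGHT_CM  # flip: image y grows downward
    return np.array([x_cm, y_cm])
```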

Once setup was complete, the three of us worked on the complete pipeline for the 2D version. The third dimension was supposed to come from the camera's depth output, which we had to put on hold (more on that below and in Gordon's individual status report). We created a single file in which all the components (camera detection and Kalman interfacing, the connection to the Pi, and translation into G-code for the Arduino) run together, but testing revealed that the Raspberry Pi was not powerful enough to handle all of it at once. We ended up running it on Jimmy's laptop instead, and the full pipeline performed very well; we have multiple recordings of us throwing the ball and the system catching it.
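
For a sense of the shape of that file, here is a minimal sketch of the main loop; `detect_ball`, `KalmanTracker`, and the serial port settings are stand-ins for our actual modules and wiring:

```python
import cv2
import serial

# Hypothetical helpers standing in for our actual detection/Kalman modules.
from tracking import detect_ball, KalmanTracker   # names are illustrative

arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=0.01)
tracker = KalmanTracker()
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ball_px = detect_ball(frame)        # (x, y) pixel centroid, or None
    if ball_px is None:
        continue
    tracker.update(ball_px)
    # Predicted point where the arc crosses the catch height, in robot coords.
    landing_x_cm = tracker.predict_landing()
    # Translate the prediction into a G-code rapid move for the Arduino.
    arduino.write(f"G0 X{landing_x_cm:.1f}\n".encode())
```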

For next steps, we are integrating a second camera to provide the missing dimension. We decided to give up on the depth capability of the depth camera and move to a second-camera setup only after meticulously attempting to debug it and work around its limitations. Even using the laptop instead of the Pi, there simply was not enough processing power to get enough frames to reliably capture the ball's depth coordinate. Specifically, we could track the ball's location, move the Region of Interest (ROI) to those coordinates, and request the depth of the ROI, but in the short time it took for that request to be fulfilled, the ball had already moved out of the region. We tried all sorts of methods to move or enlarge the ROI, but everything led to a buggy implementation where the depth coordinate simply could not be reliably generated. We also tried getting rid of the ROI entirely and querying the depth at a specific point, but even that was unsuccessful. We could get the ball's depth coordinate when it was moving at slower speeds, but for it to matter, the camera needed to capture the depth of an in-flight ball, which it couldn't do.
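
To illustrate the ROI-free variant we tried, here is a sketch that samples depth around the tracked centroid, assuming the depth stream arrives as a NumPy array in millimeters (the actual frame access depends on the camera SDK):

```python
import numpy as np

def depth_at_ball(depth_frame: np.ndarray, ball_px: tuple[int, int],
                  half: int = 8) -> float | None:
    """Median depth (mm) in a small window around the tracked ball pixel.

    Instead of steering the camera's hardware ROI, this samples the full
    depth image around the detected centroid.
    """
    x, y = ball_px
    h, w = depth_frame.shape
    window = depth_frame[max(0, y - half):min(h, y + half),
                         max(0, x - half):min(w, x + half)]
    valid = window[window > 0]      # zeros are invalid/no-return pixels
    if valid.size == 0:
        return None                 # the ball has already left the window
    return float(np.median(valid))
```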

We have tested a second camera positioned facing the throw, and we have good reason to believe we can adapt the code from the first camera to integrate it. The only force acting along the x and z axes of the throw is air resistance, so the detection and Kalman models we already have for the x axis should convert easily to the z axis. Jimmy wrote the code, and preliminary testing showed it working well (more details in Jimmy's individual report). We are nearly at the end: once the second camera is integrated, we will have the missing dimension and we will be done.
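
As a sketch of why that conversion is cheap: gravity only acts on y, so x and z can share one constant-velocity filter class, with air resistance absorbed into the process noise. All parameter values below are illustrative:

```python
import numpy as np

class AxisKF:
    """1D constant-velocity Kalman filter, reusable for the x or z axis."""

    def __init__(self, dt: float, q: float = 1e-2, r: float = 4.0):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.H = np.array([[1.0, 0.0]])              # we only measure position
        self.Q = q * np.eye(2)                       # process noise (drag, etc.)
        self.R = np.array([[r]])                     # measurement noise
        self.x = np.zeros((2, 1))
        self.P = np.eye(2) * 100.0

    def step(self, z: float) -> float:
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update
        y = np.array([[z]]) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return float(self.x[0, 0])

# One filter per horizontal axis -- identical dynamics, different measurements.
kf_x = AxisKF(dt=1 / 60)
kf_z = AxisKF(dt=1 / 60)
```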


List all unit tests and overall system test carried out for experimentation of the system. List any findings and design changes made from your analysis of test results and other data obtained from the experimentation.

For the camera, we first tested whether the detection and Kalman filters would work on a 2D plane, using a recorded video. We then tested how well those same functions ran on a live camera feed. After we got 2D working, we ran the same tests in 3D to see whether the FPS and functionality were still adequate. In 2D testing, the Pi showed promise and detected the ball fairly well, but once we switched to 3D, its results were not good enough. This led us to abandon the Pi for a laptop, as explained above and in Gordon's individual status report. Further 3D testing showed that the single depth camera also did not meet our standards, which led to the design change to two cameras, as described earlier in this status report.
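
A minimal sketch of the kind of FPS harness used for both the recorded-video and live-feed tests; `run_detection_and_kalman` is a hypothetical stand-in for our per-frame pipeline:

```python
import time
import cv2

def measure_fps(source, process, n_frames: int = 300) -> float:
    """Average end-to-end FPS of `process` (detection + Kalman) on a source.

    `source` can be a video file path (recorded-video test) or a camera
    index (live-feed test); `process` is the per-frame pipeline function.
    """
    cap = cv2.VideoCapture(source)
    start, done = time.perf_counter(), 0
    while done < n_frames:
        ok, frame = cap.read()
        if not ok:
            break
        process(frame)
        done += 1
    cap.release()
    return done / (time.perf_counter() - start)

# e.g. measure_fps("throw_recording.mp4", run_detection_and_kalman)
#      measure_fps(0, run_detection_and_kalman)   # live camera feed
```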

For the robot, we tested its movement. We verified that it can travel across our full coordinate range, and we timed how quickly it can move from one end to the other. We also tested whether the robot could receive multiple movement commands in rapid succession. The robot passed all of these tests to our standards.
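
A sketch of the timing test, assuming the Arduino firmware acknowledges each G-code line; the port, baud rate, coordinates, and "ok" reply are all illustrative:

```python
import time
import serial

# Illustrative port/baud -- match whatever the Arduino firmware expects.
arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)
time.sleep(2)                      # let the Arduino reset after the port opens

def timed_move(x_cm: float) -> float:
    """Send one rapid move and return seconds until the firmware acks."""
    start = time.perf_counter()
    arduino.write(f"G0 X{x_cm:.1f}\n".encode())
    arduino.readline()             # assumes the firmware replies (e.g. "ok")
    return time.perf_counter() - start

# End-to-end sweep timing, then rapid-succession moves across the range.
print("full sweep:", timed_move(0.0), timed_move(100.0))
for x in (0, 100, 0, 100, 0):
    print(f"move to {x}: {timed_move(x):.3f}s")
```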
