Team Status Report 4/3/2021

This week, we made a lot of progress on our independent work streams. The housing prototype has been assembled and is ready for testing. JP is currently testing the cup detection algorithm with the camera positioned as it will be for our demo. The housing has support on the rear panel for the touch screen UI that Juan is implementing, and space to house the internal launching mechanism that Logan is building. We believe it is sufficient for our demo, but there will likely be small changes as we continue to test.

We are nearing the point where we can test the integration of our independent work streams. For our demo, we plan to focus on integrating the cup detection and the launcher, since those are the most difficult parts. By the end of this week we should have the following ready to demo (a rough sketch of the launcher command interface follows the list):

  • Detecting cup rings
  • Filtering out erroneous ellipses
  • Generating 3D point locations of each detected cup
  • Mapping each detected cup to the calibration map (linking cup position to cup number 1, 2, 3, etc.)
  • Rotating the launcher to aim at a specific cup
  • Triggering a ball launch at a specific cup
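
Since the launcher is driven by the Arduino, the Jetson will need to send it aim and fire commands. Below is a minimal sketch of what that could look like on the Jetson side, assuming a hypothetical newline-terminated ASCII protocol ("AIM <pan> <tilt>" and "FIRE") over a 115200-baud serial link; none of this is our finalized interface.

```cpp
// Hypothetical Jetson-side serial helper for commanding the launcher.
// The "AIM"/"FIRE" command format and baud rate are assumptions for
// illustration, not our finalized protocol.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

int openLauncherPort(const char* device) {
    int fd = open(device, O_RDWR | O_NOCTTY);   // e.g. "/dev/ttyUSB0"
    if (fd < 0) return -1;
    termios tty{};
    tcgetattr(fd, &tty);
    cfsetispeed(&tty, B115200);                 // assumed baud rate
    cfsetospeed(&tty, B115200);
    tty.c_cflag |= (CLOCAL | CREAD);            // local line, enable receiver
    tty.c_cflag &= ~PARENB;                     // 8N1 framing
    tty.c_cflag &= ~CSTOPB;
    tty.c_cflag &= ~CSIZE;
    tty.c_cflag |= CS8;
    tcsetattr(fd, TCSANOW, &tty);
    return fd;
}

// Rotate the launcher toward a cup, then trigger a launch.
void aimAndFire(int fd, double panDeg, double tiltDeg) {
    char cmd[64];
    int n = std::snprintf(cmd, sizeof(cmd), "AIM %.2f %.2f\n", panDeg, tiltDeg);
    write(fd, cmd, n);
    write(fd, "FIRE\n", 5);
}
```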

JP Weekly Status Report 4/3/2021

This week I finalized and rigorously tested the cup detection algorithm. With the camera about 3 feet high and angled down about 25 degrees, it can accurately detect the 10 cups in a pyramid formation in under 1 second. It then uses the data from the depth map to project the 2D image into real-world 3D coordinates. I placed the cups in predefined positions so I could test the accuracy of the coordinates generated; they were accurate to within +/- a few millimeters. This was a major milestone for this part of the project. I then cleaned up the repository to be fit for repetitive testing and validation of cups in various formations.
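
For context, the detection stage has roughly the shape sketched below. This is a minimal sketch assuming OpenCV contour fitting; the blur, Canny, and ellipse-filter thresholds are illustrative placeholders, not the tuned values in our repository.

```cpp
// Minimal sketch of the cup-ring detection stage (thresholds illustrative).
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

std::vector<cv::RotatedRect> detectCupRings(const cv::Mat& frame) {
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);
    cv::Canny(gray, edges, 50, 150);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::RotatedRect> rings;
    for (const auto& contour : contours) {
        if (contour.size() < 20) continue;          // too few points to be a rim
        cv::RotatedRect e = cv::fitEllipse(contour);
        float minor = std::min(e.size.width, e.size.height);
        float major = std::max(e.size.width, e.size.height);
        // Seen from ~25 degrees above, a rim projects to a clearly flattened
        // ellipse; filter out near-circles and degenerate slivers.
        if (major > 30.f && minor / major > 0.1f && minor / major < 0.9f)
            rings.push_back(e);
    }
    return rings;
}
```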

Here is a picture of one of my early testing setups:

I also started to put together the first prototype of our housing with the camera on it. Here is a picture of the housing so far:

Creating the calibration map to link the 3D coordinates to specific cups (as seen in last week's status report) took a little longer than expected. I prioritized getting the cup detection algorithm to our MVP to make testing and demoing smooth and robust. That work will also make the calibration much easier to test as I focus on that section of the project this coming week.

JP Status Report 3/27/2021

I spent the first half of the week converting our build system from MSBuild to CMake. I had been testing our application on my laptop until the Jetson arrived, so MSBuild from Visual Studio sufficed at the time. CMake allows us to cross-compile for Windows x64 (laptop) and Linux ARM64 (Jetson Nano). This took a lot longer than I expected due to dependency issues, with OpenCV specifically. Now I can test the camera and ellipse detection on the Jetson, and the application ran smoothly when I ran our testbench on it.

The second half of the week I spent enhancing the coordinate transformations so that I can send that data to the Arduino. Using the camera intrinsics and extrinsics, as well as the depth map data, I can map 2D image coordinates to 3D position coordinates so the launcher knows where to aim relative to the camera's position. Below is a picture describing image projection from a 2D image to a 3D space.

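In code terms, the projection in that picture inverts the pinhole model: a pixel (u, v) with depth Z maps to X = (u - cx)·Z/fx and Y = (v - cy)·Z/fy. Here is a minimal sketch, assuming the intrinsics (fx, fy, cx, cy) come from the camera and the depth map is aligned to the color image; variable names are illustrative.

```cpp
// Minimal sketch of back-projecting a pixel plus depth into 3D camera
// coordinates with the pinhole model. Intrinsic names are illustrative.
#include <opencv2/core.hpp>

cv::Point3f deproject(const cv::Point2f& pixel, float depthMeters,
                      float fx, float fy, float cx, float cy) {
    // Invert u = fx * X/Z + cx and v = fy * Y/Z + cy.
    float X = (pixel.x - cx) * depthMeters / fx;
    float Y = (pixel.y - cy) * depthMeters / fy;
    return {X, Y, depthMeters};
}

// The extrinsics then move the point from camera space into launcher space,
// e.g. pLauncher = R * pCamera + t, so the launcher knows where to aim.
```
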
I also started to design the calibration technique that helps the UI pinpoint which cups in a given formation have been identified. We decided to use a map with dots along the four edges. These dots connect to form horizontal and vertical lines that intersect, and the intersections pinpoint predefined cup locations for specific formations. This is how our application will identify specific cups to select in the UI. Below is an example of our calibration map.

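Once the map defines where each cup should sit, linking a detected cup to a cup number can be as simple as nearest-neighbor matching against those predefined locations. A minimal sketch, assuming the calibrated slot positions are already in camera coordinates and using a placeholder distance tolerance:

```cpp
// Minimal sketch of linking a detected 3D cup position to its predefined
// calibration-map slot. The tolerance value is a placeholder.
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Returns the 1-based cup number, or -1 if nothing is within tolerance.
int matchCup(const cv::Point3f& detected,
             const std::vector<cv::Point3f>& calibratedSlots,
             float toleranceMeters = 0.05f) {
    int best = -1;
    float bestDist = toleranceMeters;
    for (std::size_t i = 0; i < calibratedSlots.size(); ++i) {
        float dx = detected.x - calibratedSlots[i].x;
        float dy = detected.y - calibratedSlots[i].y;
        float dz = detected.z - calibratedSlots[i].z;
        float d = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (d < bestDist) {
            bestDist = d;
            best = static_cast<int>(i) + 1;   // cup numbers are 1-based
        }
    }
    return best;
}
```
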
This coming week I plan on working almost exclusively on testing our detection algorithm with the Jetson Nano. The housing should be dry enough to mount the camera on, so I can do some real-time testing on the table we will be using. Previously I had been using a tripod to mount the camera, but now I will get a more accurate representation of the exact camera angle we will be taking images from. By the end of the coming week our application should be able to do the following (a rough sketch of this flow appears after the list):

  1. take images
  2. detect cups
  3. get 3D cup coordinates
  4. link detected cups to predefined cup targets for selection
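
For a sense of how these four steps chain together, here is a rough sketch reusing the hypothetical helpers from the sketches above (declared here so the flow reads top to bottom); the intrinsics, depth lookup, and calibration slots are placeholders, not our real values.

```cpp
// Rough sketch of the weekly goal as a single loop; all helpers and
// constants are the hypothetical ones sketched earlier in these reports.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::RotatedRect> detectCupRings(const cv::Mat& frame);
cv::Point3f deproject(const cv::Point2f& pixel, float depthMeters,
                      float fx, float fy, float cx, float cy);
int matchCup(const cv::Point3f& detected,
             const std::vector<cv::Point3f>& calibratedSlots,
             float toleranceMeters);

int main() {
    cv::VideoCapture cam(0);                       // 1. take images
    std::vector<cv::Point3f> slots;                // predefined targets from the calibration map
    const float fx = 615.f, fy = 615.f, cx = 320.f, cy = 240.f;  // placeholder intrinsics

    cv::Mat frame;
    while (cam.read(frame)) {
        for (const auto& ring : detectCupRings(frame)) {    // 2. detect cups
            float depth = 0.9f;                             // placeholder: read from aligned depth map
            cv::Point3f p = deproject(ring.center, depth, fx, fy, cx, cy);  // 3. 3D coordinates
            int cupId = matchCup(p, slots, 0.05f);          // 4. link to a predefined cup target
            if (cupId > 0) { /* surface this cup to the UI for selection */ }
        }
    }
    return 0;
}
```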