Team Status Report 4/19/2025

This week, our team made steady progress in preparing our systems for the final demonstration. Luke focused on enhancing the camera system by disabling the auto-focus and auto-blur features that were causing instability during fast motion capture. He made progress on motion detection capabilities through frame-difference analysis, though this remains a post-MVP feature. After team discussion, we collectively decided to deprioritize the Raspberry Pi web server implementation to concentrate on core deliverables.

Samuel achieved a major improvement in our shot simulation algorithm by implementing scipy.spatial.cKDTree for collision detection. This change sped up the simulation by roughly 600% by reducing the complexity of nearest-ball lookups from O(n) to O(log n). Alongside these technical improvements, Samuel dedicated time to finalizing our presentation materials and preparing demonstration assets.

Kevin improved our ball categorization system by transitioning from square HSV sampling regions to circular ones, significantly increasing accuracy. His work on differentiating between similarly colored balls using HSV gradient analysis has brought our categorization accuracy to roughly 95% in testing. Kevin continues to refine the system to handle the remaining edge cases, particularly in distinguishing between the 1-ball and 9-ball.

Looking ahead, Luke will finalize camera calibration for the demo environment, Samuel will polish the simulation integration and demo flow, and Kevin will complete the remaining refinements to ball identification. The team remains on schedule for our final demonstration, with all core systems operational and only minor tuning remaining. While motion detection has been deferred to post-MVP development, our primary systems are performing at target levels, with ball categorization currently achieving 95% accuracy and expected to improve further with final adjustments.

Kevin Kyi Status Report 4/19/2025

This week I kept working on ball categorization and tested different techniques for segmenting the ball from the rest of the table. The previous implementation used square sampling regions, which caused issues when calculating the mean HSV values used for categorization. By switching to a circular region around the ball, coupled with the HSV gradient visualizer implemented last week, the categorization accuracy is now almost perfect. I am still refining the categorization between the 1-ball and 9-ball but am making steady progress using the gradient visualizer. I am on track to finish for the final presentation and have been communicating my progress to Luke and Samuel.
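To illustrate the square-versus-circle difference, here is a minimal NumPy-only sketch (not our pipeline code; the function names and synthetic image are made up for the example): a square crop around a ball pulls in felt pixels at its corners and skews the mean HSV, while a circular mask samples only the ball.

```python
import numpy as np

def mean_hsv_circular(hsv_img, center, radius):
    """Mean HSV inside a circular region around the ball."""
    h, w = hsv_img.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    cx, cy = center
    mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    return hsv_img[mask].mean(axis=0)

def mean_hsv_square(hsv_img, center, radius):
    """Old approach: a square crop picks up table felt at the corners."""
    cx, cy = center
    patch = hsv_img[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    return patch.reshape(-1, 3).mean(axis=0)

# Synthetic check: green felt with a blue "ball" of radius 10 at (25, 25).
img = np.full((50, 50, 3), [60.0, 200.0, 120.0])
ys, xs = np.ogrid[:50, :50]
img[(xs - 25) ** 2 + (ys - 25) ** 2 <= 100] = [120.0, 255.0, 200.0]
circular_mean = mean_hsv_circular(img, (25, 25), 10)
square_mean = mean_hsv_square(img, (25, 25), 10)
```

On this synthetic image the circular mean recovers the ball color exactly, while the square mean is dragged toward the felt hue by the corner pixels.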


Luke Han Status Report 4/19/2025

This week, I focused on improving the camera system to support motion detection capabilities. I was able to disable the camera’s auto-blur and auto-focus features. These automatic settings had been interfering with consistent image quality during fast motion, but after testing different configurations, I was able to achieve a stable image feed that’s much better suited for detecting movement.

Additionally, after further discussion with my team, we’ve decided not to pursue the use of a Raspberry Pi to host a web server for tracking previously played games.

Finally, while I’ve begun experimenting with motion detection by analyzing frame-to-frame changes between game states, it’s still uncertain whether I’ll be able to fully implement this feature. However, since motion detection is a post-MVP goal, this does not affect our core deliverables. I’ve completed all of my current tasks and remain on track with our overall project timeline.
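For reference, the frame-difference idea can be sketched in a few lines of NumPy (the thresholds, frame sizes, and function name here are illustrative placeholders, not tuned values from our system):

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, pixel_thresh=25, count_thresh=50):
    """Flag motion when enough pixels change between consecutive grayscale frames.

    pixel_thresh: per-pixel intensity change that counts as movement.
    count_thresh: number of changed pixels needed to report motion.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return int((diff > pixel_thresh).sum()) >= count_thresh

# Synthetic frames: a static table, then a bright patch appears where a ball moved.
still = np.zeros((120, 160), dtype=np.uint8)
moved = still.copy()
moved[40:50, 60:70] = 255
print(motion_detected(still, still))   # False: nothing changed
print(motion_detected(still, moved))   # True: 100 pixels changed
```

A real version would also debounce over several frames so the pipeline only fires once the table has settled.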

Samuel Telanoff Status Report 4/19/25

This week, I optimized the simulated shot algorithm and worked on our final presentation slides. For the simulated shot algorithm, I added more thorough vectorization for pocket detection and wall collisions. I also optimized ball collision checks by using scipy.spatial.cKDTree, an optimized replacement for the spatial hashing class I implemented earlier. Now, instead of using vectorized NumPy checks with spatial hashing, which ran in O(n), we use the cKDTree (a k-d tree over ball positions on the board) to find the nearest balls in O(log n). Additionally, SciPy implements cKDTree in C under the hood, which further reduces the runtime. With these fixes, I was able to improve the shot simulation runtime by almost 600%.
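As a rough sketch of the cKDTree approach (this is not the actual simulation code; the ball radius, units, and coordinates are placeholder values):

```python
import numpy as np
from scipy.spatial import cKDTree

BALL_RADIUS = 1.0  # placeholder units; two balls collide within one diameter

def find_collisions(positions):
    """Return index pairs of balls closer than one ball diameter.

    Building the tree is O(n log n) once per frame; each neighbor query is
    O(log n), versus scanning every other ball in O(n).
    """
    tree = cKDTree(positions)
    return tree.query_pairs(r=2 * BALL_RADIUS)

positions = np.array([
    [0.0, 0.0],    # ball 0
    [1.5, 0.0],    # ball 1: 1.5 units from ball 0 -> colliding
    [10.0, 10.0],  # ball 2: far from everything
])
print(find_collisions(positions))  # {(0, 1)}
```

`query_pairs` returns every pair within the radius in one call, which is what lets the per-step collision check drop out of the Python-level loop.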

(Screenshots: simulation runtime before optimizations vs. after optimizations)

I will be spending this next week working on the final presentation, final report, and getting everything ready for the final demo. I am still on schedule.

Team Status Report 4/12/2025

This week, our group’s primary focuses were on the interim demo and continuing to develop/optimize our project as we get closer to the end of the semester.

Overall, the interim demo went well: we were able to successfully take a picture of the current state of the table, detect the pocket locations, detect and classify ball locations, run our best shot algorithm, and then display the best shot onto the board for the user to see. Below is a photo of what that display looks like, and here is an example of what that looks like in action.

A few things that we need to work on after the interim demo:

  1. During the demo, we manually sent data between subsystems; i.e., Luke would take the picture of the board and send it to Kevin, who would run the CV algorithm on it and then send that data to Sammy to run best_shot and project the shot. We are currently working on a main file that runs all of this on one computer for the final demo.
  2. Ball categorization is good but still needs some work. Kevin is currently refining the color thresholding for ball categorization, as similarly colored balls (e.g., the 2-ball and 4-ball) are still being categorized as the same ball.
  3. Simulated annealing is still random, so the best shot algorithm sometimes returns a local minimum instead of the global minimum. Sammy is working on optimizing the shot simulation so that best_shot’s simulated annealing can run with more iterations.
  4. The camera is set to auto-focus, so we would sometimes need to refocus the camera before starting the whole process. Luke is looking into testing different configurations to achieve a more stable image feed that is always in focus.
  5. Projector/Camera placement is good, but can be better. Luke is looking into the optimal placements for the projector and camera so that they most closely align with the real-world pool table.
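For context on item 3, a toy version of simulated annealing shows why iteration count matters (this is an illustrative sketch, not our best_shot code; the cost function, cooling schedule, and parameters are made up for the example):

```python
import math
import random

def anneal(cost, x0, iters, step=0.5, t0=1.0, seed=0):
    """Minimal simulated annealing on a 1-D cost function.

    With few iterations the walk can freeze in a local minimum, which is the
    behavior described above; more iterations make escaping it likelier.
    """
    rng = random.Random(seed)
    x, best = x0, x0
    for i in range(1, iters + 1):
        t = t0 / i                          # simple cooling schedule
        cand = x + rng.uniform(-step, step)  # random neighboring guess
        delta = cost(cand) - cost(x)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / t), which shrinks as the temperature cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if cost(x) < cost(best):
            best = x
    return best

def cost(x):
    """Double-well: local minimum near x = 1, global minimum near x = -1."""
    return (x * x - 1) ** 2 + 0.3 * x
```

Starting at the local minimum (x = 1), the only guarantee is that the tracked best never gets worse, and with the same seed a longer run can only match or beat a shorter one; that is exactly why giving the real algorithm more iterations helps it reach the global minimum more often.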

Alongside this development and optimization work, we are pursuing a couple of post-MVP additions to our project. Luke is looking into creating a file that monitors ball movement through the camera so that the whole process starts automatically when the balls stop moving, instead of requiring the user to click a button. Additionally, the team is discussing whether or not to use a Raspberry Pi to host a web server with a database of a user’s previous games so that they can see how much they’ve improved.

Overall, there is no change to our schedule — we are essentially at MVP. Everyone is on track for their tasks, and we believe we will have a good working final project by the time final demos happen.

Kevin Kyi Status Report 4/12/25

This week I mainly worked on refining the color thresholding for ball categorization and started implementing a new solution for edge/rail detection that could serve as an alternative for pocket detection. Currently, solid balls with distinct colors are detected accurately, but similarly colored balls (blue 2-ball, purple 4-ball) are still hard to differentiate. To debug this problem I created a gradient function that displays the different HSV thresholding gradients as a visual reference. I also started implementing an edge detector as an alternative to our Hough circles pocket detector, but it still needs a lot more refinement. I am on track to have a finalized pipeline soon and only have categorization and pocket detection left to refine.

Samuel Telanoff Status Report 4/12/25

This week, I worked on optimizing the physics simulation to improve a simulated shot’s runtime. During the interim demos last week, we noticed that the best shot algorithm would sometimes give a local minimum instead of a global minimum. I touched on this a bit last status report: it happens because simulated annealing is inherently random, and the fewer iterations it runs, the noisier its output. To fix this, I tried implementing the physics simulation in C++ and using pybind11 to call the C++ simulation from the best shot algorithm file (which is in Python). Unfortunately, the latency did not improve at all; the runtime was actually much slower because of the overhead of converting data between Python and C++ types.

I then tested three different variations to see which had the best runtime: 1) purely Python, 2) Python best shot & C++ shot simulation, and 3) purely C++. I figured that since C++ is an inherently quicker language than Python, option 3 would have the best runtime. However, through testing, I found that option 1 (the purely Python implementation) still had the best runtime. I believe this is because NumPy vectorization is incredibly optimized, while its closest C++ analogue (xtensor) is not. Below are the average runtimes of the shot algorithm for the three implementations on 10,000 random board positions (note that my computer was not plugged in, so runtime is slightly slower than the 25ms we were getting earlier with option 1).

(Table: average runtimes for the purely Python, Python & C++, and purely C++ implementations)
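The reason a vectorized Python implementation can beat a hand-rolled C++ port is that NumPy pushes the inner loops into compiled C. A toy comparison (not the project’s simulation code; the pairwise-distance task and point count are made up for illustration):

```python
import numpy as np

def pairwise_dists_loop(pts):
    """Pure-Python double loop: every iteration pays interpreter overhead."""
    n = len(pts)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            out[i][j] = (dx * dx + dy * dy) ** 0.5
    return np.array(out)

def pairwise_dists_vectorized(pts):
    """Same computation pushed into NumPy's C loops via broadcasting."""
    pts = np.asarray(pts)
    diff = pts[:, None, :] - pts[None, :, :]   # (n, n, 2) difference grid
    return np.sqrt((diff ** 2).sum(axis=-1))

pts = np.random.default_rng(0).random((50, 2))
```

Both functions produce identical results, but timing them with `timeit` on a few hundred points typically shows the vectorized version winning by orders of magnitude, which matches the finding that the NumPy implementation outran the xtensor port.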

This next week I will look into different ways to optimize the runtime of the purely Python implementation. I am still on schedule and everything I am working on as of now is pretty much post-MVP.

Luke Han Status Report 4/12/2025

This week, I focused on exploring improvements to the camera system to support motion detection capabilities. A major area of experimentation involved disabling the camera’s auto-blur and auto-focus features. These automatic settings were interfering with consistent image quality, especially during fast motion, so I’ve been testing different configurations to achieve a more stable image feed that’s better suited for detecting movement.

Alongside this, I’ve started preliminary work on motion detection by analyzing frame-to-frame changes between game states. My goal is to develop a reliable method for identifying when and where motion occurs on the table, so that the algorithm can run the physics simulations without user input.

I have also been discussing with my team whether to pursue the use of a Raspberry Pi to host our own web server to keep track of previously played games.

I have been experimenting and have made minor progress on these tasks; however, I remain on track with my overall schedule.