Samuel Telanoff Status Report 2/15/25

This week I did more research into the physics simulation portion of our project. I found some GitHub codebases and research papers/videos that should be beneficial to our algorithm. I also went to the UC basement to play pool with Luke so we could benchmark how many shots it takes us to finish a game of 9-ball. We played three games and averaged around 40-50 shots per game. We will use this benchmark as a baseline and hopefully see a decrease in shots taken when we play with DeepCue's assistance.

Additionally, I took some time to make a block diagram of our project for our design presentation. We've made a significant change to our project: we have decided to remove the NVIDIA Jetson and instead connect the camera/LIDAR directly to a computer. I updated our block diagram and Gantt chart to reflect these changes. I am still on schedule and plan on working on the physics simulation this week. I will need to coordinate with Luke, as he will now be helping with the physics simulation since we aren't using the Jetson anymore. Additionally, we will conduct more benchmark testing with the smaller pool table we just ordered.

Team Status Report 2/8/25

The majority of our time as a group this week was spent finalizing and presenting our proposal presentation. Additionally, we have spent some time since presenting to process the feedback we were given. There aren't any major changes to the existing diagram of our system; however, there are some things we are now considering. Luke is looking into tradeoffs between the hardware devices we could use, and whether we use one of them or just connect a camera/motion detector directly to a computer. We are also considering changing our MVP to the game of 9-ball based on Professor Brumley's feedback. That would make the MVP easier to manage, as we would always know which ball has to be hit next on any turn.

We have also settled on the pool table we will be using for our project: a 40″ kids-sized pool table (roughly 2/3 the size of a regulation table). We decided on this because it best fits our budget while still giving us a working table to use. Additionally, the smaller size will make it easier for the camera to capture the whole board, which saves more budget since we won't need to buy an extra camera or a more expensive one with a wider lens.
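As a rough sanity check on single-camera coverage, the minimum mounting height follows from simple trigonometry. The table dimension and field of view below are illustrative assumptions (a typical webcam has roughly a 78° horizontal FOV), not measured values:

```python
import math

def min_mount_height(table_dim_m: float, fov_deg: float) -> float:
    """Minimum camera height at which the given field of view spans one table dimension."""
    return (table_dim_m / 2) / math.tan(math.radians(fov_deg) / 2)

# Assumptions: the long playing dimension of a 40" table is roughly 0.86 m,
# and the camera has about a 78-degree horizontal field of view.
TABLE_LONG_M = 0.86
height = min_mount_height(TABLE_LONG_M, 78)
print(f"Mount the camera at least {height:.2f} m above the table")
```

Even with these rough numbers, the camera only needs to sit about half a meter above the playing surface, which supports the conclusion that one ordinary camera can frame the whole board.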

Ultimately, there is no change to our schedule – we are all on track (if not ahead of schedule) in our respective roles.

Samuel Telanoff Status Report 2/8/25

Most of my time this week was dedicated to figuring out how best to build the physics simulation for the software side of the project. I have been reading different papers and watching YouTube videos to better understand how the physics engine should work; the papers I read and the video I watched are linked below. I am also debating between a graph-based approach and some sort of heat map combined with the intermediate value theorem for our simulation. Additionally, I searched Amazon and Facebook Marketplace for cheap pool tables to use for our project. I am currently on schedule and will begin coding the simulation next week. I hope to fully simulate a pool shot within the next two weeks.


https://ekiefl.github.io/2020/04/24/pooltool-theory/

https://blog.roboflow.com/pool-table-analytics-object-detection/

https://www.youtube.com/watch?v=vsTTXYxydOE&t=1072s
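As a starting point for the simulation work described above, here is a minimal sketch of single-ball motion with sliding friction and elastic cushion bounces. All constants, dimensions, and the function itself are illustrative assumptions, not part of the actual DeepCue design:

```python
import math

# Illustrative constants (not measured from our table)
G = 9.81                      # gravitational acceleration, m/s^2
FRICTION = 0.2                # assumed sliding-friction coefficient
TABLE_W, TABLE_H = 0.86, 0.43 # playing surface in meters (~40" table)
BALL_R = 0.026                # ball radius in meters

def simulate_shot(x, y, speed, angle, dt=0.001):
    """Roll one ball until it stops, reflecting elastically off the cushions."""
    vx = speed * math.cos(angle)
    vy = speed * math.sin(angle)
    decel = FRICTION * G
    while math.hypot(vx, vy) > 1e-3:
        x += vx * dt
        y += vy * dt
        # Elastic cushion bounce: flip the velocity component and clamp position
        if x < BALL_R or x > TABLE_W - BALL_R:
            vx = -vx
            x = min(max(x, BALL_R), TABLE_W - BALL_R)
        if y < BALL_R or y > TABLE_H - BALL_R:
            vy = -vy
            y = min(max(y, BALL_R), TABLE_H - BALL_R)
        # Friction decelerates the ball along its direction of travel
        v = math.hypot(vx, vy)
        dv = decel * dt
        if dv >= v:
            vx = vy = 0.0
        else:
            vx -= dv * vx / v
            vy -= dv * vy / v
    return x, y
```

A real engine would also need ball-ball collisions, spin, and pocket geometry, but a time-stepped loop like this is one plausible baseline to compare against the graph or heat-map approaches mentioned above.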


Luke Han Status Report 2/8/25

This week, I mainly focused on researching suitable MPUs (microprocessor units) for our embedded system, specifically evaluating options that can handle real-time computer vision, physics simulation, and wireless communication. Given our project's requirements (1080p camera input, LIDAR motion detection, and a projector for displaying shot predictions), I analyzed three potential choices:

1. NVIDIA Jetson Orin NX, which offers GPU acceleration (CUDA, TensorRT, OpenCV) for real-time vision processing and AI-based shot prediction. This would be ideal for handling object detection and physics calculations locally, reducing reliance on external compute resources.

2. Rockchip RK3588, which is a cost-effective alternative with an 8-core CPU and built-in AI acceleration (NPU), making it a good balance between performance and price. It supports multiple peripherals (USB 3.0, HDMI, Wi-Fi) and can offload heavy computations if needed.

3. Raspberry Pi 5 + Coral TPU, a more budget-friendly option that requires an external TPU (Google Coral) for AI-based object detection. This is feasible, but it may not provide the same level of real-time performance as the Jetson Orin NX.

Each option has its trade-offs in terms of cost, ease of development, and computational power. Right now, I’m still weighing the pros and cons of these choices to determine which will best suit our project’s needs.

I am currently on schedule and have gathered enough information to make an informed decision. Next week, I plan to finalize my choice, order the selected MPU, and begin setting up the embedded system. My initial goal is to configure the hardware, and I hope to have the basic setup working by the end of the week.