Team Status Report for April 27, 2024

The most significant risks that could jeopardize the success of the project concern the environment of the setup on demo day. Specifically, our system relies on consistent, moderately bright lighting to function optimally; non-ideal lighting conditions would result in faulty detection of the cue ball or other objects. To mitigate this risk, we made specific requests about where our project should be located and modified our code so that we can make on-the-fly parameter adjustments based on the given lighting conditions. No large changes were made to the existing design of the system; most of this week's work was testing, verification, small optimizations, and cleanup. No change to the schedule is needed – we are on track and proceeding as planned.

Unit Tests:

Cue Stick Detection System:

  • Cartesian-to-polar
  • Checking image similarity (pixel overlap)
  • Frame history queue insertion, deletion, dequeuing
  • Computing slope
  • Polar-to-Cartesian point
  • Polar-to-Cartesian line
  • Walls and pool table masking
  • Extracting point pairs (rectangle vertices) from boxPoints
  • Primary findCueStick function
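
As a concrete example, here is a minimal sketch of what a unit test for the coordinate-conversion helpers above might look like. The function names cartesian_to_polar and polar_to_cartesian are hypothetical stand-ins for our actual helpers.

    import math
    import unittest

    # Hypothetical stand-ins for our actual conversion helpers.
    def cartesian_to_polar(x, y):
        return math.hypot(x, y), math.atan2(y, x)

    def polar_to_cartesian(r, theta):
        return r * math.cos(theta), r * math.sin(theta)

    class TestCoordinateConversions(unittest.TestCase):
        def test_round_trip(self):
            # Converting to polar and back should recover the original point.
            for x, y in [(1.0, 0.0), (0.0, 2.0), (-3.0, 4.0)]:
                r, theta = cartesian_to_polar(x, y)
                x2, y2 = polar_to_cartesian(r, theta)
                self.assertAlmostEqual(x, x2)
                self.assertAlmostEqual(y, y2)

    if __name__ == "__main__":
        unittest.main()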

Ball Detection System:

  • getCueBall function
  • Point within wall bounds
  • Green masking of pool table
  • Creating mask colors
  • getBallsHSV
  • Finding contours of balls after HSV mask applied
  • Remove balls within pockets
  • Point-to-line distance
  • Ball collision/adjacent to walls
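
As an illustration of the green-masking test above, here is a minimal sketch of isolating a pool table's green felt with an HSV mask in OpenCV. The HSV bounds are placeholder values; the ones we actually use are tuned to our table and lighting.

    import cv2
    import numpy as np

    def mask_green_table(frame_bgr):
        """Return a binary mask of the green felt in a BGR frame."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # Placeholder HSV bounds for green; real values are tuned per table/lighting.
        lower = np.array([40, 60, 60], dtype=np.uint8)
        upper = np.array([80, 255, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        # Remove small speckles so ball contours stand out against the felt.
        kernel = np.ones((5, 5), np.uint8)
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Example usage: contours of non-felt regions (candidate balls).
    # frame = cv2.imread("table.jpg")
    # mask = mask_green_table(frame)
    # contours, _ = cv2.findContours(cv2.bitwise_not(mask),
    #                                cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)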

Physics Engine/System:

  • Normal slope
  • Point-to-line distance
  • Checking if point out of pool table bounds
  • Line/trajectory intersection with wall
  • Reflected points, aim at wall
  • Finding intersection points of two lines
  • Extrapolate output line of trajectory
  • Point along trajectory line (or within some distance d of it)
  • Find new collision point on trajectory line
  • Intersection of trajectory line and ball
  • Main run_physics
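
Several of these helpers are standard geometry. As a sketch of the point-to-line distance and wall-reflection pieces (function names are hypothetical, and walls are assumed to be given as two points on a line):

    import math

    def point_to_line_distance(p, a, b):
        """Perpendicular distance from point p to the infinite line through a and b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
        return num / math.hypot(bx - ax, by - ay)

    def reflect_point_across_line(p, a, b):
        """Mirror p across the line through a and b (used for bank-shot aiming)."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
        fx, fy = ax + t * dx, ay + t * dy  # foot of the perpendicular
        return 2 * fx - px, 2 * fy - py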

System Tests:

Cue Stick Detection System:

  • Isolated stick, no balls
  • Stick close, next to cue ball
  • Stick among multiple balls
  • Random configurations (10x)
  • Full-length stick
  • Front of stick, at table edges (5x)
  • Different lighting conditions
  • IMU accelerometer request-response

Ball Detection System:

  • Random ball configurations (20x)
  • Similar colored balls close together (e.g. cue ball + yellow stripe + yellow ball)
  • Balls near pockets
  • Balls near walls
  • Different lighting conditions

Physics Engine/System:

  • Kiss shot: cue ball – ball – ball – pocket (20x)
  • Bank shot: cue ball – ball – wall (20x)
  • Normal shot: cue ball – ball – pocket (20x)

Webapp System:

  • Spin shot location request-response
  • End-to-end latency (processing each frame)

Tjun Jet’s Status Report for April 27, 2024

In the past week, I worked on designing the test suites for the ball prediction accuracy and shot calculation accuracy. I also helped mount the camera and projector onto the shelf. Finally, I worked on the slides for our final presentation.

For testing and verification, we had to measure the accuracy of our computer-vision ball detection and the accuracy of our shot calculations, and I helped design test suites for both. First, to test whether our ball detection was accurate, we designed a test suite covering balls spaced apart, balls near the pockets, balls adjacent to each other, and balls near the walls. We then projected the detected balls onto the table and measured the average distance between each actual ball and its detected counterpart. Our use case requirement was for this distance to be under 0.2 inches, and we achieved an average of less than 0.05 inches. Here is a picture of our test suite:

Similarly, for shot calculation accuracy, we performed two different tests. The first was taking 20 shots of each of three different types – normal shots, bank shots, and kiss shots – and measuring the accuracy of the resulting calculations. Here is what our test suites looked like:

To make these tests possible, we spent a good portion of the week adjusting the camera and projector mounting on the shelf that we bought, which ensured that the user could see the predicted trajectory on the pool table. Finally, I spent a good portion of the week working on the slides for our final presentation and preparing to present.

Over the week, I also continued working on the web application that supports spin and velocity selection. It provides recommendations to users based on how their actual shot compared to their selected spin and velocity.

I am currently on schedule with my tasks. From now until demo day, I plan to keep running the aforementioned test suites. Given that the accuracy of our bank shots was not very high, I want to improve it and make sure we get good results before demo day. I also hope to finish the web application and, hopefully, have spin and velocity calculations to show on demo day.

Debrina’s Status Report for April 27, 2024

This week I am on schedule. There wasn’t much planned on our schedule for this week, so most of the work I did involved finalizing our product to prepare for demo day.

In the beginning of the week (on Sunday), I spent a decent amount of time conducting more tests to collect data on how well our product fulfills its use case requirements. I also spent a while finalizing the final presentation, creating slides to present this data as well as our testing procedures. Beyond the presentation, I continued testing to evaluate the accuracy of our system and identify areas where further improvements could be made. In parallel, I worked on minor bug fixes and feature improvements to make the user experience smoother on demo day.

In the coming week, I plan to continue finalizing details in our system and fixing any remaining bugs. I also plan to fix the projector in a final position so that its projection can be easily calibrated to the tabletop on demo day, and to run tests under different lighting conditions to identify the parameters that will need to be adjusted on demo day to keep our system accurate. Our final poster and final video are also due this week, so I plan to dedicate a significant portion of time to planning their content and completing these deliverables.

Andrew’s Status Report for April 27, 2024

This week I was mostly wrapping up the project. I worked on the spin physics again, refining it and writing additional unit tests to verify its functionality. I helped debug some of the frontend CSS issues we were facing (resizing issues, behavior on different laptops). Additionally, I restructured some of the system's calibration to handle errors more robustly and to be easier to debug. I also ran various end-to-end tests to verify the accuracy of the system's trajectory prediction; this was in part for the final presentation and in part additional verification that we met our use case requirements. Our progress is on schedule. In the coming week, I will be working on the final demo as well as the poster, and I plan to make one last pass at the cue stick detection to squeeze out as much stability and precision as possible before the final demo.

Tjun Jet’s Status Report for April 20, 2024

In the past two weeks, I worked on creating an application for users to select their ideal spin and velocity of the ball, and worked with Andrew to improve the cue stick detection and the physics for spin collisions.

To make our project more interactive, we decided to add a web application where users select where they intend to hit the cue ball (to impart spin) and their intended shot strength. After indicating these preferences, the user executes their shot. Our system then provides recommendations on whether the user should have hit harder or softer, or whether their shot was good. Based on the final location of the ball, we also provide recommendations on the spin the user applied.

This week, I contributed to the front-end application that allows users to select the spin they want. The front end also displays a video of the current predictions. Here is a picture of what the application looks like:

I also spent a good portion of time understanding the physics equations behind cue ball spin. Since we could not call a physics engine directly in our implementation, we used the equations underlying one instead: we referenced the equations from the PoolTool engine. Understanding these equations helped us implement ball spin.

Here is what some of the equations looked like:
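
The original equations were embedded as images. As a rough illustration of the kind of relation involved (not necessarily the exact PoolTool equations we referenced), a frictionless collision between two equal-mass balls, with the object ball initially at rest, splits the cue ball's velocity along the line of centers, where \hat{n} is the unit vector from the cue ball's center to the object ball's center at impact:

    % Illustrative only: standard equal-mass, frictionless ball-ball collision
    % with the object ball at rest; not necessarily PoolTool's exact equations.
    \begin{align}
      \vec{v}_{\text{object}}' &= (\vec{v}_{\text{cue}} \cdot \hat{n})\,\hat{n} \\
      \vec{v}_{\text{cue}}'    &= \vec{v}_{\text{cue}} - (\vec{v}_{\text{cue}} \cdot \hat{n})\,\hat{n}
    \end{align}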

Two things that I really had to learn in order to work on a team project like capstone were GitHub and reading documentation. Firstly, GitHub was an extremely important tool for version control. Initially, we found it difficult to work on the same codebase while merging everything together, and we ran into a lot of issues like merge conflicts and poorly coordinated commits. As we went along, we understood better how GitHub worked and improved over time, which was important for accomplishing our tasks efficiently.

The second skill I found really useful is reading documentation quickly and sifting out the important information. This was very important when we were exposed to new material such as cv2 functions and other library functions. Along with GitHub, these are things that are not explicitly taught in a class but that I feel are very important and necessary knowledge for ECE engineers. Apart from the technical knowledge gained from capstone, these were the two skills I valued most from this experience.

Given that we had fully implemented our physics model before adding spin physics, we are currently on track with what we planned to accomplish this week. We have analyzed the different equations for spin physics and are almost done implementing them, which we will test tomorrow. We are a little behind on trajectory verification, which is the last bit of testing and verification we will have to do in our meeting tomorrow.


Andrew’s Status Report for April 20, 2024

This week consisted of a lot of optimization, improvement, and cleanup on my end, primarily around: 1) cue stick detection, and 2) incorporating spin into our physics engine and displaying it. For cue stick detection, we realized the stick trajectory was not very stable, and I tried many different ways of improving it. What ultimately worked best was a combination of color masking, Gaussian blur, contour detection, and fitting a minimum enclosing rectangle plus some geometry. The cue stick detection is now significantly more stable and accurate than before, which was huge for our system: cue stick detection is a crucial component, and if the stick detection is off, the usefulness of the whole system decreases significantly. The second part, which Tjun Jet and I worked on together, was incorporating spin into our physics engine. Specifically, we took a deep dive into the physics of pool ball spin and incorporated it into the cases for both ball-wall and ball-ball collisions. Further, we also take information about the user's strike (location of strike plus speed) and feed it into the physics engine, which uses this input to modify the predicted trajectory in real time. By linking the web application interface directly to the physics engine, the user can see in real time how spin will affect the ball's trajectory.
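
Here is a minimal sketch of the kind of detection pipeline described above. The color bounds are placeholders passed in by the caller; the real implementation tunes them to the stick's color and the lighting.

    import cv2
    import numpy as np

    def find_cue_stick(frame_bgr, lower_hsv, upper_hsv):
        """Locate the cue stick as the min-area rectangle around its color mask.

        Returns the four rectangle vertices, or None if no stick-like contour
        is found. lower_hsv/upper_hsv are placeholder bounds for the stick color.
        """
        blurred = cv2.GaussianBlur(frame_bgr, (5, 5), 0)
        hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lower_hsv, dtype=np.uint8),
                           np.array(upper_hsv, dtype=np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        stick = max(contours, key=cv2.contourArea)   # assume stick is the largest blob
        box = cv2.boxPoints(cv2.minAreaRect(stick))  # 4 vertices of enclosing rectangle
        return np.intp(box)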

Our progress is on schedule. In the coming week, we will finish our final presentation, present it, and make some last-minute touch-ups to the project. On a practical level, this project gave me a very hands-on introduction to computer vision applied to a specific problem; on the theoretical side, I also had to refresh my physics and take a deep dive into the physics of pool. I knew almost nothing about computer vision coming into this project, and I didn't have time to fully understand the theory by reading textbooks or taking a course. Instead, I found research papers, projects, videos, and other material that overlapped with what we wanted to do, and consumed that content. That was the learning strategy I used to acquire this new knowledge, and it showed me how important it is to limit the scope of what you are learning to the tasks at hand. If I had resorted to a traditional textbook or course, it would not have been possible to finish our system in time; much of my learning was on the fly and hands-on.

Team Status Report for April 20, 2024

This week, our team is on schedule. We have not made any modifications to our system's existing design; however, we did make some improvements to our testing plans. The main focus of this week was improving cue stick detection, implementing the spin feature, and conducting tests and validation of our system.

This week, we made more improvements to the cue stick detection model, which we had determined was a limiting factor causing some inaccuracies in our trajectory outputs. The new implementation is much more accurate than the previous method. We also extended our web application with a user interface where the user can select the location on the cue ball they would like to hit. We send this location to our physics model, which calculates the trajectory of the cue ball based on the amount of spin it will have upon impact. We are basing our spin model on an online physics reference and are currently implementing these equations, an effort we intend to complete in the following week. We will also test this feature at the end of the coming week.

In terms of testing and validation, we utilized the following procedures to conduct our tests. We are still in the process of finishing up our testing to gather more data, which will be presented in the final presentation on April 22nd. 

Trajectory Accuracy

  1. Aim cue ball to hit the target ball at the wall. 
  2. Use CV to detect when the target ball has hit the wall. 
  3. Take note of the target ball’s coordinate when it hits the wall. 
  4. Compare the difference between this coordinate and the predicted coordinate of the wall collision. 
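
As a sketch of step 4, the error metric can be computed as the Euclidean distance between the observed and predicted wall-collision coordinates. The variable names are hypothetical; in practice the pixel distance would also be converted to inches using the table's known dimensions.

    import math

    def trajectory_error(predicted, observed):
        """Euclidean distance between predicted and observed wall-collision points."""
        return math.dist(predicted, observed)

    # Hypothetical example: coordinates in pixels, scaled by a calibration factor.
    # error_px = trajectory_error((412.0, 88.5), (415.2, 90.1))
    # error_in = error_px * INCHES_PER_PIXEL  # INCHES_PER_PIXEL from table calibration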

Ball Detection Accuracy

  1. Position balls at various locations on the table, with special focus on target balls along the walls, near the pockets, and placed right next to each other.
  2. Apply the ball detection model and measure the distance between the actual ball's center and the center of the detected ball.

Latency

We time our code from when we first receive a new frame from the camera to when it finishes calculating a predicted trajectory. We time each of the possible trajectory cases (ball to ball, ball to wall, ball to pocket) separately, since the different cases may have different computation times.
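
A minimal sketch of such a timing harness, assuming a hypothetical process_frame entry point that runs detection and trajectory prediction on a single frame:

    import time
    from statistics import mean

    def time_pipeline(frames, process_frame):
        """Return per-frame latencies (seconds) for a list of captured frames.

        process_frame is a hypothetical entry point that runs detection and
        trajectory prediction on one frame.
        """
        latencies = []
        for frame in frames:
            start = time.perf_counter()
            process_frame(frame)
            latencies.append(time.perf_counter() - start)
        return latencies

    # Example: report the average latency per trajectory case.
    # for case, frames in {"ball-ball": bb, "ball-wall": bw, "ball-pocket": bp}.items():
    #     print(case, mean(time_pipeline(frames, process_frame)))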

The most significant risk we face now is the alignment of the projector's projection with the table. We noticed that the projector may introduce a horizontal distortion that prevents detected balls from being projected onto the correct locations on the table. While this does not affect the accuracy calculations in our backend, it may not give the most accurate depiction to the user.

In the following week, we will come up with better ways to calibrate the projector to yield more consistent projection alignment. We will also continue our testing and validation efforts and improve the system wherever the results show room for improvement. As mentioned earlier, we will continue to implement, improve, and test the spin feature.

Debrina’s Status Report for April 20, 2024

This week I am on schedule. On the project management side, we've spent a decent amount of time planning the final presentation scheduled for April 22nd. Furthermore, we've worked on making our system more robust and prepared a demo video to show during the presentation.

In terms of progress on our system, I continued running tests to identify and resolve issues that caused our system to crash on some edge cases. On the structural side, I installed lighting in our environment and created a permanent mount for our projector and camera, while keeping their vertical position adjustable so that we can still reposition them as we debug. A big issue this week was calibration of the projector. Our original plan was to align the projector to the table manually by adjusting the projector's zoom ratio and position once the table image was made full screen on our laptop. However, the projector's settings turned out to be less flexible than we had hoped. Another solution we tried to speed up calibration is streaming the video from our camera through Flask, where we have more freedom to adjust the zoom ratio and position of the video feed and can therefore align it with the table more precisely. This is still an ongoing effort that I will continue in the coming week.
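
As a rough sketch of the Flask streaming approach (a standard MJPEG pattern; the camera index and route name are placeholder choices):

    import cv2
    from flask import Flask, Response

    app = Flask(__name__)
    camera = cv2.VideoCapture(0)  # placeholder camera index

    def generate_frames():
        """Yield JPEG-encoded frames in the multipart format browsers can stream."""
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            # Zoom/offset adjustments for projector alignment would be applied
            # here, e.g. cropping and resizing the frame before encoding.
            ok, buffer = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            yield (b"--frame\r\n"
                   b"Content-Type: image/jpeg\r\n\r\n" + buffer.tobytes() + b"\r\n")

    @app.route("/video")
    def video():
        return Response(generate_frames(),
                        mimetype="multipart/x-mixed-replace; boundary=frame")

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)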

Another big focus this week was further improving the ball detections. To make them more stable, I implemented a state detection algorithm that detects whether the positions of the balls on the table have shifted; if so, ball detection is re-run and a new set of ball locations is passed into the physics model. Currently, state changes are keyed on the position of the cue ball, so if a different ball moves but the cue ball does not, the ball detections are not updated. This is a difficult issue to solve, as matching the different balls across different detections would be computationally intensive. I will be working on an improvement for this limitation in the coming week to make the algorithm more robust.
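
A minimal sketch of the cue-ball-keyed state check described above (names and threshold are hypothetical; the real implementation lives inside our detection classes):

    import math

    class BallStateTracker:
        """Re-runs ball detection only when the cue ball appears to have moved."""

        def __init__(self, move_threshold_px=10.0):
            # Hypothetical threshold: movement below this is treated as jitter.
            self.move_threshold_px = move_threshold_px
            self.last_cue_position = None

        def state_changed(self, cue_position):
            """Return True if the cue ball moved beyond the jitter threshold."""
            if self.last_cue_position is None:
                self.last_cue_position = cue_position
                return True
            moved = math.dist(cue_position, self.last_cue_position) > self.move_threshold_px
            if moved:
                self.last_cue_position = cue_position
            return moved

    # Usage: if tracker.state_changed(get_cue_ball(frame)), re-run full ball
    # detection and pass the new ball locations to the physics model.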

Besides the projector calibration and improving the ball state detection, in the coming week I also plan to continue conducting tests and fix any bugs that may still remain in the backend.


As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

On the technical side, I learned a lot about computer vision, Python development, and object-oriented programming in Python. I tried out many different tools from the OpenCV library to implement the object detections, since they often yielded inaccurate results; I had to research different methods, implement them, and compare their results to determine which would be more accurate and under what lighting and image conditions. Sometimes the meaning of a method's parameters was not very clear, and to learn more I would either experiment with different parameters and inspect the changes they made, or consult blogs that discussed a sample implementation of the tool in the author's own project. In terms of Python development, I learned a lot about API design and object-oriented programming: each detection model I implemented was a class holding parameters that keep track of historical data and return detections based on it. I also tried to standardize the APIs to ease our integration efforts. Furthermore, since our system is a long-running process, I focused on implementing algorithms with lower complexity so that our backend runs faster. My learning was mainly done through trial and error, experimenting with the tools and observing the implementation's behavior, and reading documentation and implementation examples to fix bugs or adjust the parameters used by the different tools.

Andrew’s Status Report for April 06, 2024

This week I modified our cue stick subsystem to use AprilTags, which affected accuracy both positively and negatively. The positive effect was that cue stick detection itself became more accurate and much more consistent. The negative was that the large AprilTags made our other computer vision subsystems behave unexpectedly; most detrimentally, our cue ball detection subsystem occasionally started mistaking the AprilTags themselves for cue balls. Additionally, detecting the cue stick now required both AprilTags to be within frame, which fails for some edge-case shots near the pool table walls. As such, we decided to revert to the previous polygon approximation method for now, and I am working on relying more on color detection for the cue stick: the idea is to use a very bright, unusual color (something like bright pink) as an indicator for the subsystem to pick up on.
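
For reference, here is a minimal sketch of the AprilTag approach we tried, assuming the pupil_apriltags detector (our actual library and tag family may have differed); the stick line is taken through the two tag centers.

    import cv2
    import numpy as np
    from pupil_apriltags import Detector  # assumed library; ours may have differed

    detector = Detector(families="tag36h11")

    def cue_stick_line_from_tags(frame_bgr):
        """Return the two AprilTag centers defining the stick line, or None."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        tags = detector.detect(gray)
        if len(tags) < 2:
            return None  # both tags must be in frame -- the edge case that hurt us
        # Take the two most confident detections as the butt and tip tags.
        tags = sorted(tags, key=lambda t: t.decision_margin, reverse=True)[:2]
        return np.array(tags[0].center), np.array(tags[1].center)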

Since our schedule is currently focused on integration, I am not too far behind schedule. I caught a pretty bad case of food poisoning on Tuesday night and could not do much work until the weekend, but I'm using the time now to catch up as well as to work on newer tasks. In the coming week, I will be looking into integrating ball spin from our web application input into our physics engine and backend.

For verification, the most important subsystem I need to verify and improve is cue stick detection. The tests I'm planning for this subsystem are fairly straightforward: they consist of frames and videos of taking various shots, as well as lining up a variety of different shots. Verifying the subsystem is not difficult – I step through frame by frame, output the detected cue stick, and check it visually. To back these results up, I also added unit tests verifying the smaller functional components required for the whole subsystem to operate successfully, and repeated this for parts of the calibration subsystem.