Debrina’s Status Report for April 20, 2024

This week I am on schedule. On the project management side, we spent a good amount of time planning the final presentation, which is scheduled for April 22nd. We also worked on making our system more robust and prepared a demo video to show during the final presentation.

In terms of progress on our system, I continued running tests to identify and resolve issues that were causing crashes on some edge cases. On the structural side, this week I also installed lighting in our environment and created a permanent mount for our projector and camera, while keeping their vertical position adjustable so that we can still reposition them as we debug. A big issue this week was calibration of the projector. Our original plan was to manually align the projector to the table by adjusting the projector’s zoom ratio and position after making the table image full screen on our laptop. However, this approach has limitations, since the projector’s settings are not as flexible as we had hoped. Another solution we tried this week to speed up the calibration process is to stream the video from our camera through Flask, where we have more freedom to adjust the zoom ratio and position of the video feed and can therefore align it with the table more precisely. This is still an ongoing effort that I will continue to work on in the coming week.
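For reference, below is a minimal sketch of the kind of Flask stream we are experimenting with for calibration: the camera feed is served as MJPEG, with a software zoom and offset applied to each frame so the projected image can be nudged into alignment with the table. The `ZOOM` and offset values here are placeholders, not our actual calibration numbers.

```python
# Sketch of a Flask MJPEG stream with an adjustable software zoom/offset.
# ZOOM, OFFSET_X, OFFSET_Y are illustrative tuning knobs, tuned by eye.
import cv2
from flask import Flask, Response

app = Flask(__name__)
cap = cv2.VideoCapture(0)

ZOOM = 1.1                    # scale factor applied to each frame
OFFSET_X, OFFSET_Y = 20, -10  # pixel shift after scaling

def adjust(frame):
    h, w = frame.shape[:2]
    # Scale about the image center, then translate by the offsets.
    M = cv2.getRotationMatrix2D((w / 2, h / 2), 0, ZOOM)
    M[0, 2] += OFFSET_X
    M[1, 2] += OFFSET_Y
    return cv2.warpAffine(frame, M, (w, h))

def frames():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, buf = cv2.imencode('.jpg', adjust(frame))
        if not ok:
            continue
        yield (b'--frame\r\nContent-Type: image/jpeg\r\n\r\n'
               + buf.tobytes() + b'\r\n')

@app.route('/video')
def video():
    return Response(frames(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

The appeal of this approach is that the zoom and offset live in software, so we can tweak them without touching the projector’s limited hardware settings.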

Another big focus for this week was making further improvements to the ball detections. In order to have more stable detections, I implemented a state detection algorithm that detects whether the positions of the balls on the table have shifted; when they have, the ball detections are re-run and the new set of ball locations is passed into the physics model for computation. Currently, state changes are keyed off the position of the cue ball, so if a different ball moves but the cue ball does not, the ball detections are not updated. This is a difficult issue to solve, however, since matching individual balls across different detection passes is computationally intensive. I will be working on this limitation in the coming week to make the algorithm more robust.
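Roughly, the cue-ball-based check looks like the sketch below. The names `detect_cue` and `detect_all` stand in for our detection calls, and the movement threshold is an illustrative placeholder rather than our tuned value.

```python
# Simplified sketch of the cue-ball-based state check: if the cue ball has
# moved more than MOVE_THRESHOLD pixels since the last stable state, re-run
# the full ball detection and hand the new positions to the physics model.
import math

MOVE_THRESHOLD = 15  # pixels; placeholder value

class BallStateTracker:
    def __init__(self):
        self.last_cue_pos = None     # (x, y) of cue ball in last stable state
        self.last_detections = None  # cached full set of ball locations

    def update(self, frame, detect_cue, detect_all):
        """detect_cue and detect_all are hypothetical detection callables."""
        cue = detect_cue(frame)
        if cue is None:
            # Cue ball not found (e.g. occluded): keep the previous state.
            return self.last_detections
        if (self.last_cue_pos is None
                or math.dist(cue, self.last_cue_pos) > MOVE_THRESHOLD):
            # State changed: re-run the full detection and cache the result.
            self.last_detections = detect_all(frame)
            self.last_cue_pos = cue
        return self.last_detections
```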

Besides projector calibration and improvements to the ball state detection, in the coming week I also plan to continue conducting tests and fix any bugs that remain in the backend.


As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

On the technical side, I learned a lot about computer vision, Python development, and object-oriented programming in Python. I tried out many different tools from the OpenCV library in order to implement the object detections, since they often yielded inaccurate results; I had to research different methods, implement them, and compare their results against each other to determine which ones were more accurate and under what lighting and image conditions. Sometimes the meaning of the parameters used by these methods was not very clear, and to learn more about them I would either experiment with different values and inspect the changes they made, or consult blogs that discussed a sample implementation of the tool in the author’s own project. In terms of Python development, I learned a lot about API design and object-oriented programming. Each detection model I implemented is a class that holds parameters for tracking historical data and returns detections based on that history. I also tried to standardize the APIs across models to ease our integration efforts. Furthermore, since our system is a long-running process, I focused on implementing lower-complexity algorithms so that the backend runs faster. Most of this learning came through trial and error: experimenting with the tools, observing the behavior of the implementation, and reading documentation and implementation examples to fix bugs or tune the parameters used by the different tools.
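As a rough illustration of the standardized shape our detection classes follow, here is a sketch of the common interface; the class and method names are illustrative, not the exact ones in our repository.

```python
# Sketch of a shared detector interface: each detector keeps a bounded history
# of results and exposes a single detect(frame) entry point for the backend.
from collections import deque

class BaseDetector:
    def __init__(self, history_len=10):
        # Bounded history so memory stays constant in a long-running process.
        self.history = deque(maxlen=history_len)

    def detect(self, frame):
        result = self._run(frame)   # subclass-specific OpenCV logic
        self.history.append(result)
        return self._smooth()       # optionally stabilize using past frames

    def _run(self, frame):
        raise NotImplementedError

    def _smooth(self):
        # Default: return the newest result; subclasses can average history.
        return self.history[-1]
```

Keeping every model behind the same `detect(frame)`-style entry point is what made it straightforward to swap detectors in and out during integration.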
