Debrina’s Status Report for March 23, 2024

This week I am on schedule, but we faced some issues in integration that may slow down progress in the coming week. Our team focused on integration to begin testing the full system, from object detection to physics calculation and trajectory projection. This testing helped us identify some bugs in our implementation triggered by particular edge cases, which led to unexpected interruptions and sequencing issues in our backend. Hence, this week I helped work on the physics model to add more functionality and to improve some existing implementations to account for edge cases.

In the physics model, I implemented a method to classify the wall that a trajectory line collides with. I also fixed some division-by-zero bugs that occurred whenever a trajectory line was vertical, which happens when a user points the cue stick directly upward or downward. Next, I worked on bugs in the wall reflection algorithm, since I noticed it only behaved as expected for specific orientations of the pool cue. I began extending the algorithm to account for walls that are not purely horizontal or vertical, and I modified the function to return a new trajectory (after the wall collision) instead of a point. This is crucial because the new trajectory is required for the next iteration of our physics calculations. With this new implementation, there are still a few bugs that cause the function to error when a collision occurs with the top or bottom walls. I will be consulting my teammates about these errors during our team meeting tomorrow (Sunday, March 24th).
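The wall classification and reflection steps described above can be sketched roughly as follows. This is a minimal illustration, not our exact implementation: the wall names, the table bounds, and the (start, end) tuple representation of a trajectory are all assumptions for the sake of the example.

```python
# Sketch of wall classification and reflection for a trajectory segment.
# TABLE_W/TABLE_H and the wall-name scheme are illustrative assumptions.

TABLE_W, TABLE_H = 800, 400  # table bounds in pixels (assumed)

def classify_wall(point, eps=1e-6):
    """Classify which wall a trajectory endpoint lies on."""
    x, y = point
    if abs(x) < eps:
        return "left"
    if abs(x - TABLE_W) < eps:
        return "right"
    if abs(y) < eps:
        return "top"
    return "bottom"

def reflect(start, end):
    """Return the new trajectory (a (start, end) pair) after a wall bounce.

    The direction component perpendicular to the wall is negated, and the
    parallel component is preserved. Returning a full segment rather than a
    single point gives the next physics iteration a direction to work with.
    """
    wall = classify_wall(end)
    dx, dy = end[0] - start[0], end[1] - start[1]
    if wall in ("left", "right"):
        dx = -dx          # vertical wall: flip the x component
    else:
        dy = -dy          # horizontal wall: flip the y component
    return (end, (end[0] + dx, end[1] + dy))
```

Working with direction vectors instead of slopes also sidesteps the division-by-zero case that arises for vertical trajectories.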

The edge cases that we discovered during preliminary testing made me realize that we could introduce further standardization in our backend model. I added a few more objects and specifications to the API that Tjun Jet designed in prior weeks. In particular, I added a new object called ‘TrajectoryLine’, which lets us explicitly differentiate between the starting point and the end point of a trajectory. This is helpful in components of our physics model where the direction of the trajectory is crucial.
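A 'TrajectoryLine' object along these lines might look like the sketch below. The field names and the `direction()` helper are assumptions for illustration; the point is that an ordered (start, end) pair carries direction information that a bare point cannot.

```python
# Hedged sketch of the 'TrajectoryLine' API object. Field names and the
# direction() helper are illustrative, not our exact definitions.
import math
from dataclasses import dataclass

@dataclass
class TrajectoryLine:
    start: tuple  # (x, y) where the trajectory begins
    end: tuple    # (x, y) where the trajectory ends

    def direction(self):
        """Unit direction vector pointing from start toward end."""
        dx, dy = self.end[0] - self.start[0], self.end[1] - self.start[1]
        length = math.hypot(dx, dy)
        return (dx / length, dy / length)
```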

In the coming week, I will work on modifying the parts of our backend that need to be updated to meet the new standardizations. I will continue to help with the physics model implementation and conduct testing on trajectory outputs to ensure that we have covered all possible cue stick orientations and pool ball locations. Regarding our project’s object detection model, I hope to improve the ball detections by using different color masks to better filter out noise caused by varying lighting conditions.

Image from one of our test cases: the trajectory of the cue ball as it reflects off the left wall. The cue stick is represented as two red points on the canvas.

Team Status Report for March 16, 2024

This week we worked on finalizing our backend code for object detection and physics. We were able to finish our physics model and can now begin testing its accuracy. We also cleaned up our backend code to ease the integration phase in the coming weeks: we created documentation for our APIs and refactored our code to align with our updated API design standards. We also took a few steps toward integrating our backend by combining the physics model and object detection model. In terms of the structure of our system, we were able to mount the projector and camera, and we will make our current setup sturdier in the following week. With these components in place, we are on track to begin testing our integrated design on a live video of our pool table this coming week.

Aside from making progress on our system, our group spent a decent amount of time preparing for the ethics discussion scheduled for Monday, March 18th. We each completed the individual components and plan to discuss parts 3 and 4 of the assignment tomorrow (Sunday, March 17th).

This week was a busy week for our team, so there were some tasks that we were unable to complete. Although we planned to create our web application this week, we will now let this task span the next two weeks and work on it in parallel with our testing of the different subsystems. We don’t foresee this delay posing significant risks, as the main component of our project (the trajectory predictions) does not depend on the web application.

We came up with some additional features to implement in our system, and after discussing them in our faculty meeting, we have a better idea of the design requirements. We plan to implement a learning system that gives users recommendations based on their past shots. This will require us to detect the point on the cue ball the user hits, as well as the outcome of the shot—whether or not the target ball fell into the pocket. We plan to complete the additional features this requires in the coming week.

The most significant risk we face now is the potential inaccuracy of our detection algorithms. As we begin testing and verification, it is possible that inaccuracies in our physics model or object detection model will cause our projected trajectory to be off. In that case, we would need to determine which components are causing the faults and devise strategies to fix them. We do have plans in place to mitigate these risks: if there are issues with the accuracy of our object detection models, we may use more advanced computer vision techniques to refine our images. In particular, we may use OpenCV’s background subtractor tools to better distinguish between the game objects and the pool table’s surface.

Testing the projection of the trajectory on our pool table
Cue ball detection (cue ball is circled in pink)
Our updated schedule. The testing for CV and physics and the web application tasks have been pushed up to week 9. (note that some of the CV tasks that have been completed have been omitted from the schedule)

Tjun Jet’s Status Report for March 16, 2024

This week, I focused on the ethics assignment, reformatted the entire physics model to fit our new code format, and started integrating our entire code pipeline. Most of the week was spent revamping our code to make sure we all conformed to our specific code format. Although this seemed like busy work, it was a huge step for us, as it will go a long way toward ensuring a more seamless integration process when we combine our code.

In my previous team status report, I talked about devising a software API framework for us to conform to. It turned out to be a good framework over the past week as we moved all of our current implementations over to it. The biggest learning point for me is that if we had done this earlier, we would have saved a lot of time and trouble revamping our entire codebase. This is an important takeaway for me as I go on to become a full-fledged engineer. With that said, a good portion of the week was spent moving my originally implemented subsystem over to the new format. I have tested the framework with a single image, and it is working fine.

This also led to my first steps toward integrating our code. Using the output from Debrina’s computer vision model, I was able to parse the images from the model and put them through the physics model, which returns an output_line to project. Tomorrow, our team will meet to try to integrate this on a real video feed. We will most likely face a few alignment and integration issues, but we will do our best to resolve any that arise.

Another big part of this week’s progress was the ethics assignment, on which I ended up spending around 5-6 hours, more than I expected. I found the research on global ethical issues and how technology intertwines with politics particularly interesting. It was also quite difficult to think of ethical issues that could arise with our eight-ball pool project, so it took a lot of time to reflect on and understand our project at a deeper level in order to consider its global, cultural, and social implications. As we write code to make our project successful, this assignment was essential in ensuring that we do not forget the importance of engineering ethics in any project. At our meeting tomorrow, our team will begin discussing Step 3 of the ethics assignment as well.

We are a few days behind schedule on what we wanted to accomplish. Initially, we wanted to start the integration effort on Friday. However, we felt that we were not ready to integrate our assigned parts, so we converted our Friday meeting into a work session and pushed the integration process back to tomorrow. Our team members now feel more ready, and we are excited to get our first taste of integrating all of our subsystems together.

Debrina’s Status Report for March 16, 2024

This week I was able to stay on schedule. I created documentation for the object detection models. I updated some of the return and input parameters of the existing object detection models to better follow our new API guidelines. Furthermore, I created a new implementation for detecting the cue ball and distinguishing it from the other balls. Previously, cue ball detection depended on the cue stick’s position, but we decided to change it to be dependent on the colors of the balls. To do this, I filtered the image to emphasize the white colors in the frame, then performed a Hough transform to detect circles in the image. Since this method of detecting the cue ball worked quite well on our test images, in the coming week I plan to try implementing a similar method to detect the other balls on the table in the hopes of being able to classify the number and color of each ball.

In terms of the physical structure of our project, I worked with my team to set up a mount for the projector and camera and perform some calibration to ensure that the projection could span the entire surface of our pool table setup. Aside from the technical components of our project, I spent a decent amount of time working on the individual ethics assignment that was due this week. 

In the next week I hope to continue integrating the backend subsystems and conduct testing. The object detection models will likely need recalibration and modifications to better accommodate the image conditions of our setup, so I will spend time on this in the coming week. Furthermore, I plan to create an automated calibration step in our backend. This step would detect the bounds of our pool table setup’s walls and the locations of the pockets, and tweak parameters based on the lighting conditions. It would allow us to recalibrate our system more efficiently on startup in order to yield more accurate detections.

Cue ball detection (cue ball is circled in pink)

Andrew’s Status Report for March 16, 2024

This week I wasn’t able to get much done. I was supposed to extend the web application by building a user-friendly display for the accelerometer and gyroscope data and integrating the camera feed into the web application. However, I had three midterms and other homework due this week. I made some progress on the display, but I was not able to finish it or the camera feed integration. I’m aiming to use the entirety of this Sunday (tomorrow) to finish the work, and our team plans on fully integrating everything tomorrow as well.

Something to note, however, is that after discussions with Professor Kim, we realized that the raw gyroscope and accelerometer data would not be very accurate to display. We thought of doing some sort of meter or “relative” display in order to use this data, but we need to pivot to a better way of utilizing the data to display user recommendations. For now, we were advised not to spend too much time on getting the IMU to work precisely.

For plans in the coming week, we hope to be able to integrate everything together and demo a working version of our project. This will probably involve a lot of debugging and potential modifications to the pool table/metal frame in order to get the computer vision components working as best as possible.

Tjun Jet’s Status Report for March 9, 2024

This week, I created an API for the team’s code, worked on various parts of the design report, and continued working on the physics model. Most of the week was spent writing the design report, which I will go through in detail in the next few sections. As planned in our schedule, my group members and I didn’t do any project work over break.

I mainly focused on two areas of the report – Architecture and Principles of Operation, and Project Management. I spent a decent amount of time on the Architecture and Principles of Operation, and it was crucial for us to present both our hardware and software designs. Thus, I spent quite a bit of time illustrating the system block diagram, mechanical structure of our pool table, and placement of items on our cue stick. Images of those are shown below. 


Furthermore, as our codebase gets larger, our code is starting to get convoluted and messy, which could eventually lead to problems when we integrate. Thus, I spent a good amount of time this week brainstorming a software API framework for our team to conform to. Our API framework follows a model-view-controller architecture. Our high-level model keeps track of items that are constantly changing, such as an array of ball detections, the coordinates of the cue, the coordinates of the cue ball, and the IMU information. Each of us will program various subsystems, but most importantly, we are constantly reading from and updating this model. The drawing of the predicted trajectory will be handled by “view” functions, which do not update the state of the model and are only used for drawing.
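The shared model at the center of this architecture might look like the sketch below. The field names and the update method are illustrative assumptions; the key idea is that subsystems write detections into one state object while view functions only read from it.

```python
# Sketch of the shared model in a model-view-controller layout.
# Field names are illustrative, not our exact API.
from dataclasses import dataclass, field

@dataclass
class TableState:
    balls: list = field(default_factory=list)  # [(x, y, radius), ...] detections
    cue_line: tuple = None                     # cue stick endpoints
    cue_ball: tuple = None                     # (x, y) of the cue ball
    imu: dict = field(default_factory=dict)    # latest accel/gyro reading

    def update_detections(self, balls, cue_line, cue_ball):
        """Controller-side update: replace detections for the new frame."""
        self.balls, self.cue_line, self.cue_ball = balls, cue_line, cue_ball
```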

I am currently on schedule. The physics model’s implementation is complete, and we are looking to integrate all of our components this week. We will run a full live-video-feed process by the end of this week, which puts us slightly ahead of schedule. Then we will start integrating the IMU data and possibly look into features like spin detection. Before that, I believe we will spend a lot of time this week calibrating and tweaking the positions of the cameras, projector, and pool table to make sure everything is aligned and meets a portion of our verification metrics.

Debrina’s Status Report for March 9, 2024

This week (the week prior to Spring break) I was able to stay on schedule. I was able to conduct tests of the object detection models we have implemented on a live video feed of the pool table. Although we have not yet set up the permanent camera position on our pool table setup, the video feed on which the object detection model was run was able to detect the balls and pockets of the pool table quite accurately. I did some tweaking of the thresholds used in our models (such as the thresholds used in Canny edge detection and Hough line transforms) in order to make our models more accurate for the specific lighting conditions of our setup. The main focus for this week was completing the Design Review Report. I wrote the Introduction, Use Case Requirements, Design Requirements, Related Work, and Summary. 

Initially, I had planned to implement the classification model for solid and striped balls this week. However, I decided to postpone it in order to focus on conducting tests and improving the models on a live video feed. Hence, in the coming week I will be working on the solids and stripes detection. Furthermore, I hope to set up a permanent mount and position for the camera. This would allow me to also finalize the wall detection models. Lastly, I will work on updating the APIs of the object detection models based on the new API designs that we had made this week (we had revamped the API design of our system to make integration more seamless).

Testing the ball detection model on a live video feed (prior to calibration)

Team Status Report for March 09, 2024

This week we were on schedule for our goals. We made progress on the web application and refined the cue stick system. Building out the web application involved a frontend (React) and a backend (Flask/SQLite), though we are considering phasing out the backend if WiFi speeds are fast enough for us to go directly from the ESP32 to the frontend. The Arduino Nano was a convenient way to test the software with a wired connection; we have since phased out the Nano and are using just the ESP32 module, which acts as a server and exposes API endpoints to get the gyroscope and accelerometer data from the IMU.

We also calibrated our projector, tested the camera, and adjusted the height of various components. The projector needed to be high enough to display over the entire pool table, and the camera needed to be high enough for the table to fit almost exactly into its FOV. After testing, we decided that the projector and camera needed to be much higher, which involves shifting the pool table down. For our structure, this is extremely convenient since the metal shelf frame has notches; we shifted the pool table to lower notches to increase the vertical distance between the table and the camera/projector.

In addition, we spent a good chunk of time this week writing the Design Review report. This was a significant effort, as the report spanned 11 pages, and we split it up evenly. Debrina wrote the introduction, use case requirements, and design requirements; Andrew wrote the abstract, design trades, testing & verification, and part of the risk mitigation; Tjun Jet wrote the rest. This was a good checkpoint for us, since it forced us to document in detail all the work we’ve finished so far and assess next steps.

A risk that could jeopardize the project is the detection of the cue stick. Detecting the stick without any machine learning is proving to be a hard task, and the results are not always consistent using contour and color detection alone. The detection can be further improved with more layers and more sophisticated heuristics. However, in the event that we still cannot detect the cue stick with high accuracy, we will opt for a more reliable solution like AprilTags. The infrastructure for AprilTag detection is already built into our system, so in the worst case we could attach AprilTags to the cue stick so our camera can detect it easily. This is our main contingency plan for this problem.

Part A (written by Andrew):

The production solution we are designing will meet the need for an intuitive pool training system. Taking global factors into account, pool is a game played by people all over the world, with origins dating back to the 1300s. As such, we need a system that is culture- and language-agnostic: it should work for someone in the United States as well as someone in France, Italy, etc. This requires our product to use minimal country-specific information or context. Since we are used to living in the United States, we must think critically about our implicit biases. For instance, we designed our product to rely heavily on visual feedback instead of text. The trajectory prediction in particular is country-, culture-, and language-agnostic and so will meet worldwide contexts and factors. Additionally, we built the system to be as simple as possible, requiring minimal effort on the part of the user. This accounts for variations in different forms of pool and also aids players who are not as technologically savvy; a more complicated or convoluted primary method of feedback would be a detriment to them.

Part B (written by Debrina):

The production solution we are designing meets the need for a pool training system with an intuitive user interface. The consideration of cultural factors is similar to that of global factors. Our training system makes no assumptions about which languages our users speak or their level of language proficiency. The feedback provided by our pool training system is purely visual, which makes its interface easily understood by users of all cultures and backgrounds. Our product also has the potential to expand the accessibility and popularity of billiards in cultures where the game is not as widespread. Currently, billiards is more popular in some countries and cultures than others (the United States, United Kingdom, and Philippines, to name a few). Our product’s intuitive user interface could help promote the game in other cultures, and its versatility provides people of all cultures an opportunity to learn to play billiards.

Part C (written by Tjun Jet):

CueTips considers various environmental factors in the selection of our materials, our power consumption, and the modularity and reusability of our product. When selecting the material for the frame of our pool table, we considered not only the biodegradability of the material but also its lifespan. We bought a frame that consists of wooden boards to hold the pool table and metal frames as the stand. We initially considered using plywood planks to build our own frame, but decided against it when we realized that prototyping and rebuilding the frame over and over could lead to a lot of material wastage. Furthermore, our frame is modular and reusable, meaning it is easy to take apart and rebuild. For instance, if a consumer is moving the pool table, it will be easy for them to take it apart and rebuild it in another area without large transportation costs. To ensure appropriate lighting for camera detection, we also lined the frame with LED NeoPixels, which are generally energy-efficient compared to traditional lighting; used efficiently, they minimize unnecessary power consumption. Thus, by carefully selecting biodegradable materials, choosing energy-efficient lighting, and making our product easily transportable, modular, and reusable, CueTips aims to provide a pleasant yet environmentally friendly pool learning experience.

Andrew’s Status Report for March 09, 2024

This week consisted of refining the cue stick system, creating the local application, and testing the projector output system. When I began building the cue stick system, I realized that the Arduino Nano was redundant, since the ESP32 is itself a microcontroller, just augmented with WiFi/Bluetooth. The first version of this module used only the Arduino Nano and was wired. Because this was much easier to set up, I built V1 of our cue stick system with just the Arduino Nano to test functionality. Once I verified it was working, I replaced the Arduino Nano with the ESP32 module. It works by acting as a server accessible over WiFi on a specified port. When the endpoints “/accel” and “/gyro” are queried, it constructs an HTTP response containing the accelerometer and gyroscope data, respectively.

This leads into the local application. I built a first version of the web app using React/Flask. The Flask backend acts as lightweight storage for the web app: it routinely polls the ESP32 module for recent gyroscope and accelerometer data, stores it in a local SQLite database, and returns paginated data to the React frontend when requested. I initially created this Flask backend because I didn’t think WiFi speeds were fast enough to accommodate how much data we were going to receive every few milliseconds. However, back-of-the-envelope calculations on typical WiFi router speeds suggest it might be possible to do away with the entire backend and poll the ESP32 module directly. An argument for keeping the backend is to eventually integrate it with the physics engine, where we would need storage or more complex behavior. I also tested the projection system and had to modify the coloring scheme: the projector is too weak in well-lit environments, so we have to use only black and white in the projection to get enough contrast to be clearly visible.
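The storage-and-pagination layer described above can be sketched with the standard-library sqlite3 module (the real version sits behind Flask routes). The table name, column names, and page size here are assumptions for illustration.

```python
# Sketch of the backend's SQLite storage for polled IMU samples, with a
# paginated read for the frontend. Schema names are assumed placeholders.
import sqlite3

def open_store(path=":memory:"):
    """Open (or create) the local IMU sample store."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS imu_samples
                  (id INTEGER PRIMARY KEY AUTOINCREMENT,
                   ax REAL, ay REAL, az REAL,
                   gx REAL, gy REAL, gz REAL)""")
    return db

def store_sample(db, accel, gyro):
    """Called each time we poll the ESP32's /accel and /gyro endpoints."""
    db.execute("INSERT INTO imu_samples (ax, ay, az, gx, gy, gz) "
               "VALUES (?, ?, ?, ?, ?, ?)", (*accel, *gyro))
    db.commit()

def get_page(db, page, per_page=20):
    """Paginated read for the React frontend, newest samples first."""
    cur = db.execute("SELECT ax, ay, az, gx, gy, gz FROM imu_samples "
                     "ORDER BY id DESC LIMIT ? OFFSET ?",
                     (per_page, page * per_page))
    return cur.fetchall()
```

Keeping pagination in SQL (LIMIT/OFFSET) avoids loading the whole sample history into memory on each frontend request.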

My progress this week is on schedule. For next week, I hope to flesh out the web app by building an intuitive, user-friendly display for the accelerometer and gyroscope data. I am also integrating the camera feed into the web application.

Team Status Report for February 24, 2024

This week we mostly stayed on schedule for our goals. All of our parts arrived this week, so we have mainly been working on building the pool table structure. The main items we are working with are the BNO055 IMU, the ESP32 WiFi module, and the shelf for the pool table. So far, we have not made any changes to the design of our system, and our schedule remains the same.

This week’s focus was refining the software of our project. Given that we managed to accomplish most of our items last week, this week we mainly focused on improving the accuracy of those software components, as well as experimenting with the newly arrived parts. Furthermore, we also built the shelf for the pool table, which was a significant step for our project.

All three of us built the shelf and mounted the pool table on it. Andrew focused on figuring out the outputs to the projector and completed the integration with the BNO055 IMU. Debrina focused on improving the accuracy of the computer vision detection models and tried HoughCircles this week. Tjun Jet successfully implemented the physics models and is awaiting accurate detections to do full testing. In the coming week, we hope to start mounting our camera and projector onto the shelf and do a full-loop test, from live video detection to actually projecting the detected trajectory onto the table.

Currently, the most significant risk is that our projector is not as high-powered as we thought. When we tried projecting from the top, we realized the image is not very bright, even at maximum power. Furthermore, to make the projection fit the pool table just right, we have to elevate the projector, which may make it dimmer. We will look to adjust the lighting conditions so that this does not get worse. We also realized that reflections on the balls make it harder for edge detection to succeed, so we will probably have to invest in additional lighting and NeoPixels to minimize shadows and reflections on the balls and yield the most accurate detections.

Here are some images of our progress for this week:

Structural setup of our playing field
Diagram of our ball collision model
Example output from our pocket detection model