Tjun Jet’s Status Report for April 27, 2024

In the past week, I worked on designing the test suites for measuring our ball prediction accuracy and shot calculation accuracy. I also helped to mount the camera and projector onto the shelf. Finally, I worked on the slides for our final presentation.

As part of our testing and verification, we had to measure the accuracy of our computer vision ball prediction and of our shot calculations, so I helped design test suites for both. To test whether our ball prediction was accurate, we designed a test suite covering balls separated from each other, balls near the pockets, balls adjacent to each other, and balls near the walls. We then projected the predicted balls onto the table and measured the average distance between each actual ball and its predicted position. Our use case requirement was that this distance be under 0.2 inches, and we achieved an average of less than 0.05 inches. Here is a picture of our test suite:
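As a rough sketch of how this error metric can be computed (the coordinates below are made-up illustrative values, not our measured data):

```python
import numpy as np

# Hypothetical measured (x, y) ball centers in inches: predicted vs. actual.
predicted = np.array([[10.0, 20.0], [35.2, 14.1], [50.3, 40.0]])
actual    = np.array([[10.03, 20.02], [35.18, 14.14], [50.28, 40.03]])

# Per-ball Euclidean distance between predicted and actual centers.
errors = np.linalg.norm(predicted - actual, axis=1)
mean_error = errors.mean()

# Use case requirement: average error under 0.2 inches.
assert mean_error < 0.2
print(f"mean error: {mean_error:.3f} in")
```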

Similarly, for shot calculation accuracy, we performed two different tests. The first test was taking 20 shots of each of three different types – normal shots, bank shots, and kiss shots – and measuring the accuracy of those calculations. Here is what our test suites looked like:

In order to run these tests, we spent a good portion of the week mounting and adjusting the camera and projector on the shelf that we bought. This ensured that the user could see the predicted trajectory on the pool table. Finally, I spent a good portion of the week working on the presentation slides, as well as on my final presentation.

Over the week, I also continued working on the web application that supports spin and velocity selection. It provides recommendations to users on whether their actual shot matched the spin and velocity they selected.

I am currently on schedule to finish my tasks. From now until demo day, I plan to continue running the aforementioned test suites. Given that the accuracy of our bank shots was not that high, I want to improve it and make sure that we get good results before demo day. I also hope to finish the web application, and hopefully work on spin and velocity calculations to show on demo day.

Tjun Jet’s Status Report for April 20, 2024

In the past two weeks, I worked on creating an application for users to select their ideal spin and velocity of the ball, and also worked together with Andrew to improve the cue stick detection and the physics for spin collisions.

In order to make our project more interactive, we decided to add a web application for users to select where they intend to hit the ball (to provide spin) and their intended strength of the shot. After the user has indicated their preferences, the user executes their shot. Upon executing the shot, our system provides recommendations on whether the user should hit it harder or softer, or whether their shot was good. Based on the final location of the ball, we also provide recommendations on the spin that the user applied.
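A minimal sketch of the kind of strength recommendation rule described above (the function name, normalized strength scale, and tolerance band are all illustrative assumptions, not our actual implementation):

```python
def strength_feedback(intended, actual, tol=0.1):
    """Hypothetical recommendation rule: compare the actual shot strength
    against the user's intended strength (both normalized to 0..1)."""
    if actual < intended - tol:
        return "hit it harder"
    if actual > intended + tol:
        return "hit it softer"
    return "good shot"

print(strength_feedback(0.5, 0.8))  # hit it softer
```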

This week, I contributed to the front-end application that allows users to select the spin they want. The front end also displays a video of the current predictions. Here is a picture of what the application looks like:

I also spent a good portion of time understanding the physics equations behind cue ball spin. As we weren't able to call a physics engine directly in our implementation, we used the physics equations associated with one. The engine we referenced was PoolTool, and understanding its equations helped us implement ball spin.

Here is what some of the equations looked like:

Two things I had to learn and pick up in order to work on a team project such as capstone were GitHub and reading documentation. Firstly, GitHub was an extremely important tool for version control. Initially, we found it difficult to work on the same codebase while merging everything together at the same time, and we ran into a lot of issues like merge conflicts and poorly coordinated commits. As we went along, we understood how Git worked and improved over time, which was important for the efficiency of accomplishing our tasks.

The second skill I found really useful is reading documentation quickly and sifting out the important information. This was very important when we were exposed to new material like cv2 functions and other library functions. Along with GitHub, these are things that are not explicitly taught in a class, but I feel they are important and necessary knowledge for ECE engineers. Apart from the technical knowledge gained from capstone, these were definitely the two skills I valued most from this experience.

Given that we have fully implemented our physics model apart from spin physics, we are currently on track for what we want to accomplish this week. We have analyzed the different equations for spin physics and are almost done implementing them, which we will test out tomorrow. We are a little behind on trajectory verification, which is the last bit of testing and verification we will do in our meeting tomorrow.

 

Tjun Jet’s Status Report for April 6, 2024

This week, I continued fine-tuning the physics model and improving its accuracy. I also helped with some of the testing and verification methods for projecting the ball onto the table to verify its accuracy.

When projecting the output line onto the screen, I realized that there was a flaw in my wall detection model: I was not accounting for the ball's radius when modeling the ball bouncing off the wall. The reflection assumed that the ball's center reflects off the wall, when in fact it is the ball's surface that makes contact. To rectify this, I shifted all the walls toward the center of the table by the cue ball's radius and re-performed the calculations from there. This returned the correct location at which the cue ball bounces off the wall.
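A minimal sketch of the fix, assuming an axis-aligned table and illustrative names (the 1.125 in radius is the standard American pool ball radius, which may differ from our table's balls):

```python
BALL_RADIUS = 1.125  # inches; standard American balls are ~2.25 in across

def inset_walls(left, right, top, bottom, r=BALL_RADIUS):
    """Shift each rail toward the table center by the ball radius, so
    reflections use the ball's surface rather than its center."""
    return left + r, right - r, top + r, bottom - r

def reflect_x(x, wall_x):
    """Mirror an x-coordinate across a vertical rail."""
    return 2 * wall_x - x

left, right, top, bottom = inset_walls(0.0, 50.0, 0.0, 100.0)
# A ball center headed past the right rail now bounces at right = 48.875.
print(reflect_x(49.5, right))  # 48.25
```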

This week, I also began making preparations for the verification phase. I wrote code to project the ball’s predicted location onto the screen. Our eventual goal later on is to verify the predicted location with the actual location that the cue ball ends up hitting another ball. We will measure this difference and this should give us an idea of how accurate our prediction model is. This can be seen in the photo below: 

In total, the physics subsystem needs to verify two things. Firstly, we have to make sure that the distance between the predicted ball center and the actual center of the cue ball when it hits the first ball is less than 0.2 inches. Secondly, we have to verify that the angular difference between the predicted and actual trajectories is less than 2 degrees. To perform these two verifications, we intend to execute 10 shots and record them. We will then verify from the recordings the actual location at which the cue ball hits one of the other balls, manually measure the difference in the video, and scale it according to the table's size to get the actual distance error. To verify the angular difference, we will use ball tracking software that shows the difference in the angle of the trajectories. We will calculate the average error across the 10 shots.
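The angular-difference metric can be sketched as follows (a hypothetical helper, not our verification code):

```python
import math

def angle_error_deg(pred_vec, actual_vec):
    """Unsigned angle in degrees between predicted and actual
    trajectory direction vectors."""
    a = math.atan2(pred_vec[1], pred_vec[0])
    b = math.atan2(actual_vec[1], actual_vec[0])
    d = abs(math.degrees(a - b)) % 360
    return min(d, 360 - d)  # fold into [0, 180]

# Hypothetical directions from one recorded shot.
print(angle_error_deg((1.0, 0.0), (0.999, 0.03)))
```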

This week, as our trajectory output is not yet fully stable, I only visually inspected that the ball moved along the correct trajectory by taking a couple of shots. We also took some videos to verify visually that the ball hits at least close to the location we predicted. Once we improve the stability of our ball and trajectory outputs, we intend to conduct a second, more accurate round of verification testing as described in the paragraph above.

Having completed the first round of visual testing this week, I am currently on track for what I wanted to accomplish. I am still not fully satisfied with the stability of the ball prediction trajectory line, but our team is working very hard to find ways to make it as stable as possible. During our team meeting tomorrow, we will focus our energy on verifying the accuracy of our predictions, as well as ensuring that our predictions appear at the correct location. This will require precise alignment between our projector and the camera.

Tjun Jet’s Status Report for March 30, 2024

This week, we managed to successfully output the line of projection on the software in real time. This took up the majority of the week, but we finally managed to get the output line shown on the canvas. 

Many errors arose when I was trying to project the output line on the screen. I had not considered many edge cases when testing the projection on different still images, and these led to a lot of issues in real time. One example was accounting for the directionality of the line projection: given a line through two points, we couldn't tell which wall the ball would bounce off without knowing which direction the cue stick was pointing. Thus, we needed a method to find the point on the cue stick closest to the cue ball. By setting the line's dx and dy direction from that point toward the ball, we could figure out which way the projection should go, giving us the right projection. We can see an example image below:
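The directionality fix can be sketched roughly like this (names and coordinates are illustrative):

```python
import math

def shot_direction(stick_p1, stick_p2, cue_ball):
    """Return a unit (dx, dy) pointing from the cue tip toward the cue ball.
    The tip is taken to be whichever stick endpoint is closer to the ball."""
    d1 = math.dist(stick_p1, cue_ball)
    d2 = math.dist(stick_p2, cue_ball)
    tip = stick_p1 if d1 < d2 else stick_p2
    dx, dy = cue_ball[0] - tip[0], cue_ball[1] - tip[1]
    n = math.hypot(dx, dy)
    return dx / n, dy / n

# A stick lying to the left of the cue ball, pointing right: direction is +x.
print(shot_direction((0, 5), (8, 5), (10, 5)))  # (1.0, 0.0)
```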

The next step after this is to show this projection on the projector, which we will do in our group meeting tomorrow. In order to project the output line, we will plot it on an empty black canvas. After some testing, we have realized that this is the best way for us to achieve a good line projection. Here is a picture of how we intend the projected line to look (in the picture below, we haven't projected the actual output line).

I am currently on track for what I wanted to accomplish. Over the week, I was definitely a little behind, as I did not get the output line done in time. However, we managed to solve quite a few edge cases in the cue stick's direction. Currently, our team's biggest goal is to make sure that all our subsystems work flawlessly together. We will also need to divide and conquer to solve the computer vision issues, making sure that we accurately detect walls, balls, cue balls, and especially cue sticks in every frame.

Tjun Jet’s Status Report for March 23, 2024

This week, our team continued integration efforts across the different subsystems. Our integration process was quite smooth at the beginning, with the computer vision working for the various subsystems, but we ran into several edge cases in the physics implementation that required re-implementing some of our functions, which I will go through in further detail in this report. Furthermore, the wall detection did not seem very accurate, so I helped implement a new method to improve its accuracy. Finally, as we realized that we don't have guaranteed detections of balls, walls, and pockets in every frame, we decided to take a median over 30 frames to fix the points, which I will also elaborate on later.

Firstly, we realized that our wall reflection algorithm was not very accurate, as it only accounted for walls that are completely horizontal or vertical. From our detection model, however, we realized that this would not always be the case, especially if the table is slanted relative to the camera. Hence, Debrina and I reimplemented the function that computes the reflected points off a wall, using geometry that accounts for slanted walls as well. This has not been fully completed yet; we will hopefully complete it during our meeting tomorrow.
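A sketch of the geometry involved, reflecting a point across an arbitrary (possibly slanted) wall line; this is the generic construction, not our exact function:

```python
def reflect_point(p, a, b):
    """Reflect point p across the infinite line through wall endpoints
    a and b. Works for slanted rails, not just horizontal/vertical ones."""
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    # Project (p - a) onto the wall direction to find the foot of the
    # perpendicular, then mirror p across that foot.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy
    return 2 * fx - px, 2 * fy - py

# Reflect (0, 2) across the 45-degree line y = x: expect (2, 0).
print(reflect_point((0, 2), (0, 0), (1, 1)))  # (2.0, 0.0)
```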

Next, we realized that our wall detections were not completely accurate relative to the walls of the pool table, so I explored a different method of wall detection. This method first masks the pool table by its green felt color. Next, it runs Canny edge detection to output the edges, and then filters for the most horizontal and vertical lines. This returned the four walls more accurately, as we can see in the photo below.

However, one problem we faced was that the detections were not present in every frame, which can be seen in the photo below. As the walls, pockets, and (stationary) balls do not change throughout the course of the game, we decided to include a "calibrate" button to perform calibration. Calibration takes the median detection over a 30-frame window. Since the walls are detected in most of the frames, this lets us fix the wall, pocket, and ball positions whenever the scene is static, instead of re-detecting them every frame.
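The calibration idea can be sketched like this, using a single wall coordinate and made-up noisy detections:

```python
import numpy as np

# Hypothetical per-frame detections of one wall's y-coordinate over a
# 30-frame calibration window; NaN rows mark frames with no detection.
rng = np.random.default_rng(0)
true_wall_y = 120.0
frames = true_wall_y + rng.normal(0, 0.5, size=30)  # 30 noisy detections
frames[[3, 17]] = np.nan                            # two missed frames

# Calibration: take the median of the detections that were present,
# which is robust to the occasional miss or outlier.
calibrated = np.nanmedian(frames)
print(round(calibrated, 1))
```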

I am currently slightly behind schedule on what I wanted to accomplish. I was expecting the physics calculations to work and the output line to project flawlessly, as I had gotten it to work on one image. However, I did not account for many edge cases in the cue stick's orientation and the direction it points in. Our team will work together to solve these issues over the next few days. Our team has been working really well together, consulting each other while ideating and solving each other's problems. We're hoping to continue doing this and get the output line projected by Wednesday this week, so that we can move forward with implementing our spin physics.

 

Tjun Jet’s Status Report for March 16, 2024

This week, I focused on the ethics assignment, reformatted the entire physics model to fit our new code format, and started integrating our entire code pipeline. Most of the week was spent revamping our code and making sure that we all conformed to our specific code format. Although this seemed like busywork, it was a huge step for us, as it will go a long way toward ensuring a more seamless integration process when we combine our code.

In my previous team status report, I talked about devising a software API framework for us to conform to. It turned out to be a pretty good framework for us over the past week, as we moved all our current implementations to follow this framework. The biggest learning point for me is that if we had done this earlier, we would’ve saved a lot of time and trouble revamping our entire codebase. This is an extremely important takeaway for me as I go on to become a full-fledged engineer in the future. With that said, a good portion of the week was spent moving over my originally implemented subsystem to follow our new format. I have tested the framework with a single image, and it is working fine.

This also led to my first steps toward integrating our code. Using the output from Debrina's computer vision model, I was able to parse the images from the model and put them through the physics model, which returns an output_line to project. Tomorrow, our team will meet to try to integrate this on a real video feed. We will most likely face a few issues in alignment and integration, but we will try our best to resolve any issues that arise.

Another big part of this week's progress was the ethics assignment. I did not expect to spend so much time on it, but I ended up spending around 5-6 hours. I found researching global ethical issues and how technology intertwines with politics particularly interesting. Furthermore, it was pretty difficult to think of ethical issues that could arise with our eight-ball pool project, so it took a lot of time to reflect on and understand our project at a deeper level in order to consider its global, cultural, and social implications. As we write code to make our project successful, this assignment was essential in ensuring that we do not forget the importance of engineering ethics in any project. In our meeting tomorrow, our team will also begin to discuss Step 3 of the ethics assignment.

We are a few days behind schedule on what we wanted to accomplish. Initially, we wanted to start the integration efforts on Friday. However, we felt that we were not ready to integrate our assigned parts, so we decided to convert our Friday meeting into a work session and push the integration back to tomorrow instead. Our team members feel more ready to continue with the integration now, and we are excited to get our first taste of integrating all our subsystems together tomorrow.

Tjun Jet’s Status Report for March 9, 2024

This week, I created an API for the team's code, worked on various parts of the design report, and continued working on the physics model. Most of the first week was spent writing the design report, which I will go through in detail in the next few sections. As planned in our schedule, my group members and I didn't do any work over break.

I mainly focused on two areas of the report – Architecture and Principles of Operation, and Project Management. I spent a decent amount of time on the Architecture and Principles of Operation, and it was crucial for us to present both our hardware and software designs. Thus, I spent quite a bit of time illustrating the system block diagram, mechanical structure of our pool table, and placement of items on our cue stick. Images of those are shown below. 


Furthermore, as our codebase gets larger, our code is starting to get a bit convoluted and messy. This might eventually lead to problems when we try to integrate our code. Thus, I spent a good amount of time this week brainstorming a software API framework for our team to conform to. Our API framework follows a model-view-controller architecture. Our high-level model keeps track of items that are constantly changing, such as an array of ball detections, the coordinates of the cue, the coordinates of the cue ball, and the IMU information. Each of us will program various subsystems, but most importantly, we are constantly receiving and updating information from our model. The drawing of the predicted trajectory will be done by "view" functions, which do not update the state of the model and are only used for drawing.
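A minimal sketch of this architecture (the class and field names are illustrative, not our actual API):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GameModel:
    """Shared model: the single source of truth the subsystems update."""
    balls: list = field(default_factory=list)     # (x, y, r) detections
    cue_stick: Optional[tuple] = None             # endpoints of the stick
    cue_ball: Optional[tuple] = None              # (x, y) of the cue ball
    imu_heading: Optional[float] = None           # latest IMU reading

    def update_balls(self, detections):
        """Controller-style update: subsystems write into the model."""
        self.balls = list(detections)

def draw_trajectory(model):
    """View-style function: reads the model, never mutates it."""
    return f"drawing {len(model.balls)} balls"

m = GameModel()
m.update_balls([(10, 20, 1.1), (30, 40, 1.1)])
print(draw_trajectory(m))  # drawing 2 balls
```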

I am currently on schedule for whatever I wanted to accomplish. The physics model’s implementation has been completed, and we are looking to integrate all our components by this week. We will do a full live-video feed process by the end of this week, which makes us slightly ahead of schedule. Then, we will start integrating the IMU data and possibly look into more things like spin detection. But before we go into that, I believe we will spend a lot of time this week calibrating and tweaking the positions of the cameras, projectors, and the pool table to make sure everything looks aligned and meet a portion of our verification metrics.

Tjun Jet’s Status Report for February 24, 2024

This week, I implemented the physics of the ball collisions and helped to build the pool table structure. Most of the week was spent reading and understanding the different physics libraries, and even playing pool games to see if the different calculations were right. I will go through some of the math involved in my physics implementation that we are using. 

Using the principles of conservation of momentum and conservation of kinetic energy, we know that after the cue ball strikes a stationary ball, the two balls will always move away at 90 degrees from each other. This assumes that the balls have equal mass and that no energy is lost to sound and heat, meaning it is an elastic collision. It turns out that most of the time the effects of sound and heat are negligible: we took ten videos that verified the balls moved away at 90 degrees. With that in mind, we used this to code the physics trajectory model.
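A small sketch of this result, splitting the cue ball's velocity into the component along the line of centers (transferred to the object ball) and the perpendicular remainder (kept by the cue ball); the function name is illustrative:

```python
import numpy as np

def collide(cue_vel, cue_center, ball_center):
    """Post-impact velocities for an equal-mass, frictionless (elastic)
    collision with a stationary object ball. The object ball leaves along
    the line of centers; the cue ball keeps the perpendicular component."""
    n = np.asarray(ball_center, float) - np.asarray(cue_center, float)
    n /= np.linalg.norm(n)            # unit vector along the line of centers
    v = np.asarray(cue_vel, float)
    v_ball = (v @ n) * n              # component transferred to object ball
    v_cue = v - v_ball                # component the cue ball keeps
    return v_cue, v_ball

v_cue, v_ball = collide((1.0, 0.0), (0.0, 0.0), (1.0, 1.0))
# The two outgoing directions are perpendicular: their dot product is ~0.
print(np.dot(v_cue, v_ball))
```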

 

Our group also met yesterday and spent quite a bit of time assembling the pool table. Initially, we realized that our shelves were slightly smaller than the pool table, but we eventually found a way to unscrew some parts of the table and assembled it onto the shelf. We also placed both a phone camera and the projector on top, and took some videos to test our detection models. The arrival of these parts has been a significant step toward achieving our goal.

I am slightly behind schedule on what I wanted to accomplish. I managed to implement the mathematics of the ball collisions, but I have not managed to show the outputs in OpenCV. This is mainly because the ball and cue detection models we have now are not very accurate, so it is difficult to simulate the physics trajectory with the ball. I spent a lot of time trying to improve the accuracy of these models and have not managed to render the trajectory images. However, I plan to have a simulation image by tomorrow so that I can at least show that the trajectory prediction is correct.

Here is an image of our completed frame.

 

 

Tjun Jet’s Status Report for February 17, 2024

This week I focused on implementing the ball and cue stick detections using contour detection in OpenCV. I also created some functions to extrapolate lines from the cue stick and output their reflections off the walls of the pool table.

Using cv2.findContours() and cv2.minEnclosingCircle(), I managed to get a good detection of the balls and the cue stick. I then wrote a function to find the center and radius of each contour, and used these to distinguish between the cue stick and the balls. The accuracy of my contour detection was not the best, but it was enough for preliminary testing of my physics model. Some images can be found below.

Debrina managed to successfully output the wall detections of the pool table using cv2.HoughLines. Using her wall detections and an extrapolation function that I wrote, I found the intersection point between the cue stick's trajectory and the walls, and output the predicted trajectory reflected off the pool table walls. The following image shows how our reflections work.

In terms of ball collisions, I am currently testing two models: a Git package called PoolTool (https://github.com/ekiefl/pooltool), and a reference project called "Pool Projection" by a team from Cornell University. I have yet to output a good predicted trajectory from the ball collision model, but once I can do so, most of the remaining work will be fine-tuning the model's accuracy, and I can then present the proper outputs for Andrew to show on the projector.

I am currently on schedule and accomplished what I wanted to this week. The most crucial task next week is to get the ball collision model working. Furthermore, I am currently only testing my model on a single image. Once I get everything working, I aim to move to real-time video and tweak my code and parameters so that the physics simulation runs on the video. I also want to start testing the IMU to see if we can get some useful compass data to improve the accuracy of our cue stick detection.

Tjun Jet’s Status Report for February 10, 2024

This week I mainly focused on setting up a Flask server to stream our edge detection algorithm, and on researching physics simulation methods to predict the trajectory of cue ball collisions.

We are using a Flask server because we want to communicate wirelessly between our cameras and projectors. To do this, we send a byte stream of image data to our computer and display it. This shows the live camera feed in our web application and also helps with debugging, since we can view the images on our computer. I successfully set up the server to stream wirelessly between different computers, and the video feed looks very smooth, with almost no lag.
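A framework-agnostic sketch of the multipart framing this kind of MJPEG stream uses; in a Flask app, a generator like this would typically be wrapped in Response(gen(), mimetype='multipart/x-mixed-replace; boundary=frame'). The boundary name and the fake JPEG bytes below are illustrative:

```python
def mjpeg_stream(jpeg_frames):
    """Yield each JPEG byte string framed as one part of a
    multipart/x-mixed-replace (MJPEG) byte stream."""
    for jpeg in jpeg_frames:
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg + b"\r\n")

# Fake frames stand in for real encoded camera frames
# (e.g. the bytes from cv2.imencode('.jpg', frame)).
fake = [b"\xff\xd8frame1\xff\xd9", b"\xff\xd8frame2\xff\xd9"]
chunks = list(mjpeg_stream(fake))
print(len(chunks))  # 2
```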

In terms of physics simulations, I referenced an online Git package called PoolTool: https://github.com/ekiefl/pooltool. It has some good resources on calculating ball trajectories, which I hope to reference and replicate for our use case. Once we are able to do so, it will be a good segue into integrating the object detection algorithms with the physics trajectories.

I am currently on schedule and accomplished what I wanted to this week. Next week, I aim to implement the physics trajectory calculations in code. I intend to pass in hard-coded coordinates of circles to represent pool balls, as well as a hard-coded line that intersects the cue ball. If we manage to get the object detection done, I want to use the object detection predictions as input to the algorithm and see if we can get a real-time prediction of the physics trajectory. If that is not possible, I hope to plot drawings of those images and see if I can output the trajectory.