Bhavya’s Status Report for 4/20/24

I have made as many changes to the detection algorithm as I think I can given our time constraints. When running our project in different lighting conditions, a few changes will have to be made to the detection to best fit that particular setting. These include adjusting the minimum contour size used for mapping and the initial bounding box drawn when sampling the car's colors. I could have made the minimum contour size dynamic, as I did for choosing the number of colors used to represent the car, but our group is more focused on making the stream more watchable and the switching better.

One of the ideas we discussed to make the switching better is to use prediction to know where the car is headed. If we knew the car's direction, maybe we could know where it was going and switch preemptively. So I implemented Kalman filters, which let me track the car's trajectory and constantly update estimates of its velocity and acceleration. I then output the result as a line of dots that shows the car's predicted path based on the past few frames.
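
To make this concrete, here is a minimal sketch of this kind of setup, assuming OpenCV's cv2.KalmanFilter with a constant-acceleration motion model; the matrices, noise values, and helper names are illustrative, not our exact implementation:

    import cv2
    import numpy as np

    dt = 1.0  # time step of one frame
    kf = cv2.KalmanFilter(6, 2)  # state [x, y, vx, vy, ax, ay], measurement [x, y]
    kf.transitionMatrix = np.array([
        [1, 0, dt, 0, 0.5 * dt**2, 0],
        [0, 1, 0, dt, 0, 0.5 * dt**2],
        [0, 0, 1, 0, dt, 0],
        [0, 0, 0, 1, 0, dt],
        [0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 1],
    ], dtype=np.float32)
    kf.measurementMatrix = np.eye(2, 6, dtype=np.float32)
    kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-3
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

    def update(cx, cy):
        # Called once per frame with the detected car center.
        kf.predict()
        kf.correct(np.array([[cx], [cy]], dtype=np.float32))

    def predicted_path(steps=10):
        # Roll the motion model forward on a copy of the state to get
        # the dotted future path, without disturbing the filter itself.
        state = kf.statePost.copy()
        pts = []
        for _ in range(steps):
            state = kf.transitionMatrix @ state
            pts.append((int(state[0, 0]), int(state[1, 0])))
        return pts

Drawing the returned points as small circles on each frame gives the dotted predicted path described above.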

Unfortunately, since we do not know where the next camera is going to be, this new addition did not prove useful for switching. But it could still help with adjusting the camera pan preemptively, so that the camera does not lag behind the car and instead keeps the car in the center of the frame.

We still have to fully integrate this feature.

Bhavya’s Status Report for 4/6/24

In preparation for the interim demo, there was a lot of integration testing. We constantly had to test on different track configurations, different lighting conditions, and different speeds of the car. I continued to refine the detection and tracking algorithms based on our tests and the requirements we had for the demo. A key flaw in our system seems to be the lack of memory used to predict the car's position. I am currently working on integrating Kalman filters into the tracking algorithm so that the car's past trajectory can be used to localize it more reliably.

Bhavya’s Status Report for 3/30/24

I spent this week refining my detection algorithm. After moving back to simple color and edge detection (from ML-based models, due to latency issues), there were challenging edge cases to tackle, such as other similarly colored objects in the feed and varying lighting conditions on the track. Here are the strategies I used to partially solve these issues:

  • The program first requires you to select the car (by letting you draw a box around it) so that it can perform a color analysis.
  • There were two ways I could do the color analysis, detailed below. I have implemented both and am still running tests to see which one performs better.
    • Select the top few colors in the box, or
    • Select the top color and then look for other similar shades by restricting the range of colors around the most prominent one. I thought this method would help in particular with varying lighting conditions on the track, where different shades become more prominent (a sketch of this masking step follows this list).
  • Once the top colors are selected, I also need to decide how many of them best represent the car: too many colors make the masking of the frame useless, as it captures a lot of the background, but sometimes a few colors represent the car better than just one. I let the detection algorithm decide by observing how much of the car it could detect, without picking up additional environment, for different numbers of colors.
  • Once the camera detects the car, the detection algorithm is only permitted to search for the car in its immediate neighborhood in the next run.
  • Tuning the minimum threshold that qualifies as a detection so that noise is not captured (changing the minimum contour size that the edge detection provides).
  • Restricting the speed at which the detected box can grow (this prevents noise from affecting it immediately).
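
As a rough illustration of the masking-plus-contour step mentioned above (the hue tolerance, saturation/value floors, and MIN_CONTOUR_AREA below are placeholders that we tune per lighting setup, not our final values):

    import cv2
    import numpy as np

    MIN_CONTOUR_AREA = 150  # placeholder; tuned per lighting setup

    def dominant_hue(frame, box):
        # Histogram the hue channel inside the user-drawn box and take
        # the peak as the car's most prominent color.
        x, y, w, h = box
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
        return int(np.argmax(hist))

    def detect_car(frame, hue, tol=10):
        # Mask only the restricted band of shades around the top color,
        # then keep contours large enough not to be noise.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        lower = np.array([max(hue - tol, 0), 60, 60], dtype=np.uint8)
        upper = np.array([min(hue + tol, 179), 255, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        contours = [c for c in contours if cv2.contourArea(c) > MIN_CONTOUR_AREA]
        if not contours:
            return None  # no detection this frame
        return cv2.boundingRect(max(contours, key=cv2.contourArea))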

Overall, I can currently track the car quite well in stable lighting conditions, even with some similarly colored obstacles present. However, the system is not completely robust and will require more testing in the final stretch of the project.

Along with this, we also performed some integration testing; those details are in the team report.

Bhavya’s Status Report for 3/16/24

I finished creating the detection algorithm. Instead of YOLOv4, I used an R-CNN. R-CNN typically provides better accuracy by employing region-based convolutional neural networks, which allow for more precise localization of objects in images, albeit at the cost of increased computational complexity during inference. The bounding boxes I was able to create were highly accurate but took a long time to produce. I ran tests for the detection using static images of the slot car from various angles and distances, then integrated the pre-processing, detection, and tracking and ran tests on a video of the slot car. Currently, the detection algorithm might be too slow, and the preprocessing needs to be tuned after testing which configuration works best for latency. I also have actual footage of the toy car on the track that I will be testing my algorithm on now.
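
As an illustration of the general approach (not our exact model or weights), a detection pass with torchvision's pretrained Faster R-CNN looks roughly like this:

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Pretrained Faster R-CNN as a stand-in for our detector.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def detect(image_path, score_thresh=0.7):
        img = to_tensor(Image.open(image_path).convert("RGB"))
        with torch.no_grad():
            out = model([img])[0]
        keep = out["scores"] > score_thresh
        # Boxes come back as [x1, y1, x2, y2] in pixel coordinates.
        return out["boxes"][keep], out["labels"][keep], out["scores"][keep]

The region-proposal stage is what makes the boxes precise, and it is also where the inference latency we are fighting comes from.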

For next week, I will be integrating the code with the live stream offered by our cameras and relaying panning instructions to the motors. Further testing on what type of tracker/detection will be best for our use case will be required.

Given the amount of fine-tuning our system will require for the live stream to be watchable, I think we are slightly behind schedule. Sufficient testing in the following week should help put us back on track.

Bhavya’s Status Report for 03/09/2024

After switching to the F1 track camera idea (following the design presentation), the team had to scramble to establish and work on this new concept.

I started the week off by writing the object tracking algorithm using GOTURN. I also tested several image preprocessing strategies that could reduce the latency of the tracking system. With that, I was able to achieve a preliminary tracking algorithm.
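
For reference, the tracking loop has roughly this shape, assuming opencv-contrib's GOTURN tracker with its model files (goturn.prototxt / goturn.caffemodel) available in the working directory; the video file name is a placeholder:

    import cv2

    cap = cv2.VideoCapture("track_footage.mp4")  # placeholder file name
    ok, frame = cap.read()
    bbox = cv2.selectROI("init", frame)  # draw a box around the car once
    tracker = cv2.TrackerGOTURN_create()
    tracker.init(frame, bbox)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, bbox = tracker.update(frame)
        if found:
            x, y, w, h = map(int, bbox)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break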

The next major task was the design document submission. I was in charge of the trade studies on computer vision strategies, implementation details of detection and tracking systems, and the outline of the testing and verification in accordance with the use case requirements. We split up the work and conducted peer reviews before finalizing the document.

Over the break, I have been working on the detection algorithm, integrating it with the tracking algorithm, and testing it on the actual slot racing car that we plan to use for the demonstration. I hope to integrate it with the camera to have a one-camera system ready by Wednesday and start arranging the multi-camera system by the end of this week.

Bhavya’s Status Report for 02/24/2024

After correspondence with Professor Shamos (who has experience in measuring the game of pool), in which I explained the concept of the double hit detection project, I learned that our project would not be feasible as planned due to the cameras' low shutter speeds.

This has put our work behind schedule. But the idea we pivoted to still involves some tracking, so not all of my work has gone to waste; I simply have to adapt the pool tracking to car tracking now. I hope to have it done by Wednesday to show off a working model.

I helped flesh out the new idea, select materials, and set the use case requirements that we aim to fulfill in this new project.

Bhavya’s Status Report for 2/17/24

After feedback from our proposal presentation, we pivoted our product from a partial refereeing system to a refereeing system solely for hard-to-detect fouls, namely the double hit and the push shot. There were several brainstorming sessions to figure out how we would tackle this new idea. Given that I am in charge of OpenCV, I still had similar tasks to accomplish. I learned more about the Hough circles detection method and tested my code on some pool footage to see how I could track the balls. Currently, it is capable of detecting pool balls in each individual frame.

Since we were focused only on these kinds of fouls, I also went to the UC pool tables to film footage of double-hit fouls from different camera angles. Because our system depends on tracking fast collisions, I had to know what sort of footage I would be dealing with. I also spoke to other players about their experiences with committing and noticing these kinds of fouls. On the weekend, I helped Thomas map out the solution and implementation process to make the slides for our design presentation.
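
For context, the per-frame ball detection pass looks roughly like this; the radius bounds, accumulator thresholds, and file name below are placeholders tuned to the footage, not our final values:

    import cv2
    import numpy as np

    frame = cv2.imread("pool_frame.png")  # placeholder path
    gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
    # Hough gradient transform finds circular edges at ball-like radii.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=30, minRadius=8, maxRadius=25)
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            cv2.circle(frame, (x, y), r, (0, 255, 0), 2)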

Given the late pivot of ideas, we are definitely behind our schedule. However, we are still spending time combing through our design presentation to nail down implementation details.

In the coming week, I will further develop my code to detect pool sticks, add collision testing, and try to get a lot more footage to test my code, given that we will have the pool table and the camera available.

Bhavya’s Status Report for 2/10/24

This was the proposal presentation week. Since we still had a few loose ends in the idea, I helped define and tighten the scope and use case of our project. I also helped Jae a little with the slides for our presentation and made the Gantt chart to set the timeline for our work. Given that I will be tackling camera-based detection, my tasks for this week were:

  • Selecting a camera stand that we could place at a reasonable distance from the pool table while still having an overhead view of the game. I have shortlisted a few options.
  • Selecting a camera that fits our requirements (capturing footage at a sufficient rate and quality to allow for accurate edge detection of the stick and the balls on a pool table)
  • Doing some research on OpenCV. I read up on how the Canny edge detection process works and looked at other similar projects that have handled tracking on a pool table using OpenCV (a minimal example of the kind of pipeline I experimented with is sketched after this list).
  • Reading up on pool rules and watching footage to familiarize myself with the types of edge cases we could face.
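
A minimal example of the kind of Canny pipeline I experimented with while reading up; the thresholds and file names are illustrative and would need tuning for real table footage:

    import cv2

    frame = cv2.imread("table_frame.png")  # placeholder path
    # Blur first so small texture does not produce spurious edges.
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    edges = cv2.Canny(gray, threshold1=50, threshold2=150)
    cv2.imwrite("edges.png", edges)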