Bhavya’s Status Report for 4/27/24

Given that I was in charge of the final presentation, much of my time went into working on the slides. I made the entire slide deck and spent time practicing my delivery. Since the presentation was about testing and verification, I worked with Jae to gather as much data as we could to provide reliable statistics on how our use-case requirements were being met: both coming up with the testing conditions and running the tests themselves.

After giving the presentation on Wednesday these are my ideas on how I could have improved it:

  1. We have a working model. I should have leveraged this to show many more pictures, diagrams, and videos in the presentation. Our project is OpenCV-based, and I think showing the result that each component produces in the computer vision pipeline would have been informative for the viewers.
  2. I should have kept my explanations more concise and stayed on time.
  3. I should have kept it a little more casual and upbeat to engage the audience.

Other than the presentation: I also came up with the idea of pre-switching based on the ordering of the cameras, which was the only way we could make the viewing experience more robust after we hit the wall on our detection limits. Since the cameras cannot detect beyond a certain distance, pre-switching lets the viewer see the car coming into the frame rather than appearing suddenly. I think this will greatly improve the watching experience, and I have been helping refine the idea.
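The pre-switching idea can be sketched in a few lines. This is only an illustration of the logic, not our actual code: the camera names, the fixed ordering, and the exit threshold are all invented for the example.

```python
# Hypothetical sketch of pre-switching: cameras are stored in track order,
# and once the car nears the exit edge of the active camera's frame, the
# next camera in the ordering is activated early so the car is seen coming
# into its view instead of appearing suddenly. Names/thresholds are illustrative.

CAMERA_ORDER = ["cam0", "cam1", "cam2", "cam3"]  # fixed order around the circuit
EXIT_THRESHOLD = 0.85  # fraction of frame width after which we pre-switch

def next_camera(active: str) -> str:
    """Return the camera that follows `active` in track order (wraps around)."""
    i = CAMERA_ORDER.index(active)
    return CAMERA_ORDER[(i + 1) % len(CAMERA_ORDER)]

def select_camera(active: str, car_x: float, frame_width: float) -> str:
    """Pre-switch to the next camera once the car passes the exit threshold."""
    if car_x / frame_width >= EXIT_THRESHOLD:
        return next_camera(active)
    return active
```

Because the ordering wraps around, the last camera hands off back to the first, matching a closed circuit.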

Team Status Report for 3/23/24

Everything seems to be on track as far as implementation goes, although progress has still been slow. A few problems with Git and transferring large files hindered collaboration; by using Git LFS we seem to have worked through that issue.

Second, when testing the car's speed we felt it might be too quick for our detection and tracking. We saw two options: run the car at its full speed, or scale real-world F1 conditions down to the toy track.
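For the scaled-down option, a quick back-of-the-envelope calculation shows what "scaled F1 speed" would mean. The 1:43 track scale and the ~300 km/h reference speed below are assumptions for illustration, not measured values from our setup.

```python
# Back-of-the-envelope sketch of scaling real-world F1 speed to the toy track.
# SCALE and F1_SPEED_KMH are assumed values, not measurements from our system.

SCALE = 1 / 43          # assumed slot-car track scale (1:43 is a common size)
F1_SPEED_KMH = 300.0    # assumed representative F1 straight-line speed

scaled_speed_kmh = F1_SPEED_KMH * SCALE          # ~7 km/h at track scale
scaled_speed_ms = scaled_speed_kmh * 1000 / 3600  # convert km/h to m/s

print(f"{scaled_speed_kmh:.2f} km/h = {scaled_speed_ms:.2f} m/s")
```

Under these assumptions a scale-accurate car would move only a couple of meters per second, much slower than the slot car at full throttle, which is why the choice between the two options matters for tracking.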

Since the current detection (YOLO model) and tracking (GOTURN model) seemed too slow for the system at max speed, we decided to pivot to a hybrid between purely color-based tracking and GOTURN. This seems able to keep up with the toy-set speeds.
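The cheap half of that hybrid, locating the car by color, can be sketched as a mask-and-centroid step. This is a minimal illustration using NumPy only; the RGB bounds are invented, and in the real system the (slower) GOTURN tracker would take over whenever the color match fails.

```python
import numpy as np

# Minimal sketch of the color-based half of the hybrid tracker: threshold the
# frame to a binary mask of "car-colored" pixels and take their centroid.
# LOWER/UPPER are assumed RGB bounds, not values tuned to our slot car.

LOWER = np.array([150, 0, 0])    # assumed lower RGB bound for the car's color
UPPER = np.array([255, 80, 80])  # assumed upper RGB bound

def color_centroid(frame: np.ndarray):
    """Return the (row, col) centroid of in-range pixels, or None if none match."""
    mask = np.all((frame >= LOWER) & (frame <= UPPER), axis=-1)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # color match failed; caller would fall back to GOTURN
    return float(ys.mean()), float(xs.mean())
```

The appeal of this step is speed: it is a few vectorized array operations per frame, so it keeps up with the car even at full throttle.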

There has been progress on the switching algorithms. We still need to receive the connecting pieces for the motors and the cameras to finish the stand.

We aim to have a working demo by Friday and spend the rest of the semester on integrating and fine-tuning the parameters.

Team Status Report for 02/24/2024

After fully defining our project idea for detecting double hits in a game of 8-ball pool, we went ahead and filmed shots/fouls to get a better understanding of how these shots work. Our research, along with feedback from a professor who has experience detecting these types of fouls and with other pool-based projects, concluded that it would be extremely challenging to detect double hits with only 240fps cameras. Given that our budget does not allow us to purchase cameras with higher frame rates, we have decided to shift our idea. Even though we had some contingency plans to assist our prediction system (accompanying the cameras), the accuracy we were hoping for would simply not have been possible.

Instead, we have decided to proceed with an earlier idea: a multi-camera system for tracking a car around a circuit. Since we cannot test on a real car, we will use a slot car track. Cameras placed at various points on the track will pan as they detect and film the slot car. Our system will also stitch together the footage, using the feed from the camera closest to the car at any given time to output a seamless live feed. The idea comes from trying to automate the filming done by trackside cameras in circuit races, e.g., F1.
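The "closest camera wins" stitching rule can be sketched in a few lines. The camera names and trackside positions below are made up for illustration; the real system would use the measured positions of our mounted cameras.

```python
import math

# Illustrative sketch of the stitching rule: each camera has a fixed trackside
# position, and the output feed uses whichever camera is currently nearest to
# the car. Camera names and coordinates are hypothetical.

CAMERAS = {
    "turn1": (0.0, 0.0),
    "back_straight": (10.0, 0.0),
    "final_corner": (10.0, 8.0),
}

def closest_camera(car_pos):
    """Pick the camera with the minimum Euclidean distance to the car."""
    return min(CAMERAS, key=lambda name: math.dist(CAMERAS[name], car_pos))
```

In practice the switch decision would also factor in the pre-switching threshold described in the later report, so the handoff happens slightly before the geometric midpoint.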

Due to the change in plans, we are significantly behind schedule. We have updated the Gantt chart to reflect our new idea, ordered the track, and redefined all the design presentation goals. To get anywhere close to being on track, we are working on having a working model before Wednesday.

To do so, we will work primarily on the tracking algorithm and on assessing the racetrack parameters to understand the speed and camera positions needed to best represent real-world scenarios.