Implementation is broadly on track, although progress has been slower than planned. Collaboration was hampered by issues with Git when transferring large files; adopting Git LFS appears to have resolved this.
Second, after testing the car's speed, we suspect it may be too fast for our detection and tracking pipeline. We saw two options: run the car at its full speed, or scale real-world F1 conditions down to the toy track.
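To illustrate the scaling option, a minimal sketch of the arithmetic is below; the F1 reference speed and the track scale are placeholder assumptions for illustration, not measured values from our setup.

```python
# Rough scaling sketch: all numbers below are assumptions, not measurements.
REAL_F1_SPEED_KMH = 300.0   # typical F1 race speed (assumed reference value)
TRACK_SCALE = 1 / 43        # assumed toy-track scale, e.g. 1:43 slot cars

# Scale the real-world speed down to the toy track.
scaled_speed_kmh = REAL_F1_SPEED_KMH * TRACK_SCALE
scaled_speed_ms = scaled_speed_kmh * 1000 / 3600

print(f"Scaled target speed: {scaled_speed_kmh:.2f} km/h "
      f"({scaled_speed_ms:.2f} m/s on the toy track)")
```

Under these assumed numbers, the scaled-down target is roughly 7 km/h (about 2 m/s), which gives a sense of how much slower than full throttle the car would need to run.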
Since the current detection (a YOLO model) and tracking (a GOTURN model) appeared too slow to keep up with the system at maximum speed, we decided to pivot to a hybrid of purely color-based tracking and GOTURN. This hybrid seems able to keep up with the toy-set speeds.
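A minimal sketch of how such a hybrid could be wired up with OpenCV is below; this is not our final implementation, and the HSV color range, camera index, and re-seed interval are assumptions. The idea it shows: run the cheap color pass on every frame, periodically re-seed GOTURN from the color estimate, and only fall back to GOTURN's update when the color mask loses the car, so the expensive tracker stays off the per-frame critical path.

```python
import cv2
import numpy as np

# Hypothetical HSV range for the car's livery; would need tuning for the real car.
HSV_LO = np.array([0, 120, 120])
HSV_HI = np.array([10, 255, 255])


def color_box(frame):
    """Fast color-based localisation: threshold in HSV, take the largest contour."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LO, HSV_HI)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))


cap = cv2.VideoCapture(0)   # camera index is an assumption
tracker = None
REINIT_EVERY = 30           # frames between GOTURN re-seeds (assumed value)
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break

    box = color_box(frame)  # cheap color pass on every frame

    if box is not None and frame_idx % REINIT_EVERY == 0:
        # Re-seed the slower GOTURN tracker from the color estimate.
        # Requires opencv-contrib-python plus goturn.prototxt / goturn.caffemodel on disk.
        tracker = cv2.TrackerGOTURN_create()
        tracker.init(frame, box)
    elif box is None and tracker is not None:
        # Color mask lost the car: fall back to GOTURN for this frame.
        ok, box = tracker.update(frame)
        box = tuple(int(v) for v in box) if ok else None

    if box is not None:
        x, y, w, h = box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("hybrid tracker", frame)
    frame_idx += 1
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```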
There has also been progress on the switching algorithms. We are still waiting on the connecting pieces for the motors and the cameras before we can finish the stand.
We aim to have a working demo by Friday and spend the rest of the semester on integrating and fine-tuning the parameters.