This week, I finished developing the initial version of the feed select prediction subsystem and debugged some issues in its implementation. One bug carried over from last week was that the prediction system would consider a lap completed as soon as any camera was repeated, e.g., [1, 2, 1] would result in [1, 2] being treated as a full lap, with cameras 1 and 2 covering the car as it raced around the track. This wasn’t the desired behavior: for a system with 4 cameras on a track where one camera is necessarily repeated within a lap, e.g., where [1, 2, 1, 3, 4] is a full lap, the system would erroneously consider the lap completed early, in this case by treating [1, 2] as a full lap. I fixed this by only considering a lap completed once every camera in the system had been seen (first sketch below).

To finish the initial version of the subsystem, I added code that makes the feed selection display the feed with the largest bounding box after the first lap is completed. Previously, the system only made direct comparisons between the bounding box sizes of each camera during the first lap; for every lap after that, it predictively switched to the next camera based on the order seen in the first lap, without comparing bounding box sizes between feeds at all. With the new code, it alternates between predicting the next camera and comparing the bounding box sizes of each feed, and it updates the stored camera order whenever the comparison for the current lap does not match the prediction based on the first-lap order. In this way it has become a system that uses the results of the previous lap to predict the camera order for the next lap (second sketch below).

In testing with Jae, I also found an issue where the system would switch too early to the next camera. The feed selection prediction works by predicting the next camera once it has determined that the current camera has lost sight of the car, but sometimes a camera would briefly lose sight of the car in the middle of its defined region of the track. This was partially fixed by adjusting a parameter so that more consecutive frames without a detection are required before the system considers the car “out of sight” of the current camera (third sketch below). This isn’t an ideal solution, though, since the threshold is not general to all tracks and camera configurations and would need to be tuned for each setup.

Finally, there was a bug where the system’s camera ordering for the livestream was not being updated properly when a prediction was incorrect. Once we spotted it in testing, the fix was simple: the previous index in the ordering needed to be updated instead of the current index, and the index had to be kept the same instead of being incremented (this correction is included in the second sketch).
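To make the lap-completion fix concrete, here is a minimal sketch of the corrected check (the names lap_completed, lap_sequence, and all_cameras are illustrative, not the actual identifiers in our code):

    def lap_completed(lap_sequence, all_cameras):
        """Only consider the lap complete once every camera has been seen.

        The old check ended the lap at the first repeated camera, so a
        real lap of [1, 2, 1, 3, 4] was cut off at [1, 2]. Requiring full
        camera coverage prevents that early cutoff.
        """
        return set(lap_sequence) >= set(all_cameras)

    # With cameras {1, 2, 3, 4}:
    assert not lap_completed([1, 2, 1], {1, 2, 3, 4})    # repeat, but lap not done
    assert lap_completed([1, 2, 1, 3, 4], {1, 2, 3, 4})  # every camera seen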
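The predict-then-verify loop, including the index correction from the last bug, looks roughly like the sketch below. It assumes each feed reports a bounding-box area for the car (0 when the car is not detected); the class name, method names, and exact pointer bookkeeping are illustrative rather than the actual implementation:

    class FeedSelectPredictor:
        def __init__(self, first_lap_order):
            self.order = list(first_lap_order)  # camera order learned on lap 1
            self.idx = 0                        # slot of the next prediction

        def predict_next(self):
            """Predict the upcoming feed and advance past its slot."""
            cam = self.order[self.idx]
            self.idx = (self.idx + 1) % len(self.order)
            return cam

        def verify(self, bbox_areas):
            """Compare the prediction against the feed with the largest
            bounding box, correcting the stored order on a mismatch."""
            observed = max(bbox_areas, key=bbox_areas.get)  # largest box wins
            prev = (self.idx - 1) % len(self.order)  # the slot just predicted
            if self.order[prev] != observed:
                # The bug: we rewrote self.order[self.idx] and incremented
                # self.idx again. The fix: rewrite the previous index and
                # keep self.idx the same, so the stored order stays aligned
                # with the car's actual position on the track.
                self.order[prev] = observed
            return observed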
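The “out of sight” debounce is essentially a counter over consecutive missed frames. A minimal sketch, with the threshold value and all names invented for illustration:

    MISS_THRESHOLD = 15  # frames; currently hand-tuned per track and camera setup

    class OutOfSightDebouncer:
        def __init__(self, threshold=MISS_THRESHOLD):
            self.threshold = threshold
            self.misses = 0

        def car_lost(self, detected_this_frame):
            """Return True once the car has been missing for `threshold`
            consecutive frames, so a brief dropout in the middle of a
            camera's region does not trigger an early switch."""
            self.misses = 0 if detected_this_frame else self.misses + 1
            return self.misses >= self.threshold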
Progress is on schedule: I have completed the initial version of the feed select prediction subsystem, as planned in the previous status report. I need to finish the system features by Monday, or Tuesday at the latest, so that we can begin testing and complete the poster on Tuesday; in that sense I am behind on testing the new subsystem, and will need to come up with test cases as I finish the remaining features. After that, I will help create the video on Thursday, demo on Friday, and put together the final report on Saturday. These are also the deliverables I hope to complete.