I have made as many changes to the detection algorithm as I think I can given our time constraints. When we run the project in different lighting conditions, a few parameters will still have to be tuned to fit that particular setting, namely the minimum contour size used for mapping and the initial bounding box used when sampling the car's colors. I could have made the minimum contour size dynamic, like I did for the number of colors used to represent the car, but our group is more focused on making the stream more watchable and the camera switching better.
One of the ideas we discussed to make the switching better is to use prediction to estimate where the car is headed. If we knew the car's direction, maybe we could tell where it was going and switch preemptively. So I implemented a Kalman filter, which tracks the car's trajectory and updates it every frame to estimate the car's velocity and acceleration. I then draw the output as a line of dots showing the car's predicted path based on the past few frames.
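In rough form, the idea looks something like this. This is a simplified sketch using OpenCV's cv2.KalmanFilter with a constant-acceleration model; the function name, noise values, and number of predicted dots are placeholders to illustrate the approach, not the exact values in our code:

```python
import cv2
import numpy as np

# Constant-acceleration Kalman filter over state (x, y, vx, vy, ax, ay),
# measuring only the car's centroid (x, y) from the detection step.
dt = 1.0  # one step per frame; noise values below are rough guesses to tune

kf = cv2.KalmanFilter(6, 2)
kf.transitionMatrix = np.array([
    [1, 0, dt,  0, 0.5 * dt**2, 0],
    [0, 1,  0, dt, 0, 0.5 * dt**2],
    [0, 0,  1,  0, dt, 0],
    [0, 0,  0,  1, 0, dt],
    [0, 0,  0,  0, 1, 0],
    [0, 0,  0,  0, 0, 1],
], dtype=np.float32)
kf.measurementMatrix = np.eye(2, 6, dtype=np.float32)
kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1.0

def track_and_draw(frame, cx, cy, steps=15):
    """Feed in the latest detected centroid, then roll the model forward
    a few frames and draw the predicted path as a line of dots."""
    kf.predict()
    kf.correct(np.array([[cx], [cy]], dtype=np.float32))

    state = kf.statePost.copy()            # filtered (x, y, vx, vy, ax, ay)
    for _ in range(steps):
        state = kf.transitionMatrix @ state
        cv2.circle(frame, (int(state[0, 0]), int(state[1, 0])),
                   3, (0, 255, 0), -1)     # green dot on the predicted path
    return frame
```

Each frame, the detection's centroid corrects the filter, and the transition matrix is applied a handful of extra times to project where the car should be over the next few frames.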
Unfortunately, since we do not know which camera the car will reach next, this new addition did not prove useful for switching. It could still be helpful for adjusting the camera pan preemptively, so the camera does not lag behind the car and the car stays near the center of the frame.
We still have to fully integrate this feature.
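As a rough illustration of what that integration might look like, the predicted position a few frames out could be mapped to a small pan correction. This is a hypothetical helper; the gain and deadband values are placeholders and the real adjustment would depend on our camera control interface:

```python
def pan_correction(predicted_x, frame_width, gain=0.1, deadband=20):
    """Map the predicted horizontal offset from frame center to a small
    pan adjustment so the camera leads the car instead of trailing it.
    (Hypothetical sketch; gain and deadband would need tuning.)"""
    offset = predicted_x - frame_width / 2
    if abs(offset) < deadband:
        return 0.0          # already close to center; avoid jittering the pan
    return gain * offset    # positive means pan right, negative means pan left
```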