Jae’s Status Report for 4/27/24

This week, I helped with testing and integration. Earlier in the week, we prepared a demo video to show in class for the final presentation. I helped set up the system and ran tests to capture a good stream with accurate tracking and feed switching. Later in the week, I helped Thomas debug his feed selection algorithm. As he states in his report, there were multiple issues we had to resolve to get the system functioning. Although he wrote the majority of the code, I helped in whatever way I could to test and analyze the results.

Currently, I am on schedule, as I am mostly done with the tracking algorithm. This coming week, I hope to refine tracking and implement motor panning back to the default angle once the car leaves a camera's frame. We hope to wrap up system implementation by around Tuesday so we can spend the rest of the time on testing and on the video, poster, demo, and report. These are the deliverables I hope to achieve.
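
A minimal sketch of the return-to-default panning I plan to implement, assuming we track consecutive frames without a detection per camera; all names and threshold values here are illustrative, not from our actual code:

```python
# Sketch of the planned return-to-default behavior. All names and
# values are illustrative, not from our actual code.
DEFAULT_ANGLE = 90      # resting pan angle in degrees
LOST_FRAME_LIMIT = 30   # frames without a detection before resetting

def next_angle(current_angle, box, frames_without_car):
    """Return (new_angle, new_lost_count) for one frame.

    box is the detection for this frame, or None if the car was not
    seen; after LOST_FRAME_LIMIT consecutive misses, pan back to the
    default angle so the camera is ready for the next lap.
    """
    if box is not None:
        return current_angle, 0  # car visible; normal tracking resumes
    frames_without_car += 1
    if frames_without_car >= LOST_FRAME_LIMIT:
        return DEFAULT_ANGLE, frames_without_car
    return current_angle, frames_without_car
```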

Thomas’s Status Report for 4/27/24

This week, I finished developing the initial version of the feed select prediction subsystem and debugged some issues in its implementation. One bug carried over from last week was that the prediction system would consider a lap completed as soon as one of the cameras was repeated; e.g., [1, 2, 1] would result in [1, 2] being considered a lap, with cameras 1 and 2 covering the car as it raced around the track. This wasn't the desired behavior: for a system with 4 cameras on a track layout where one camera is necessarily repeated, e.g., [1, 2, 1, 3, 4] being a full lap, the system would erroneously consider the lap completed early, in this case treating [1, 2] as a full lap. I fixed this by only considering the lap completed once every camera in the system had been seen.

To finish the initial version of the subsystem, I added code that lets the feed selection display the feed with the largest bounding box even after the first lap is completed. Previously, it would directly compare bounding box sizes across cameras only during the first lap, and for every lap after that it would predictively switch to the next camera based on the order seen in the first lap, without comparing bounding box sizes between feeds anymore. With the new code, it alternates between predicting the next camera and comparing the bounding box sizes for each feed, and it updates the stored camera order whenever the bounding box comparison for the current lap does not match the prediction based on the first-lap order. In this way it has become a system that uses the results from the previous lap to predict the camera order for the next lap.

In testing with Jae, I also found an issue where the system would switch to the next camera too early: the current camera would sometimes lose sight of the car in the middle of its defined region of the track, and the prediction system switches to the predicted next camera as soon as it decides the current camera has lost the car. We mitigated this by raising the number of consecutive frames without a detection required before the system considers the car "out of sight" of the current camera, but this isn't an ideal solution, because it isn't general: the threshold needs to be tuned for each track and camera configuration. There was also a bug where the system's camera ordering for the livestream was not updated properly when a prediction was incorrect. This was a simple fix once we spotted it in testing: the previous index in the ordering needed to be updated instead of the current index, and the index had to be kept the same instead of being incremented.
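
To illustrate the logic described above, here is a rough sketch of the lap-completion rule and the predict-then-verify update; every identifier here is hypothetical, and this is a simplified take rather than our actual implementation:

```python
# Illustrative sketch of the feed select prediction logic described
# above. All names are hypothetical; this is not our production code.

NUM_CAMERAS = 4

class FeedSelectPredictor:
    def __init__(self):
        self.lap_order = []       # camera order observed in the first lap
        self.first_lap_done = False
        self.position = 0         # index into lap_order for predictions

    def record_first_lap(self, camera):
        """Build the first-lap order. The lap is only considered
        complete once every camera in the system has been seen,
        so repeated cameras (e.g. [1, 2, 1, 3, 4]) don't end it early."""
        if not self.lap_order or self.lap_order[-1] != camera:
            self.lap_order.append(camera)
        if len(set(self.lap_order)) == NUM_CAMERAS:
            self.first_lap_done = True

    def predict_next(self):
        """Predict the next camera from the stored lap order, used to
        switch before that camera even sees the car."""
        self.position = (self.position + 1) % len(self.lap_order)
        return self.lap_order[self.position]

    def verify_and_update(self, predicted, largest_box_camera):
        """After switching, compare the prediction against the camera
        that actually has the largest bounding box; on a mismatch,
        correct the stored order so next lap's prediction improves."""
        if predicted != largest_box_camera:
            self.lap_order[self.position] = largest_box_camera
```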

Progress is on schedule, as I have completed the initial version of the feed select prediction subsystem mentioned in the previous status report. I need to be finished with the system features by Monday, or Tuesday at the latest, so we can begin testing and complete the poster on Tuesday. In that sense I am behind on testing the new subsystem; I will need to come up with test cases as I finish the features. After that, I will help create the video on Thursday, run the demo on Friday, and put together the final report on Saturday. These are the deliverables I hope to complete.

Team Status Report for 4/20/24

The most significant risk we need to manage is how late our feed switches. Although we have an algorithm that tries to switch early, our detection picks up the car late, which means the switch always happens as the car passes rather than before. We are managing this risk by implementing a new algorithm that uses a history model of previous camera switches: by remembering the order in which cameras were used, the system can switch to the next camera even before that camera detects the car.

This is the main change to our design. We are currently getting it to work and seeing how stable it will be. The only costs right now are time and perhaps buggier feed switching, but if it works, we believe it will make our feed switching much better.

We are on track with our schedule. Ideally we would be devoted to testing right now, but in practice we are testing while implementing this new algorithm.

Jae’s Status Report for 4/20/24

I accidentally wrote status report 9 last week, thinking it was due then. I will update the project's progress here, but please refer to that report for the part asking about new tools and knowledge I needed.

In terms of our project progress, we were able to scale the system up to four cameras. I ensured that motor tracking worked for all four cameras. Afterwards, I provided support for feed selection testing, as we are trying out a new algorithm.

I am currently on schedule, as we are on the integration/testing period of the project.

Next week, I hope to help my team with testing. Right now we are just working on the feed selection algorithm, so hopefully I can provide some helpful comments and help test.

Thomas’s Status Report for 4/20/24

This week on the project, I started developing a new filtering strategy for the bounding box sizes using a double exponential smoothing algorithm I found outlined online. Initially its performance wasn't as good as the simple moving average (SMA) I had been using, but with some tuning of the level and trend parameters I got it to perform about as well as the SMA during testing. Next, I'd like to informally compare the two filtering algorithms to determine which one performs better on the track we chose for demo day. I'll use the eye test to judge which one is more accurate in switching to the desired camera and which is more desirable with respect to switching timing, since one of our current goals for the feed selection algorithm is to switch earlier, so that viewers see the car coming into a camera's field of view instead of the switch happening late.

I updated our main branch to support 3 cameras, up from 2, for a demo during our meeting with Prof. Kim on Wednesday, and after that updated it to support 4 cameras, the number we will be using on demo day. I also started developing a subsystem that tracks the order in which the camera feeds were displayed during previous laps, with the goal of using that historical information to predict the next camera to switch to, since the switch will need to occur even before the camera sees the car in order for people to see the car coming into the camera's field of view. I haven't finished the initial implementation of this new subsystem yet, but I have a partially completed version that I am debugging.

Toward that end, I made some changes to the system to make testing more efficient: a feature that allows the camera feeds to be switched while the livestream is paused, to check what each camera was seeing at that moment, and a feature that lets each camera capture and use its own color profile instead of one camera capturing the color profile for all the others. I also made it so that if the color profile captured for one of the cameras isn't detecting the car well, it can easily be redone without restarting the system.
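
For reference, a minimal sketch of double exponential (Holt) smoothing as I understand it, applied to a bounding box area series; alpha and beta correspond to the level and trend parameters mentioned above, and all names and values are illustrative:

```python
# Minimal sketch of double exponential (Holt) smoothing for a noisy
# bounding box size series. Parameter values are illustrative.

def double_exponential_smooth(sizes, alpha=0.5, beta=0.3):
    """Return the smoothed series for `sizes` (bounding box areas).

    alpha: level smoothing factor in (0, 1]
    beta:  trend smoothing factor in (0, 1]
    """
    if not sizes:
        return []
    level = sizes[0]
    trend = 0.0
    smoothed = [level]
    for x in sizes[1:]:
        prev_level = level
        # Blend the new observation with the previous level + trend,
        # then update the trend from the change in level.
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        smoothed.append(level)
    return smoothed

# Example: hypothetical noisy detections from one camera.
print(double_exponential_smooth([100, 120, 90, 150, 160, 155, 200]))
```

Unlike an SMA, the trend term lets the filter follow a steadily growing box (a car approaching the camera) with less lag, which is why it seemed worth trying for earlier switching.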

Since we are now in the last two weeks of the project, I would like to be able to complete the new subsystem I am working on by next week in order to give some slack time before our demo on Friday. Towards that end, I plan on finishing debugging my partial version by Tuesday and then ideally finishing the initial implementation by Wednesday night, leaving myself Thursday through Saturday for debugging and any additional work that might come up.

In order to accomplish my tasks during this project, I needed to learn how to write code using the OpenCV library. My learning strategy was primarily reading through the OpenCV documentation online, supplemented by third-party tutorials when the documentation seemed outdated or insufficient. I also needed to learn how to configure Git source control for files larger than 250 MB, because we had trouble setting up our repository, which included a very large machine learning model, in the initial stages of the project. Here my strategy was following the recommendation in Git's output when the push failed and reading through the Git LFS documentation, which allowed us to set up working source control. Finally, I needed to learn how to filter noisy data in order to get a usable bounding box size time series for the feed selection algorithm. For this I read online explanations of various filtering strategies, starting from the simple moving average on Wikipedia. I could potentially have reviewed the material from 18-290, but I wasn't able to identify which parts would be helpful, since it is mostly mathematical and didn't seem directly applicable to my situation.

Team Status Report for 4/13/24

The most significant risk that could jeopardize our project is still color detection failures, which we have not yet been able to resolve. To mitigate this, we have been working in good lighting and selecting our object of interest very carefully to capture the correct colors. Another risk we need to manage is scaling up the system: as we add more cameras, feed selection becomes harder, since more overlap between camera views means more candidate feeds to choose from. To mitigate this, we are currently incorporating the rate of change of bounding box sizes into our algorithm, as sketched below. We also want to look into estimating the car's direction of travel.
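
As a sketch of the rate-of-change idea (names are hypothetical, and this is only one possible formulation), the rate can be estimated as a first difference over the recent smoothed bounding box sizes; a positive rate suggests the car is approaching that camera, which can break ties between overlapping feeds:

```python
# Sketch: estimate the rate of change of a camera's smoothed bounding
# box sizes as an average first difference. A positive rate suggests
# the car is approaching that camera.
def box_growth_rate(smoothed_sizes, window=5):
    """Average per-frame change over the last `window` samples."""
    if len(smoothed_sizes) < 2:
        return 0.0
    recent = smoothed_sizes[-window:]
    return (recent[-1] - recent[0]) / (len(recent) - 1)
```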

We finally settled on the track we will be demoing. We wanted to incorporate a loop and an elevation change; we think this track is simple, yet very telling of our system's capability. We plan to use four cameras and place them as shown in the picture. Other than this, our design is pretty much the same, just scaled up.

Our schedule has not changed.

Jae’s Status Report for 4/13/24

This week, I spent some time figuring out which track configuration would be best suited for our final demo. It took some trial and error, but we wanted to include a loop of some sort, which means there has to be an elevation change as well. We decided on the one pictured in the team report. Additionally, I spent most of the week scaling the object tracking code to take in 3 camera inputs and control 3 motors. The final system must have 4 cameras and motors, so I need to scale it up one more time. I was also able to replace the multi-connected jumper wires with single long ones to clean up the wiring. This took a good amount of time, since the wires have to run the length of the track for each motor. However, I haven't yet changed the code to make the tracking smoother, which I plan to do this weekend.

My progress is somewhat on schedule. Next week looks a bit busy, but I think it is doable.

I wish to scale the system up to 4 cameras/motors and basically have the final demo running, even if still a bit buggy.

I haven't really used too many new tools. Most of the work I had to do was simple in the sense that I could do it with knowledge I already had: Arduino code, Python, the servo library, soldering, and so on. Actually, one thing I found helpful, which I'm embarrassed to say I didn't fully know before, was Git. I finally came to understand how branches work and how to merge correctly. This took up a good amount of our integration time, but I'm glad to have learned how to use Git properly.

Team Status Report for 4/6/24

The most significant risks that could jeopardize the success of the project are color detection picking up objects in the background along with inconsistent lighting, and feed selection switching too late, or switching erratically, on track configurations with slight overlap between camera zones or sharp bends. We are managing the first risk through further research on color detection and by asking for help from people experienced in computer vision. We are managing the second by testing variations of the track and camera placement that exercise interesting edge cases before choosing the final track for demo day. Contingency plans are painting the cars a solid, bright color, adding a lighting fixture or using a room with consistent lighting, and making the feed selection take more information into account instead of being very general.

No changes were made to the system.

Schedule is as shown in the interim demo.

Validation tests will involve running the car on a track/camera configuration test vector for a set number of laps and reviewing the livestream output for the metrics defined in our user and design requirements, such as latency, smoothness, and optimality of the feed choice based on what is in the frame.

Thomas’s Status Report for 4/6/24

This week I helped with integration of the motor control and feed select subsystems by adding features to the main loop that independently toggle whether each subsystem is enabled. I also improved the efficiency of system testing by adding a feature that allows the color selection process to be restarted without restarting the whole system.
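
A rough sketch of the kind of toggle this refers to, using OpenCV's waitKey polling in the main loop; the key bindings, window name, and placeholder frame are illustrative, not our actual ones:

```python
# Sketch of independently toggling subsystems from the main loop.
# Key bindings and names are illustrative.
import cv2
import numpy as np

motor_control_enabled = True
feed_select_enabled = True

frame = np.zeros((240, 320, 3), dtype=np.uint8)  # placeholder frame

while True:
    # In the real loop we would grab frames and run detection here.
    cv2.imshow("livestream", frame)  # waitKey needs a visible window
    key = cv2.waitKey(1) & 0xFF
    if key == ord('m'):
        motor_control_enabled = not motor_control_enabled  # toggle motors
    elif key == ord('f'):
        feed_select_enabled = not feed_select_enabled  # toggle feed select
    elif key == ord('q'):
        break

    if motor_control_enabled:
        pass  # update servo angles from detections
    if feed_select_enabled:
        pass  # choose which camera feed to display

cv2.destroyAllWindows()
```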

My progress is on schedule.

Next week, I hope to have a feed selection algorithm that has met certain verification requirements.

One test we will run is making sure the feed switches as soon as the front of the car starts to come into the frame of a new camera feed. We can quantify this by finding the first frame at which this condition is satisfied, finding the frame at which the feed actually switches, and counting the number of frames in between. This will be done on both a circular track and a figure-eight track.
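
A minimal sketch of how that count could be computed from annotated test footage, assuming we log per-lap frame indices for the car's entry (from reviewing the footage) and the actual switch (from the system); all numbers below are hypothetical:

```python
# Sketch: measure switch lag in frames from two annotated event logs.
def switch_lags(entry_frames, switch_frames):
    """Per-lap lag in frames; positive = late switch, negative = early."""
    return [s - e for e, s in zip(entry_frames, switch_frames)]

entries = [1042, 2310, 3581]   # hypothetical per-lap entry frames
switches = [1057, 2318, 3575]  # hypothetical per-lap switch frames
lags = switch_lags(entries, switches)
print(lags, sum(lags) / len(lags))  # per-lap lags and the mean
```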

Jae’s Status Report for 4/6/24

Given that this week was the interim demo, the days leading up to it were pretty packed with work. A lot of it was tweaking numbers, setting up the environment, and debugging, so there is not much new to show in terms of features. What I personally worked on was preventing the tracking algorithm from being misled by buggy detection. Our current color detection is not the best, especially in bad lighting conditions. An ideal detection would output a bounding box every couple of frames, but ours currently outputs maybe 3-5 accurate boxes per lap. This meant the motors had to stay stable through both wrong bounding boxes and infrequent ones. To suppress the effect of wrong bounding boxes, I changed the code so that the motors are only driven when a bounding box falls within certain dimensions and locations. And because bounding boxes arrive less frequently than desired, I set the motor angle to pan multiple degrees whenever a bounding box is detected away from the center of the frame. Ideally this step would be only a few degrees, because frequent boxes would keep updating the angle.
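
A simplified sketch of that gating and panning logic; the thresholds, frame width, and names are illustrative stand-ins, not our actual values:

```python
# Sketch of the detection gating and coarse panning described above.
# All thresholds and names are illustrative.
FRAME_WIDTH = 640
CENTER_BAND = 80        # pixels around center where no panning is needed
PAN_STEP_DEG = 5        # coarse step, since detections are infrequent
MIN_AREA, MAX_AREA = 500, 50000  # plausible car bounding box areas

def update_pan(angle, box):
    """Return a new pan angle given the current angle and a detection.

    box is (x, y, w, h) or None. Implausible boxes are ignored so a
    wrong detection cannot yank the motor around.
    """
    if box is None:
        return angle
    x, y, w, h = box
    area = w * h
    if not (MIN_AREA <= area <= MAX_AREA):
        return angle  # reject implausible detections
    box_center = x + w / 2
    offset = box_center - FRAME_WIDTH / 2
    if abs(offset) <= CENTER_BAND:
        return angle  # car roughly centered; hold position
    # Pan several degrees toward the car, since the next box may be
    # many frames away.
    return angle + (PAN_STEP_DEG if offset > 0 else -PAN_STEP_DEG)
```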

My progress is on schedule. I have mostly finished the tracking task and have started debugging the integration and fine-tuning the controls.

Next week, I hope to start working on tracking on the final demo track configuration.