Team Status Report for 4/20/24

The most significant risk that we need to manage is how late our feed switches. Although we have an algorithm that tries to switch early, our detection picks up the car late, so the switch always happens as the car passes the camera rather than before. We are managing this risk by implementing a new algorithm that builds a history model of previous camera switches. By remembering the order in which cameras were selected on earlier laps, the system can switch to the next camera even before that camera detects the car.
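A rough sketch of what such a history model could look like is below. The class and method names (CameraHistory, record_switch, predict_next) are illustrative only, not our actual code, and the lap-boundary heuristic is an assumption.

```python
# Hypothetical sketch of a lap-history model for predictive feed switching.
# Assumes a lap ends when the feed returns to the starting camera.
from collections import Counter

class CameraHistory:
    def __init__(self):
        self.laps = []          # completed laps, each a list of camera indices
        self.current_lap = []   # switches observed so far in the current lap

    def record_switch(self, camera_idx, lap_start_camera=0):
        """Record a switch; close out the lap when we return to the starting camera."""
        if camera_idx == lap_start_camera and self.current_lap:
            self.laps.append(self.current_lap)
            self.current_lap = []
        self.current_lap.append(camera_idx)

    def predict_next(self, current_camera):
        """Predict the camera that most often followed current_camera in past laps."""
        followers = Counter()
        for lap in self.laps:
            for i, cam in enumerate(lap[:-1]):
                if cam == current_camera:
                    followers[lap[i + 1]] += 1
        return followers.most_common(1)[0][0] if followers else None
```

With something like this, the main loop could pre-emptively switch to predict_next(current_camera) once the car starts leaving the current camera's view, instead of waiting for a detection on the next camera.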

This is the main change to our design. We are currently getting it to work and seeing how stable it will be. The only cost right now is time and possibly buggier feed switching in the short term, but if it works, we believe it will make our feed switching much better.

We are on track with our schedule. Ideally we would be purely testing right now, but in practice we are testing while implementing this new algorithm.

Jae’s Status Report for 4/20/24

I accidentally did status report 9 last week, thinking it was due then. I will update the project's progress here, but please refer to that report for the part asking about the new tools/knowledge I needed.

In terms of our project progress, we were able to scale the system up to four cameras. I ensured that motor tracking worked for all four cameras. Afterwards, I provided support for feed selection testing, as we are trying out a new algorithm.

I am currently on schedule, as we are in the integration/testing period of the project.

Next week, I hope to help my team with testing. Right now we are mainly working on the feed selection algorithm, so hopefully I can provide some helpful comments and help test.

Thomas’s Status Report for 4/20/24

This week on the project, I started developing a new filtering strategy for the bounding box sizes using a double exponential smoothing algorithm I found outlined online. Initially its performance wasn't as good as the simple moving average (SMA) algorithm I had been using, but after some tuning of the value and trend parameters it performed about as well as the SMA during testing. Next, I'd like to informally compare the two filtering algorithms to determine which one performs better on the track we chose for demo day. I'll use the eye test to judge which one is more accurate in switching to the desired camera and which has more desirable switch timing, since one of our current goals for the feed selection algorithm is to switch earlier so that people can see the car coming into the camera's field of vision instead of the switch happening late.

I updated our main branch to support 3 cameras, up from 2, for a demo during our meeting with Prof. Kim on Wednesday, and following that updated it to support 4 cameras, which is the number of cameras we will be using on demo day.

I also started developing a subsystem that tracks the order in which the camera feeds were displayed during previous laps, with the goal of using that historical information to predict the next camera to switch to, since the switch will need to occur even before the camera sees the car in order for people to see the car coming into the camera's field of vision. I haven't been able to finish the initial implementation of this new subsystem yet, but so far I have a partially completed version that I am debugging. Toward this end I have made some changes to the system to make testing more efficient: I added a feature that allows the camera feeds to be switched while the livestream is paused, to check what each camera was seeing when it was paused, and a feature that allows each camera to capture and use its own color profile instead of one camera capturing the color profile for all the other cameras. I also made it so that if the color profile captured for one of the cameras isn't detecting the car very well, it can easily be redone without having to restart the system.
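For reference, a minimal sketch of the two filters being compared is below. The alpha/beta values and class names are placeholders, not the tuned parameters described above.

```python
# Sketch of double exponential smoothing (Holt's method) vs. a simple moving
# average for a bounding-box size time series. Parameter values are illustrative.
from collections import deque

class DoubleExpSmoother:
    def __init__(self, alpha=0.5, beta=0.3):
        self.alpha, self.beta = alpha, beta   # value (level) and trend parameters
        self.level = None
        self.trend = 0.0

    def update(self, x):
        if self.level is None:                # initialize on the first sample
            self.level = x
            return x
        prev_level = self.level
        self.level = self.alpha * x + (1 - self.alpha) * (prev_level + self.trend)
        self.trend = self.beta * (self.level - prev_level) + (1 - self.beta) * self.trend
        return self.level                     # smoothed estimate of the box size

class SimpleMovingAverage:
    def __init__(self, window=3):
        self.buf = deque(maxlen=window)

    def update(self, x):
        self.buf.append(x)
        return sum(self.buf) / len(self.buf)
```

The informal comparison would then just be running both filters on the same per-frame box sizes and eyeballing which smoothed series triggers switches at better times.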

Since we are now in the last two weeks of the project, I would like to be able to complete the new subsystem I am working on by next week in order to give some slack time before our demo on Friday. Towards that end, I plan on finishing debugging my partial version by Tuesday and then ideally finishing the initial implementation by Wednesday night, leaving myself Thursday through Saturday for debugging and any additional work that might come up.

In order to accomplish my tasks during the course of this project, I have needed to learn how to write code using the OpenCV library. The learning strategy I used for this was primarily reading through the OpenCV documentation online, and following some additional third-party tutorials when the documentation seemed outdated or not as thorough as I would have liked.

I also needed to learn how to configure Git source control for files larger than 250 MB, because we were having trouble setting up our repository early in the project, since our code included a very large machine learning model. The learning strategy I used here was following the recommendation in the Git output when the push failed and reading through the documentation for Git LFS, which allowed us to set up source control that worked.

Finally, I needed to learn how to filter noisy data in order to get a usable bounding box size time series for the feed selection algorithm. The learning strategy I used was reading online explanations of various filtering strategies, starting from SMA on Wikipedia. Potentially I could have reviewed the material I learned in 18290, but unfortunately I wasn't able to figure out which parts of that material would be helpful, since it seems mostly mathematical and not directly applicable to my situation.

Team Status Report for 4/13/24

The most significant risk that could jeopardize our project is still color detection failures, which we have not yet been able to resolve. To mitigate this, we have been working in good lighting and selecting our object of interest very carefully to capture the correct colors. Another risk that we need to manage is scaling up our system: as we include more cameras, feed selection becomes more difficult, because the increased overlap between camera views means there are more candidate feeds to choose from. To mitigate this, we are currently incorporating the rate of change of bounding box sizes into our algorithm. We also want to look into estimating the car's direction of travel.
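One possible way to fold the rate of change into the feed score is sketched below. The function names and the weight on the growth term are placeholders, not our tuned values.

```python
# Illustrative scoring that combines the current box size with its rate of change,
# so a camera whose box is growing quickly (car approaching) is preferred.

def feed_score(prev_area, curr_area, growth_weight=0.5):
    """Score a camera by its current box area plus how fast the box is growing."""
    growth = curr_area - prev_area            # per-frame rate of change
    return curr_area + growth_weight * growth

def select_feed(prev_areas, curr_areas):
    """Pick the camera index with the highest combined score."""
    scores = [feed_score(p, c) for p, c in zip(prev_areas, curr_areas)]
    return max(range(len(scores)), key=lambda i: scores[i])
```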

We finally changed our track to the one we will be demoing. We wanted to incorporate a loop and an elevation change, and we think this track is simple, yet very telling of the capability of our system. We plan to use four cameras and place them as shown in the picture. Other than this, our design is pretty much the same, just scaled up.

Our schedule has not changed.

Jae’s Status Report for 4/13/24

This week, I spent some time figuring out what track configuration would be best suited for our final demo. It took some trial and error, but we wanted to include a loop of some sort, which means there has to be an elevation change as well. We decided on the one pictured in the team report. Additionally, I spent most of the week scaling the object tracking code to take in 3 camera inputs and control 3 motors. The final system must have 4 cameras and motors, so I need to scale it up one more time. I was also able to replace the multi-connected jumper wires with single long ones to clean up the wiring. This took a good amount of time, since the wires have to run the length of the track for each motor. However, I haven't changed the content of the code yet to make the tracking smoother, and this is something I plan to do this weekend.

My progress is somewhat on schedule. I think next week will look a bit busy, but I think it is doable.

I wish to scale up the system to 4 cameras/motors and basically have the final demo running, even if a bit buggy.

I haven't really used many new tools. Most of the work I had to do was simple enough that I could do it with my existing knowledge: Arduino code, Python, the Servo library, soldering, and so on. Actually, one thing I found helpful that I'm embarrassed to say I didn't fully know before was Git. I finally came to really understand how branches work and how to merge correctly. This took up a good amount of time during our integration, but I'm glad to have learned how to use Git properly.

Bhavya’s Status Report for 4/6/24

In preparation for the interim demo, there was a lot of integration testing. We constantly had to test on different track configurations, different lighting conditions, and different speeds of the car. I continued to refine the detection and tracking algorithms based on our tests and the requirements we had for the demo. A key flaw in our system seems to be the lack of memory used to predict the car's position. I am currently working on integrating Kalman filters into the tracking algorithm, so that the car's trajectory along its path can be used to produce a more reliable estimate of its location.
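A bare-bones version of what this could look like with OpenCV's cv2.KalmanFilter is sketched below, assuming a constant-velocity model on the bounding-box centroid. The noise covariances are placeholders, not tuned values.

```python
# Rough sketch of a constant-velocity Kalman filter for the car's (x, y) centroid.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)  # state: [x, y, vx, vy], measurement: [x, y]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # placeholder
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # placeholder
kf.errorCovPost = np.eye(4, dtype=np.float32)

def track_step(detection=None):
    """Predict the car position; correct with a detection when one is available."""
    predicted = kf.predict()
    if detection is not None:  # (cx, cy) center of the detected bounding box
        measurement = np.array([[detection[0]], [detection[1]]], dtype=np.float32)
        kf.correct(measurement)
    return float(predicted[0, 0]), float(predicted[1, 0])
```

The appeal here is that prediction continues between detections, so the tracker can keep moving the estimate along the car's trajectory even when color detection drops out for several frames.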

Team Status Report for 4/6/24

The most significant risks that could jeopardize the success of the project are color detection picking up objects in the background or failing under inconsistent lighting, and feed selection switching too late or switching seemingly at random on track configurations with slight overlap between camera zones or sharp bends. The first risk is being managed by further research on color detection and by asking for help from people experienced in computer vision. The second risk is being managed by testing variations of the track and camera placements to cover interesting edge cases before choosing the final track for demo day. Contingency plans are painting the cars a solid, bright color, adding a lighting fixture or using a room with consistent lighting, and making feed selection take more information into account instead of being very general.

No changes were made to the system.

Schedule is as shown in the interim demo.

Validation tests will involve running the car on a track/camera configuration test vector for a set number of laps and reviewing the livestream output for metrics as defined in our user and design requirements, such as latency, smoothness, and optimality of feed choice based on what is in the frame.

Thomas’s Status Report for 4/6/24

This week I helped with integration of the motor control and feed select subsystems by adding features to the main loop which toggled whether each subsystem was enabled independently. I also improved the efficiency of system testing by adding a feature which allowed the color selection process to be restarted without restarting the whole system.

My progress is on schedule.

Next week, I hope to have a feed selection algorithm that has met certain verification requirements.

One test we will run is making sure the feed switches once the front of the car starts to come into the frame of a new camera feed. We can quantify this by finding the first frame at which this condition is satisfied and the frame at which the feed actually switches, and counting the number of frames in between. This will be done on both a circular track and a figure-8 track.
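Computing that metric from our logs might look something like the following. The frame numbers and the event format here are made up for illustration.

```python
# Hypothetical helper for quantifying switch latency in frames.
# Each event pairs the frame where the car first entered the new camera's view
# with the frame where the displayed feed actually switched to that camera.

def switch_latencies(events):
    """Return per-switch latencies in frames; negative means the switch was early."""
    return [switch_frame - visible_frame for visible_frame, switch_frame in events]

events = [(1540, 1553), (2210, 2219), (2890, 2897)]   # made-up frame numbers
latencies = switch_latencies(events)
print(latencies, sum(latencies) / len(latencies))     # per-switch and average latency
```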

Jae’s Status Report for 4/6/24

Given that this week was the interim demo, the days leading up to it were pretty packed with work. A lot of this was tweaking numbers, setting up the environment, and debugging, so there is not much new to show in terms of progress. What I personally worked on was preventing the tracking algorithm from being distracted by buggy detection. Our current color detection is not the best, especially in bad lighting conditions. Ideal detection would output a bounding box every couple of frames of the feed, but our current detection outputs maybe 3-5 accurate boxes per lap. This meant the motors had to be stable through wrong bounding boxes as well as infrequent ones. To limit the effect of wrong bounding boxes, I set the code so that motors are only controlled when bounding boxes fall within certain dimensions and locations. Because bounding boxes were not as frequent as desired, I also set the motor angle to pan multiple degrees when a bounding box is detected away from the center of the frame. In an ideal world, this would be only a few degrees, because we would have frequent boxes to keep updating the position.
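A rough sketch of this gating logic is below. The thresholds, frame width, and step size are illustrative placeholders, not the values actually in our code.

```python
# Sketch of bounding-box gating for motor control: only drive the motor when a
# box looks plausible, and pan several degrees per valid off-center detection
# because detections are infrequent. All constants are illustrative.

FRAME_WIDTH = 640
MIN_AREA, MAX_AREA = 400, 20000       # plausible car sizes in pixels^2
CENTER_DEADBAND = 40                  # pixels around frame center we ignore
PAN_STEP_DEG = 5                      # degrees to pan per off-center detection

def motor_command(box, current_angle):
    """Return an updated pan angle, or the current one if the box is rejected."""
    x, y, w, h = box
    area = w * h
    if not (MIN_AREA <= area <= MAX_AREA):
        return current_angle                      # reject implausible boxes
    box_center = x + w / 2
    offset = box_center - FRAME_WIDTH / 2
    if abs(offset) < CENTER_DEADBAND:
        return current_angle                      # car already roughly centered
    step = PAN_STEP_DEG if offset > 0 else -PAN_STEP_DEG
    return current_angle + step
```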

My progress is on schedule. I have finished up the tracking task for the most part and have started debugging integration and fine-tuning the controls.

Next week, I hope to start working on tracking on the final demo track configuration.

Thomas’s Status Report for 3/30/24

This week, I got the GitHub repository working for my teammates and me by setting up Git LFS. I got a basic feed switching algorithm up and running, which worked by storing the bounding box sizes returned by a combination of Canny edge object detection and the GOTURN tracking algorithm, and switching to a camera if its bounding box sizes increased for 3 consecutive frames. It wasn't working very well when I tested it, though.

When the object detection moved to color-based detection, I merged my feed selection code with the new detection code and updated it to support 2 USB cameras instead of just 1. This allowed me to make a testing setup close to what we will have for the interim demo, except the cameras were stationary instead of tracking the car as it moved. The resulting livestream was still unsatisfactory: depending on the number of consecutive frames I required for a switch, it either seemed to switch randomly or failed to switch when it was supposed to. However, it was still progress.

Next, I updated my feed selection algorithm to calculate a simple moving average of the bounding box size over the past 3 frames for each camera and display the camera feed with the greatest moving average. I also added debug statements printing to a text file, which let me see the bounding box sizes for each camera on each frame, and the frame and time at which the displayed camera feed switched. This new switching algorithm performed much better, consistently capturing the front of the car as it raced down the track with 2 cameras pointed in opposite directions at opposite corners of the rectangular track, a good improvement over the previous algorithm. However, when the cameras were brought into more overlapping fields of view over the track, the switching appeared too quick and random, so more improvement is needed.
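A minimal sketch of that 3-frame moving-average selector, including the debug logging, is below. Class, variable, and file names are illustrative, not the actual ones in our repository.

```python
# Sketch of an SMA-based feed selector: keep a short history of box areas per
# camera, display the camera with the largest average, and log switches.
from collections import deque

class SMAFeedSelector:
    def __init__(self, num_cameras, window=3, log_path="feed_select_log.txt"):
        self.histories = [deque(maxlen=window) for _ in range(num_cameras)]
        self.log_path = log_path
        self.current = 0

    def update(self, box_areas, frame_idx):
        """box_areas: latest bounding-box area per camera (0 if no detection)."""
        for hist, area in zip(self.histories, box_areas):
            hist.append(area)
        averages = [sum(h) / len(h) for h in self.histories]
        best = max(range(len(averages)), key=lambda i: averages[i])
        if best != self.current:
            with open(self.log_path, "a") as log:
                log.write(f"frame {frame_idx}: switch {self.current} -> {best}, "
                          f"averages={averages}\n")
            self.current = best
        return self.current
```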

I am on schedule, having a working algorithm for the interim demo along with a testing framework that can help me improve the switching algorithm by comparing the logs against test cases. A reach goal for the interim demo is to show some of these test cases, but for now the focus is the standard rectangular track.

Next steps are to merge with the updated color detection algorithm and integrate the motor control code so the cameras automatically track the car. I need to identify specific track and camera configurations for testing next week that can show how our product meets its use-case requirements. These testing configurations will also be important in helping me identify where my simple moving average algorithm needs improvement. I need to test integration with the motor control code and ensure the switching algorithm works consistently even with auto-tracking cameras that may sometimes overlap in seeing the car.