Bhavya’s Status Report for 4/27/24

Since I was in charge of the final presentation, a lot of my time went into working on the slides. I made the entire slide deck and spent time practicing my delivery. Given that this presentation was about testing and verification, I spent time with Jae gathering as much data as we could to provide reliable statistics on how our use-case requirements were being met – both coming up with the testing conditions and running the tests themselves.

After giving the presentation on Wednesday these are my ideas on how I could have improved it:

  1. We have a working model. I should have leveraged this to show off more pictures, diagrams, and videos in the presentation. Our project is OpenCV-based, and I think showing the result that each component produces in the computer vision pipeline would have been nice for the viewers.
  2. Kept my explanations more concise and stayed on time.
  3. Kept it a little more casual and upbeat to engage the audience.

Other than the presentation, I also came up with the idea of preswitching using the ordering of the cameras, which was the only way we could make the viewing experience more robust after we hit the wall on our detection limits. Given that the cameras cannot detect beyond a certain distance, preswitching allows the viewer to see the car coming into the frame rather than appearing suddenly. I think this will greatly improve the watching experience, and I have been helping refine the idea.

Bhavya’s Status Report for 4/20/24

I have made as many changes to the detection algorithm as I think I can given our time constraints. When running our project in different lighting conditions, a few changes will have to be made to the detection to best fit that particular setting, including changing the minimum contour size for mapping and adjusting the initial bounding box used when capturing the car's colors. I could have made the minimum contour size a dynamic feature, like I did for the number of colors used to represent the car, but our group is more focused on making the stream more watchable and the switching better.
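As a rough illustration of the kind of tuning involved, here is a minimal sketch of filtering contours by a minimum area, assuming a binary mask produced by our color detection; the threshold value and function name are placeholders rather than our actual settings.

```python
import cv2

# Illustrative threshold; in practice this value is hand-tuned per lighting setup.
MIN_CONTOUR_AREA = 500

def largest_car_contour(mask):
    """Return the largest contour above the minimum area, or None.

    `mask` is assumed to be the binary image produced by the color detection step.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = [c for c in contours if cv2.contourArea(c) >= MIN_CONTOUR_AREA]
    if not candidates:
        return None
    return max(candidates, key=cv2.contourArea)
```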

One of the ideas we discussed (to make the switching better) is to use prediction to know where the car is headed. If we knew the car's direction, maybe we could anticipate where it was going and switch preemptively. So I implemented a Kalman filter, which lets me track the car's trajectory and constantly update estimates of its velocity and acceleration. I then output the result as a line of dots that shows the car's predicted path based on the past few frames.
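For reference, here is a simplified constant-velocity sketch of how this can look with OpenCV's KalmanFilter; the noise values are illustrative placeholders, and the filter described above also estimates acceleration.

```python
import cv2
import numpy as np

# Simplified constant-velocity model; state: [x, y, vx, vy], measurement: [x, y]
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3      # placeholder tuning
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # placeholder tuning

def track_and_predict(center, steps=10):
    """Feed the latest detected car center into the filter and project a short path.

    Returns a list of (x, y) points that can be drawn as a line of dots with cv2.circle.
    """
    kf.predict()
    kf.correct(np.array([[np.float32(center[0])],
                         [np.float32(center[1])]]))
    state = kf.statePost.copy()
    path = []
    for _ in range(steps):
        state = kf.transitionMatrix @ state  # roll the model forward without new measurements
        path.append((int(state[0, 0]), int(state[1, 0])))
    return path
```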

Unfortunately, since we do not know where the next camera is going to be, this new addition did not prove to be useful for switching. But it could still be helpful for adjusting the camera pan preemptively so that the camera does not lag behind the car – instead allowing the car to constantly be in the center of the frame.

We still have to fully integrate this feature.

Team Status Report for 04/27/24

This week was our final presentation in front of our Capstone group. We wanted to get a good working demo before our presentation and possibly have a video to show off our progress. We were able to achieve this!

Given that the presentation was largely based on our testing, verification, and validation, there was a significant push toward finalizing our testing strategies and collecting data.

Here is a brief outline of our major tests:

  1. Understanding how many frames we get from each camera at different power levels. This helps us gauge how good the feed would look from any single camera (a measurement sketch follows this list).
  2. Testing the switching of the cameras. This included proximity tests where the car was moved back and forth between cameras to check whether the switch took place as predicted.
  3. Detection tests, positioning the car in different orientations and in a few different lighting conditions. These tests were not conclusive, since the possible orientations and lighting conditions vary a lot. Ideally we would have tested at the extremes of the design requirements, but reproducing lighting conditions proved challenging.
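For the first test, the measurement boils down to something like the helper below; the camera index and timing window are placeholders rather than our actual test setup.

```python
import time
import cv2

def measure_fps(camera_index, seconds=10):
    """Count how many frames a camera delivers over a fixed window.

    Hypothetical helper for comparing frame rates at different power levels.
    """
    cap = cv2.VideoCapture(camera_index)
    frames = 0
    start = time.time()
    while time.time() - start < seconds:
        ok, _ = cap.read()
        if ok:
            frames += 1
    cap.release()
    return frames / seconds
```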

While we could not fully account for different lighting changes, we did make a change to address the slower switching process by taking the order of the cameras into account. This order is recorded after the initial lap and updated afterward to ensure any camera removals or additions are seamless (in case a camera has to be shut down or started up mid-race).

To fully accommodate this new change we will need more testing to make sure the ordering works. One key edge case is rapid switching between cameras, whether triggered by noise or by true detections of the car; the switches happen so quickly that the stream becomes unwatchable. Since stream quality is an important design requirement, we will need to add some provisions to the switching algorithm to handle this edge case, for example along the lines of the sketch below.
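One possible provision, sketched under the assumption that all switching decisions pass through a single point in the code, is a minimum hold time before another switch is allowed; the class and threshold are illustrative only, not our final algorithm.

```python
import time

MIN_HOLD_SECONDS = 2.0  # illustrative value; would be tuned against watchability

class SwitchDebouncer:
    """Only allow a feed switch if the current camera has been live long enough."""

    def __init__(self):
        self.current_camera = None
        self.last_switch = 0.0

    def request_switch(self, camera_id):
        now = time.time()
        if self.current_camera is None or (
            camera_id != self.current_camera
            and now - self.last_switch >= MIN_HOLD_SECONDS
        ):
            self.current_camera = camera_id
            self.last_switch = now
        return self.current_camera
```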

Jae’s Status Report for 4/27/24

This week, I helped with testing and integration. Earlier in the week, we prepared a demo video to show in class for the final presentation. I helped set up the system and run tests to capture a good stream with accurate tracking and feed switching. Later in the week, I helped Thomas debug his feed selection algorithm. As he stated in his report, there were multiple issues we had to resolve to get the system functioning. Although he wrote the majority of the code, I helped in whatever way I could to test and analyze the results.

Currently, I am on schedule, as I am pretty much done with the tracking algorithm. This coming week, I hope to refine tracking and even implement panning the motor back to its default angle once the car leaves its frame (a rough sketch of this idea is below). We hope to wrap up system implementation by around Tuesday, so we can spend the rest of the time on testing and working on the video, poster, demo, and report. These are the deliverables I hope to achieve.
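A rough sketch of the pan-reset idea, assuming the detection result and the servo angle already flow through the tracking loop; the default angle, frame threshold, and send_angle hook are placeholders, not our actual interface.

```python
DEFAULT_ANGLE = 90        # assumed resting pan angle for the servo
MISSED_FRAME_LIMIT = 30   # frames without a detection before returning to default

class PanReset:
    """Count consecutive frames without a detection and reset the pan once the car is gone.

    `send_angle` stands in for however the angle reaches the servo in our setup
    (for us, a serial message to the Arduino).
    """

    def __init__(self, send_angle):
        self.send_angle = send_angle
        self.missed = 0

    def update(self, detection, tracked_angle):
        if detection is not None:
            self.missed = 0
            self.send_angle(tracked_angle)      # normal tracking keeps following the car
        else:
            self.missed += 1
            if self.missed == MISSED_FRAME_LIMIT:
                self.send_angle(DEFAULT_ANGLE)  # car has left the frame; pan back to default
```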

Thomas’s Status Report for 4/27/24

This week, I finished developing the initial version of the feed select prediction subsystem and debugged some issues in its implementation.

One bug carried over from last week was that the prediction system would consider a lap completed as soon as one of the cameras was repeated. For example, a history of [1, 2, 1] would result in [1, 2] being treated as a full lap, with cameras 1 and 2 covering the car as it raced around the track. This wasn't the desired behavior: for a four-camera system where the track layout forces one camera to be repeated, e.g. where [1, 2, 1, 3, 4] is a full lap, the system would erroneously consider the lap complete early, in this case after [1, 2]. I fixed this by only considering the lap complete once every camera in the system had been seen.

To finish the initial version of the subsystem, I added code that lets the feed selection keep displaying the feed with the largest bounding box after the first lap is completed. Previously, it would make direct comparisons between bounding box sizes for each camera only during the first lap, and then for every later lap it would predictively switch to the next camera based on the order seen in the first lap, without comparing bounding box sizes between feeds anymore. With the new code, it alternates between predicting the next camera and comparing the bounding box sizes for each feed, and it updates the recorded camera order if the comparison for the current lap does not match the prediction based on the previous order. In this way it has become a system that uses the results from the previous lap to predict what the camera order should be for the next lap.

While testing with Jae, I also found an issue where the system would switch to the next camera too early: the current camera would sometimes lose sight of the car in the middle of its defined region of the track, and the prediction system switches as soon as it decides the current camera has lost the car. We partially fixed this by adjusting a parameter so that more consecutive frames without a detection are required before the car is considered "out of sight" of the current camera. This isn't an ideal solution, though, because it is not general to all tracks and camera configurations and would need to be tuned for each setup.

Finally, there was a bug where the system's camera ordering for the livestream was not being updated properly when a prediction was incorrect. This was a simple fix once we spotted it in testing: the previous index in the ordering needed to be updated instead of the current index, and the index must be kept the same instead of being incremented.
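To make the behavior described above concrete, here is a stripped-down sketch of the lap-completion and prediction logic; the names and thresholds are illustrative, and the real subsystem also handles comparing bounding boxes against the prediction and correcting the recorded order when a prediction turns out to be wrong.

```python
class FeedSelector:
    """Sketch of the lap-based feed selection described above (names illustrative)."""

    def __init__(self, num_cameras, out_of_sight_frames=15):
        self.num_cameras = num_cameras
        self.order = []            # camera order observed during the first lap (repeats kept)
        self.lap_done = False
        self.current = None
        self.position = 0          # index into `order` once the first lap is complete
        self.missed = 0
        self.out_of_sight_frames = out_of_sight_frames

    def first_lap_update(self, box_sizes):
        """During the first lap, show whichever feed has the largest bounding box."""
        best = max(box_sizes, key=box_sizes.get)
        if best != self.current:
            self.current = best
            self.order.append(best)
            # The lap counts as complete only once every camera has been seen at least once.
            if len(set(self.order)) == self.num_cameras:
                self.lap_done = True
        return self.current

    def predictive_update(self, car_visible):
        """After the first lap, move to the next camera in the recorded order once the
        current camera has not seen the car for enough consecutive frames."""
        self.missed = 0 if car_visible else self.missed + 1
        if self.missed >= self.out_of_sight_frames:
            self.position = (self.position + 1) % len(self.order)
            self.current = self.order[self.position]
            self.missed = 0
        return self.current
```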

Progress is on schedule, as I have completed the initial version of the feed select prediction subsystem as mentioned in the previous status report. I need to be finished with the system features by Monday or Tuesday at the latest so we can begin testing and complete the poster on Tuesday. In this sense I am behind on testing the new subsystem. I will just need to think of test cases as I finish up the features. Then I will also need to help create the video on Thursday, demo on Friday, and put together the final report on Saturday. These are also the deliverables I hope to complete.

Team Status Report for 4/20/24

The most significant risk that we need to manage is how late our feed switches. Although we have an algorithm that tries to switch early, our detection picks up the car late, which means our switching always happens as the car passes, not before. We are trying to manage this risk by implementing a new algorithm that uses a history of previous camera switches. With it, we can remember which camera to switch to and can switch even before a camera detects the car.

This is the main change in our design. We are currently in the process of getting it to work and seeing how stable it will be. The only cost right now is time and perhaps buggier feed switching, but if it works, we believe it will make our feed switching much better.

We are on track in our schedule. I think we should be testing right now, but I guess we are testing while implementing this new algorithm.

Jae’s Status Report for 4/20/24

I accidentally did status report 9 last week thinking it was due then. So I will update the progress of the project, but please refer to that one for the part asking for new tools/knowledge I needed.

In terms of our project progress, we were able to scale the system up to four cameras. I ensured that motor tracking worked for all four cameras. Afterwards, I provided support for feed selection testing, as we are trying out a new algorithm.

I am currently on schedule, as we are on the integration/testing period of the project.

Next week, I hope to help my team with testing. Right now we are just working on the feed selection algorithm, so hopefully I can provide some helpful comments and help test.

Thomas’s Status Report for 4/20/24

This week on the project, I started developing a new filtering strategy for the bounding box sizes using a double exponential smoothing algorithm I found outlined online. Initially its performance wasn't as good as the simple moving average (SMA) I had been using, but with some tuning of the value and trend parameters I got it to perform about as well as the SMA during testing (a small sketch of the smoothing update appears at the end of this report). Next, I'd like to informally compare the two filtering algorithms to determine which one performs better on the track we have chosen for demo day. I'll use the eye test to judge which one is more accurate in switching to the desired camera and which one is more desirable with respect to switching timing, since one of our current goals for the feed selection algorithm is for it to switch earlier so that people can see the car coming into the camera's field of vision instead of the switch happening late.

I also updated our main branch to support 3 cameras, up from 2, for a demo during our meeting with Prof. Kim on Wednesday, and following that updated it to support 4 cameras, which is the number we will be using on demo day.

In addition, I started developing a subsystem that tracks the order in which the camera feeds were displayed in previous laps, with the goal of using that historical information to predict the next camera to switch to, since the switch needs to occur even before a camera sees the car for people to be able to see the car entering its field of vision. I haven't finished the initial implementation of this new subsystem yet, but I have a partially completed version that I am debugging. Toward this end I made some changes to the system to make testing more efficient: I added a feature that allows the camera feeds to be switched while the livestream is paused, to check what each camera was seeing at that moment, and a feature that allows each camera to capture and use its own color profile instead of one camera capturing the color profile for all the others. I also made it so that if the color profile captured for one of the cameras isn't detecting the car very well, it can easily be redone without having to restart the system.
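Going back to the filtering work mentioned at the top of this report, the double exponential smoothing update boils down to something like the sketch below; alpha and beta are the value and trend parameters, and the numbers shown are placeholders, not my tuned values.

```python
def make_des_filter(alpha=0.5, beta=0.3):
    """Double exponential smoothing for a bounding-box-size time series.

    `alpha` smooths the value (level) and `beta` smooths the trend; both are placeholders.
    """
    level = None
    trend = 0.0

    def smooth(x):
        nonlocal level, trend
        if level is None:
            level = x          # initialize on the first sample
            return x
        prev_level = level
        level = alpha * x + (1 - alpha) * (prev_level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        return level + trend   # smoothed estimate fed to the feed selection logic

    return smooth
```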

Since we are now in the last two weeks of the project, I would like to be able to complete the new subsystem I am working on by next week in order to give some slack time before our demo on Friday. Towards that end, I plan on finishing debugging my partial version by Tuesday and then ideally finishing the initial implementation by Wednesday night, leaving myself Thursday through Saturday for debugging and any additional work that might come up.

In order to accomplish my tasks during the course of this project, I needed to learn how to write code using the OpenCV library. The learning strategy I used for this was primarily reading through the OpenCV documentation online, and following some additional third-party tutorials when the documentation seemed outdated or not as good as I would have liked. I also needed to learn how to configure Git source control for files larger than 250 MB, because we were having trouble setting up our repository, which included a very large machine learning model in the initial stages of our project. The learning strategy I used here was following the recommendation in the Git output when the push failed and reading through the documentation for Git LFS, which allowed us to set up source control that worked. Finally, I needed to learn how to filter noisy data in order to get usable bounding box size time series for the feed selection algorithm. For this I looked at online explanations of various filtering strategies, starting from SMA on Wikipedia. Potentially I could have reviewed the material I learned in 18290, but unfortunately I wasn't able to figure out which parts of it could be helpful, since it seems mostly mathematical and not applicable to my situation.

Team Status Report for 4/13/24

The most significant risk that could jeopardize our project is still color detection failures, which we have not been able to resolve yet. To mitigate this, we've been trying to work in good lighting and select our object of interest very carefully to get the correct colors. Another risk we need to manage is scaling up our system: as we include more cameras, feed selection becomes more difficult, because the increased overlap between feeds means there are more options to choose from. To mitigate this, we are in the process of incorporating the rate of change of bounding box sizes into our algorithm (a rough sketch is below). We also want to look into finding the direction of the car.
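A minimal sketch of what incorporating the rate of change could look like, assuming we already have a per-camera bounding box size each frame; the window length is an arbitrary illustrative choice, not our tuned value.

```python
from collections import deque

class BoxSizeRate:
    """Estimate how fast a camera's bounding box is growing.

    A growing box suggests the car is approaching that camera, which could be
    weighed alongside the raw box size when selecting the feed.
    """

    def __init__(self, window=5):
        self.sizes = deque(maxlen=window)

    def update(self, size):
        self.sizes.append(size)
        if len(self.sizes) < 2:
            return 0.0
        # Average per-frame change over the window.
        return (self.sizes[-1] - self.sizes[0]) / (len(self.sizes) - 1)
```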

We finally settled on the track we will be demoing on. We wanted to incorporate a loop and an elevation change, and we think this track is simple yet very telling of our system's capability. We plan to use four cameras and place them as shown in the picture. Other than this, our design is pretty much the same, just scaled up.

Our schedule has not changed.

Jae’s Status Report for 4/13/24

This week, I spent some time figuring out what track configuration would be best suited for our final demo. It took some trial and error, but we wanted to include a loop of some sort, which means there has to be track elevation as well. We decided on the one I included a picture of in the team report. Additionally, I spent most of the week scaling the object tracking code to take in 3 camera inputs and control 3 motors. The final system must have 4 cameras and motors, so I need to scale it up one more time. I was also able to replace the multi-connected jumper wires with single long ones to clean things up. This took a good amount of time… since these have to run the length of the track for each motor. However, I haven't changed the content of the code yet to make the tracking smoother, and this is something I plan to do this weekend.

My progress is somewhat on schedule. I think next week will look a bit busy, but I think it is doable.

I wish to scale up the system to 4 cameras/motors and basically have the final demo running, although a bit buggy.

I haven’t really used too many new tools. Most of the work I had to do was simple enough that I could do it with my own knowledge: Arduino code, Python, the servo library, soldering, and so on. Actually, one thing I found helpful that I’m embarrassed to say I didn’t fully know before was Git. I was able to really understand how branches work and how to merge correctly. This took up a good amount of time in our integration, but I’m glad to have learned how to use Git.