Bhavya’s Status Report for 4/6/24

In preparation for the interim demo, there was a lot of integration testing. We constantly had to test on different track configurations, different lighting conditions, and different speeds of the car. I continued to refine the detection and tracking algorithms based on our tests and the requirements we had for the demo. A key flaw in our system seems to be the lack of memory used to predict the car's position. I am currently working on integrating Kalman filters into the tracking algorithm so that the car's recent trajectory along its path can be used to produce more accurate position estimates.
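As a starting point, below is a minimal sketch of how a constant-velocity Kalman filter could be wired into the tracker using OpenCV's cv2.KalmanFilter; the state layout and noise values are assumptions for illustration, not our tuned parameters.

```python
import numpy as np
import cv2

# Constant-velocity Kalman filter sketch (assumed state: x, y, vx, vy).
# Noise values are placeholders, not tuned parameters.
kf = cv2.KalmanFilter(4, 2)  # 4 state variables, 2 measured (box center x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)  # start with non-zero uncertainty

def track_step(detection):
    """detection is (cx, cy) of the latest bounding box, or None if the frame was missed."""
    predicted = kf.predict()  # predicted state before seeing this frame's detection
    if detection is not None:
        measurement = np.array([[detection[0]], [detection[1]]], dtype=np.float32)
        kf.correct(measurement)  # fold the new detection into the estimate
    # When detection misses, the prediction still gives a usable position.
    return float(predicted[0, 0]), float(predicted[1, 0])
```

The idea is that when detection misses a frame, the filter's prediction still gives the motor control and feed selection code a position to work with.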

Team Status Report for 4/6/24

The most significant risks that could jeopardize the success of the project are (1) color detection picking up objects in the background, made worse by inconsistent lighting, and (2) feed selection switching too late or switching seemingly at random on track configurations where camera zones slightly overlap or there are sharp bends. The first risk is being managed by further research on color detection and by asking for help from people experienced in computer vision. The second risk is being managed by testing variations of the track and camera placements that exercise interesting edge cases before we choose the final track for demo day. Contingency plans are painting the cars a solid, bright color, adding a lighting fixture or using a room with consistent lighting, and making feed selection take more information into account rather than relying on a single general rule.

No changes were made to the system.

The schedule is as shown at the interim demo.

Validation tests will involve running the car on a track/camera configuration test vector for a set number of laps and reviewing the livestream output for the metrics defined in our user and design requirements, such as latency, smoothness, and the optimality of the feed choice given what is in each frame.

Thomas’s Status Report for 4/6/24

This week I helped with integration of the motor control and feed selection subsystems by adding flags to the main loop that allow each subsystem to be enabled or disabled independently. I also improved the efficiency of system testing by adding a feature that allows the color selection process to be restarted without restarting the whole system.
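For reference, here is a rough sketch of what those toggles look like in the main loop; the key bindings and helper functions are placeholders standing in for our real subsystem code, not the exact implementation.

```python
import cv2

# Placeholder helpers standing in for the real subsystem code.
def run_color_selection():   # the draw-a-box color selection step
    return (0, 0, 255)

def detect_car(color):       # color detection; returns bounding boxes
    return []

def update_motors(boxes):    # motor control subsystem
    pass

def choose_feed(boxes):      # feed selection subsystem
    pass

motor_enabled, feed_enabled = True, True
target_color = run_color_selection()

while True:
    key = cv2.waitKey(1) & 0xFF
    if key == ord('m'):      # toggle motor control independently
        motor_enabled = not motor_enabled
    elif key == ord('f'):    # toggle feed selection independently
        feed_enabled = not feed_enabled
    elif key == ord('c'):    # redo color selection without restarting the system
        target_color = run_color_selection()
    elif key == ord('q'):
        break

    boxes = detect_car(target_color)
    if motor_enabled:
        update_motors(boxes)
    if feed_enabled:
        choose_feed(boxes)
```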

My progress is on schedule.

Next week, I hope to have a feed selection algorithm that has met certain verification requirements.

One test we will run is making sure the feed switches once the front of the car starts to come into the frame of a new camera feed. We can quantify this by finding the first frame at which this condition is satisfied, finding the frame at which the feed actually switches, and counting the number of frames in between. This will be done on both a circular track and a figure-eight track.
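A small sketch of how this count could be computed from per-frame logs is below; the log format and example numbers are made up for illustration.

```python
def frames_until_switch(car_in_new_feed, displayed_feed, new_feed_id):
    """car_in_new_feed: per-frame booleans, True once the car's front is visible
    in the new camera's view; displayed_feed: per-frame id of the feed shown."""
    first_seen = next(i for i, seen in enumerate(car_in_new_feed) if seen)
    switched = next(i for i, feed in enumerate(displayed_feed)
                    if i >= first_seen and feed == new_feed_id)
    return switched - first_seen

# Made-up example: car enters camera 2's view at frame 3, feed switches at frame 6,
# so the gap is 3 frames.
print(frames_until_switch(
    [False, False, False, True, True, True, True, True],
    [1, 1, 1, 1, 1, 1, 2, 2],
    new_feed_id=2))
```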

Jae’s Status Report for 4/6/24

Given that this week was the interim demo, the days leading up to it were pretty packed with work. A lot of this was tweaking numbers, setting up the environment, and debugging, so there is not much new to show in terms of features. What I personally worked on was preventing the tracking algorithm from being thrown off by buggy detection. Our current color detection is not the best, especially in bad lighting conditions. Ideal detection would output a bounding box every couple of frames of the feed, but our current detection outputs maybe 3-5 accurate boxes per lap. This meant that the motors had to be stable through both wrong bounding boxes and infrequent boxes. To limit the effect of a wrong bounding box, I set the code so that the motors are only commanded when a bounding box falls within certain dimensions and locations. Because bounding boxes were not as frequent as desired, I also set the motor to pan by multiple degrees whenever a bounding box is detected outside the center of the frame. In an ideal world this step would be a minimal number of degrees, because frequent boxes would keep updating the position.
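For illustration, here is a rough sketch of the bounding-box sanity gate described above; the size limits and jump threshold are placeholder values, not the ones we tuned.

```python
FRAME_W, FRAME_H = 640, 480
MIN_SIZE, MAX_SIZE = 15, 250      # plausible car box side lengths in pixels (illustrative)

def is_plausible(box, last_center=None, max_jump=200):
    """Accept a bounding box only if its dimensions and location look like the car."""
    x, y, w, h = box
    if not (MIN_SIZE <= w <= MAX_SIZE and MIN_SIZE <= h <= MAX_SIZE):
        return False                     # implausible dimensions -> likely noise
    if x < 0 or y < 0 or x + w > FRAME_W or y + h > FRAME_H:
        return False                     # partially outside the frame
    if last_center is not None:
        cx, cy = x + w / 2, y + h / 2
        if abs(cx - last_center[0]) > max_jump or abs(cy - last_center[1]) > max_jump:
            return False                 # jumped too far from the previous detection
    return True
```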

My progress is on schedule. I have finished the tracking task for the most part and have started debugging integration and fine-tuning the controls.

Next week, I hope to start working on tracking on the final demo track configuration.

Thomas’s Status Report for 3/30/24

This week, I got the GitHub repository working for my teammates and me by setting up Git LFS. I also got a basic feed switching algorithm up and running. It worked by storing the bounding box sizes returned by a combination of Canny edge object detection and the GOTURN tracking algorithm and switching to a camera if its bounding box sizes increased for 3 consecutive frames. It wasn't working very well when I tested it, though.

When the object detection algorithm moved to color-based detection, I merged my feed selection code with the new object detection code and updated it to support 2 USB cameras instead of just 1. This allowed me to make a testing setup close to what we would have for the interim demo, except the cameras were stationary instead of tracking the car as it moved. The resulting livestream feed was still unsatisfactory because it either seemed to switch randomly or not switch when it was supposed to, depending on the number of consecutive frames I required for a switch. However, it was still progress.

Next, I updated my feed selection algorithm to calculate a simple moving average over the past 3 frames for each camera and display the camera feed with the greatest moving average. I also added debug statements printing to a text file, which let me see the bounding box sizes for each camera in each frame, along with the frame and time at which the displayed camera feed switched. This new algorithm performed much better: it consistently captured the front of the car as it raced down the track with 2 cameras pointed in opposite directions at opposite corners of the rectangular track, a good improvement over the previous algorithm. However, when the cameras were brought into more overlapping fields of view over the track, the switching appeared too quick and random, so there is more improvement needed.
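Below is a small sketch of the moving-average selection logic described above; the 3-frame window matches what I implemented, while the helper structure and example numbers are just for illustration.

```python
from collections import deque

WINDOW = 3
history = {cam_id: deque(maxlen=WINDOW) for cam_id in (0, 1)}  # recent box areas per camera

def select_feed(box_areas):
    """box_areas maps camera id -> bounding box area this frame (0 if no detection).
    Returns the camera whose recent average box area is largest."""
    for cam_id, area in box_areas.items():
        history[cam_id].append(area)
    averages = {cam_id: sum(h) / len(h) for cam_id, h in history.items() if h}
    return max(averages, key=averages.get)

# Example: camera 1's box grows while camera 0's shrinks, so the displayed feed
# settles on camera 1 after a few frames.
for frame in [{0: 900, 1: 100}, {0: 600, 1: 400}, {0: 300, 1: 800}, {0: 100, 1: 1200}]:
    print(select_feed(frame))
```

The averaging smooths out single-frame noise in the bounding box sizes, which is what reduced the random switching compared to the consecutive-growth rule.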

I am on schedule, having a working algorithm for the interim demo along with a testing framework that can help me improve the switching algorithm by comparing the logs against test cases. A reach goal for the interim demo is to show some of these test cases, but for now the focus is the standard rectangular track.

Next steps are to merge with the updated color detection algorithm and integrate the motor control code allowing the cameras to automatically track the cars. I need to identify specific track & camera configurations for testing next week that can show how our product meets its use case requirements. These testing configurations will also be important in helping me identify where I need to improve with regards to my simple moving average algorithm. I need to test integration with the motor control code and ensure the switching algorithm works consistently even with auto-tracking cameras that may sometimes have overlap in seeing the car.

Team Status Report for 3/30/24

Right now, the most significant risk that could jeopardize the success of the project is our detection algorithm, which currently relies on color detection, being thrown off by interference from the environment. Bhavya is working hard on minimizing environmental interference after switching over from object detection (which took too long) to color detection, but the algorithm is definitely not perfect yet. This is a risk because both the motor control and feed selection modules depend on the accuracy of the bounding boxes output by detection; if detection is a little shaky, the motors could pan inaccurately and feed selection may pick the wrong camera. Bhavya is working on mitigating these risks as described in his status report. The contingency plan is to run the system with good lighting and at a location/angle where the car, as it moves along the track, never overlaps with another object or background of the same color.

One of the changes we made was using a manual voltage supply to power the race track. We found that the 9V power supply drove the cars faster than we could reliably track. By lowering the supply to approximately 4V, the car moves at a slower speed, which makes the system much easier to test. We needed this change for now, but we hope to bump the speed back up as fast as we can accurately track. This change added no cost to our camera system, since it only affects the track itself. Another change we made was using a simple moving average for feed selection: rather than switching feeds on raw bounding box sizes alone, we switch on a moving average of those sizes with a window of 3 frames. This has been working when the cameras are placed in opposite corners pointing along the direction of car travel.

Now that we are in the integration period, the schedule is up to date.

We didn’t take any convincing photos, but we are looking forward to showing our progress during the interim demo next week :)

Jae’s Status Report for 3/30/24

This week, I mainly worked on interfacing with the object detection module. Since my motor control module runs on the Arduino side, I needed a way to take the bounding box of the detected object from the detection module and use it to tell the motors to pan left or right by some number of degrees. For now, I am using a simple algorithm: I place an imaginary box in the middle of the screen, and if the detected object's center point is to the left or right of that box, the function sends a serial command to the Arduino telling it to pan left or right by x degrees. The tricky part is smoothing the panning. The two factors that contribute most to smooth panning are the imaginary box width and the number of degrees the motor turns per command. Currently, I am at a width of 480 out of the 640-pixel frame and 7 degrees of panning, since we expect the car to be moving while the camera is capturing it. I will do more testing to finalize these values.
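As a reference, here is a hypothetical sketch of that path from the detection side to the Arduino using pyserial; the port name and message format are placeholders, not our actual protocol.

```python
import serial

FRAME_W = 640
BOX_W = 480        # width of the centered "imaginary box" (dead zone)
PAN_DEG = 7        # degrees to pan per command

arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=1)  # placeholder port name

def send_pan_command(box_center_x):
    left_edge = (FRAME_W - BOX_W) / 2
    right_edge = left_edge + BOX_W
    if box_center_x < left_edge:
        arduino.write(f"L{PAN_DEG}\n".encode())   # pan left by PAN_DEG degrees
    elif box_center_x > right_edge:
        arduino.write(f"R{PAN_DEG}\n".encode())   # pan right by PAN_DEG degrees
    # inside the imaginary box: send nothing, the camera holds its position
```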

Additionally, I worked on finding a way to slow down the car, as it was moving too fast for testing purposes. Following Professor Kim's advice, we attempted to manually control the track's voltage supply instead of plugging in the 9V adapter directly. I removed the batteries and adapter, took apart the powered section of the track, soldered two wires to the ground and power ends, clipped them onto a DC power supply, and it worked perfectly. The voltage we are using for testing is set at 4V, which makes the car significantly slower than before.

Camera assembly has also been wrapped up this week, as the screws finally came in. Although it functions, I will try to make it more stable in the next few weeks when I get the chance.

My progress is now back on schedule.

Next week, I hope to have an integrated, demo-able system. Additionally, I want to keep fine-tuning the motor controls as well as stabilizing the camera stands.

Bhavya’s Status Report for 3/30/24

This week I spent time refining my detection algorithm. After moving back to simple color and edge detection (from ML-based models, due to latency issues), there were challenging edge cases I had to tackle, such as other similarly colored objects in the feed and varying lighting conditions on the track. Here are the strategies I used to partially solve these issues (a rough code sketch of the resulting detection step follows the list):

  • The program first requires you to select the car (by letting you draw a box around it) so that it can perform a color analysis.
  • I could have done the color analysis in two ways that I have detailed below. I have implemented both and am still running tests to see which one performs better.
    • Either select the top colors in the box
    • Or select the top color and then look for other similar shades – I thought this method would help in particular with different lighting conditions on the track, where different shades could be more prominent. This is done by restricting the range of colors around the most prominent color.
  • Once the top colors are selected I also need to decide how many I need to best represent the car – too many colors makes the masking of the frame useless as it captures a lot of the background. But sometimes a few colors represent the car better than just one. I let the detection algorithm decide by observing how much of the car it could detect without detecting additional environment for different numbers of colors.
  • Once the camera detects the car, the detection algorithm is only permitted to search for the car in its immediate neighborhood in the next run.
  • Testing the minimum threshold that qualifies as a detection while preventing noise from being captured. (Changing the minimum contour size that the edge detection provides.)
  • Restricting the speed at which the detected box can grow (this prevents noise from affecting it immediately).
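As referenced above, here is an illustrative sketch of the color-mask-plus-contour detection step built from these strategies; the HSV tolerances, minimum contour area, and neighborhood margin are placeholders rather than the values I am testing.

```python
import cv2
import numpy as np

HUE_TOL, SAT_TOL, VAL_TOL = 10, 60, 60   # "similar shades" band around the top color
MIN_CONTOUR_AREA = 150                    # minimum contour area to count as a detection
SEARCH_MARGIN = 80                        # only search near the previous detection

def detect_car(frame_bgr, top_color_hsv, last_box=None):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([max(top_color_hsv[0] - HUE_TOL, 0),
                      max(top_color_hsv[1] - SAT_TOL, 0),
                      max(top_color_hsv[2] - VAL_TOL, 0)])
    upper = np.array([min(top_color_hsv[0] + HUE_TOL, 179),
                      min(top_color_hsv[1] + SAT_TOL, 255),
                      min(top_color_hsv[2] + VAL_TOL, 255)])
    mask = cv2.inRange(hsv, lower, upper)    # keep only shades near the selected color

    if last_box is not None:                 # restrict the search to the neighborhood
        x, y, w, h = last_box                # of the previous detection
        region = np.zeros_like(mask)
        region[max(y - SEARCH_MARGIN, 0):y + h + SEARCH_MARGIN,
               max(x - SEARCH_MARGIN, 0):x + w + SEARCH_MARGIN] = 255
        mask = cv2.bitwise_and(mask, region)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= MIN_CONTOUR_AREA]
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))  # (x, y, w, h)
```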

Overall, I can currently track the car pretty well in stable lighting conditions and with some colored distractors present. However, the system is not completely robust and will require more testing in the final stretch of the project.

Along with this, we also performed some integration testing; those details are in the team report.

Team Status Report for 3/23/24

Everything seems to be on track as far as implementation goes, although progress has still been slow. A few problems with Git and transferring large files hampered collaboration; by using Git LFS we seem to have worked through that problem.

Secondly, when testing the car speed we felt it might be too quick for our detection and tracking. We had two options: running the car at its full speed or scaling real-world F1 conditions down to the toy track.

Since the current detection (YOLO model) and tracking (GOTURN model) seemed too slow for the system at max speed, we decided to pivot to a hybrid between purely color-based tracking and GOTURN. This seems to be able to keep up with the toy set speeds.

There has been progress on the switching algorithms. We still need to receive the connecting pieces for the motors and the cameras to finish the stand.

We aim to have a working demo by Friday and spend the rest of the semester on integrating and fine-tuning the parameters.

Thomas’s Status Report for 3/23/24

This week, I worked on a basic feed selection algorithm which switches to a new camera feed every time it sees bounding box sizes increase consecutively for 3 frames. I asked Bhavya to test it with his object detection code, using his webcam and his hand to move the car, and I have also prepared it to be tested tomorrow on a looping video with 2 perspectives of the car (from the bottom left corner and the bottom side). The number of frames that need to be seen consecutively increasing in size can be varied to experiment with what makes for the best switching. More advanced algorithms are still in the idea phase, one of which involves switching when the front of the car is identified in one of the frames using a custom-trained cascade classifier. This works toward the approach proposed in the design document, which combines both of these metrics for switching.

I also tried out an idea that acts as a risk mitigation plan for the car's speed as it races around the track: taping quarters to the car to slow it down. I was able to get 5 quarters on the car, which slowed it down noticeably, taking at least a few more seconds to finish 10 laps around a small track. The drawback of this strategy is that the car is a bit more likely to run off the track during tight turns at high speeds, so that is one challenge we will have to account for in our track design and crash system requirements if we end up needing this risk mitigation. We will also have to adjust our object detection algorithms if we use tape, since the car looks different, especially if we are custom-training them.
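For concreteness, here is a small sketch of that consecutive-growth switching rule; the 3-frame count matches what I described, while the class structure is just illustrative.

```python
CONSECUTIVE_FRAMES = 3

class FeedSwitcher:
    def __init__(self, num_cameras, n=CONSECUTIVE_FRAMES):
        self.n = n
        self.prev_area = [0] * num_cameras       # last box area seen per camera
        self.growth_streak = [0] * num_cameras   # consecutive frames of growth per camera
        self.current = 0                         # camera currently displayed

    def update(self, areas):
        """areas: bounding box area per camera this frame. Returns the camera to show."""
        for cam, area in enumerate(areas):
            if area > self.prev_area[cam]:
                self.growth_streak[cam] += 1
            else:
                self.growth_streak[cam] = 0
            self.prev_area[cam] = area
            # switch when another camera's box has grown for n frames in a row
            if cam != self.current and self.growth_streak[cam] >= self.n:
                self.current = cam
                self.growth_streak[cam] = 0
        return self.current
```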

My progress is a little behind schedule since I'm interfacing with Bhavya's CV code for the first time tomorrow, so hopefully that goes well. To stay on track I'll need to have our code fully integrated tomorrow and then, as it gets updated, use Git source control to keep it integrated as we both continue working on our sides. I'll need to get the best performance out of the bounding box size algorithm by Monday, ideally, and have the cascade classifier-based algorithm trained and tested a bit by Wednesday. By next Saturday I'll need to have integrated these into as solid an algorithm as I can get for the interim demo.