Thomas’s Status Report for 3/30/24

This week, I got the GitHub repository working for my teammates and me by setting up Git LFS. I also got a basic feed-switching algorithm up and running: it stored the bounding-box sizes returned by a combination of the Canny edge object detection and GOTURN tracking algorithms, and switched to a camera if that camera's bounding-box sizes increased for 3 consecutive frames. It didn't work very well when I tested it, though. When the object detection algorithm moved to color-based detection, I merged my feed selection code with the new detection code and updated it to support 2 USB cameras instead of just 1. This let me build a testing setup close to what we will have for the interim demo, except that the cameras were stationary instead of tracking the car as it moved. The resulting livestream feed was still unsatisfactory: depending on how many consecutive frames I required before a switch, it either seemed to switch randomly or failed to switch when it should have. Still, it was progress.

Next, I updated my feed selection algorithm to compute a simple moving average of bounding-box size over the past 3 frames for each camera and display the feed with the greatest moving average. I also added debug statements that print to a text file, which let me see the bounding-box sizes for each camera on each frame, along with the frame and time at which the displayed feed switched. This new switching algorithm performed much better: with 2 cameras pointed in opposite directions at opposite corners of the rectangular track, it consistently captured the front of the car as it raced around, a good improvement over the previous algorithm. However, when the cameras were given more overlapping fields of view over the track, the switching appeared too quick and random, so there is more improvement needed.
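For reference, here is a minimal sketch of the moving-average selection and logging logic. The class and function names are mine, and the input format (one bounding-box area per camera per frame, 0 when the car is not seen) is an assumption, not our exact code:

    from collections import deque

    WINDOW = 3  # moving-average width, in frames

    class FeedSelector:
        # Show the camera whose bounding-box area has the largest
        # simple moving average over the last WINDOW frames.
        def __init__(self, num_cameras, log_path="switch_log.txt"):
            self.histories = [deque(maxlen=WINDOW) for _ in range(num_cameras)]
            self.current = 0
            self.log = open(log_path, "a")

        def update(self, frame_idx, box_areas, timestamp):
            # box_areas[i] is the area reported for camera i this frame.
            for hist, area in zip(self.histories, box_areas):
                hist.append(area)
            averages = [sum(h) / len(h) for h in self.histories]
            best = max(range(len(averages)), key=lambda i: averages[i])
            self.log.write(f"frame {frame_idx}: areas={box_areas} avgs={averages}\n")
            if best != self.current:
                self.log.write(f"switch {self.current} -> {best} at frame {frame_idx}, t={timestamp:.2f}s\n")
                self.current = best
            return self.current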

I am on schedule, having a working algorithm for the interim demo along with a testing framework that can help me improve the switching algorithm by comparing the logs against test cases. A reach goal for the interim demo is to show some of these test cases, but for now the focus is the standard rectangular track.

Next steps are to merge with the updated color detection algorithm and integrate the motor control code so the cameras can automatically track the cars. I need to identify specific track and camera configurations for testing next week that can show how our product meets its use-case requirements. These testing configurations will also be important in helping me identify where my simple moving average algorithm needs improvement. Finally, I need to test the motor control integration and ensure the switching algorithm works consistently even with auto-tracking cameras whose views of the car may sometimes overlap.

Team Status Report for 3/30/24

Right now, the most significant risk that could jeopardize the success of the project is environmental interference with our detection algorithm, which currently works through color detection. Bhavya is working hard on minimizing this interference after switching over to color detection from object detection (which took too long), but the algorithm is definitely not perfect yet. This is a risk because both the motor control and feed selection modules depend on the accuracy of the bounding boxes output by detection; if detection is shaky, the motors could pan inaccurately and feed selection may be wrong. Bhavya is working on mitigating these risks as described in his status report, but the contingency plan is to run the system with good lighting and at a location and angle where the car on the track never overlaps with another object or a background of the same color.
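For context, color detection of this kind usually boils down to HSV-range masking; the sketch below illustrates the idea, with a placeholder color range rather than our calibrated values:

    import cv2
    import numpy as np

    def detect_car(frame_bgr, lower_hsv, upper_hsv):
        # Return the bounding box (x, y, w, h) of the largest blob in
        # the color range, or None if nothing matches.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
        # Morphological opening suppresses small background specks
        # that happen to fall inside the color range.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return cv2.boundingRect(max(contours, key=cv2.contourArea))

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:  # placeholder range for a red car; real values come from calibration
        box = detect_car(frame, np.array([0, 120, 70]), np.array([10, 255, 255]))

Any background object falling inside the same HSV range produces a competing contour, which is exactly the interference described above.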

One of the changes we made was using a manual voltage supply to power the race track. We saw that the 9V power supply was driving the cars too fast. By lowering the supply to approximately 4V, the car moves at a slower speed, making the system easier to test. We needed this change for now, but we hope to bump the speed back up as fast as we can accurately track. This change incurred no cost to our camera system, since it is a change within the track itself. Another change we made was using a simple moving average for feed selection. This means we no longer switch feeds based on raw bounding-box sizes, but on their moving average with a window of 3. This has been working when the cameras are placed in opposite corners, pointed in the car's direction of travel.

Now that we are in the integration period, our schedule is up to date.

We didn’t take any convincing photos, but we are looking forward to showing our progress during the interim demo next week :)

Jae’s Status Report for 3/30/24

This week, I mainly worked on interfacing with the object detection module. Since my motor control module runs on the Arduino side, I needed a way to take the bounding box of the detected object from the detection module and use it to tell the motors to pan left or right by some number of degrees. For now, I am using a simple algorithm: I place an imaginary box in the middle of the screen, and if the detected object's center point is to the left or right of that box, the function sends serial data to the Arduino telling it to pan left or right by x degrees. The tricky part is smoothing the panning. The two factors that contribute most to smooth panning are the imaginary box's width and the number of degrees the motor turns per command. Currently, I am using a width of 480 out of the 640-pixel frame and 7 degrees of panning, since we expect the car to be moving while the camera captures it. I will do more testing to finalize these values.
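As a rough illustration, the dead-zone check might look like the sketch below. The serial port, the "L"/"R" direction tokens, and the newline terminator are assumptions; the command fields follow the format from our 3/16 report:

    import serial

    FRAME_WIDTH = 640
    BOX_WIDTH = 480     # width of the imaginary dead-zone box
    PAN_DEGREES = 7     # degrees to pan per command

    arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # port is machine-specific

    def pan_command(motor_id, bbox):
        # bbox is (x, y, w, h); return a command string if the box
        # center falls outside the central dead zone, else None.
        center_x = bbox[0] + bbox[2] / 2
        margin = (FRAME_WIDTH - BOX_WIDTH) / 2
        if center_x < margin:                 # drifting off the left edge
            return f"{motor_id}:L:{PAN_DEGREES}"
        if center_x > FRAME_WIDTH - margin:   # drifting off the right edge
            return f"{motor_id}:R:{PAN_DEGREES}"
        return None                           # inside the dead zone: hold

    cmd = pan_command(1, (500, 200, 120, 60))
    if cmd:
        arduino.write((cmd + "\n").encode())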

Additionally, I worked this week on finding a way to slow down the car, as it was moving too fast for testing purposes. On Professor Kim's advice, we attempted to manually control the track's voltage supply instead of plugging in directly through the 9V adapter. I removed the batteries and adapter, took apart the powered section of the track, soldered two wires to the ground and power ends, and clipped them onto a DC power supply, and it worked perfectly. We are testing at 4V, which makes the car significantly slower than before.

Camera assembly has also been wrapped up this week, as the screws finally came in. Although it functions, I will try to make it more stable in the next few weeks when I get the chance.

My progress is now back on schedule.

Next week, I hope to have an integrated, demo-able system. I also want to keep fine-tuning the motor controls and to stabilize the camera stands.

Thomas’s Status Report for 3/23/24

This week, I worked on a basic feed selection algorithm that switches to a new camera feed every time it sees bounding-box sizes increase for 3 consecutive frames. I asked Bhavya to test it with his object detection code, using his webcam and moving the car by hand, and I have also prepared it to be tested tomorrow on a looping video with 2 perspectives of the car (from the bottom-left corner and from the bottom side). The number of consecutive growing frames required can be varied to experiment with what makes for the best switching. More advanced algorithms are still in the idea phase, one of which involves switching when the front of the car is identified in one of the frames using a custom-trained cascade classifier. This works toward the approach proposed in the design document, which combines both of these metrics for switching.

I also tried out an idea that acts as a risk mitigation plan for the car's speed as it races around the track: taping quarters to the car to slow it down. I was able to get 5 quarters on the car, which slowed it down noticeably, taking at least a few more seconds to finish 10 laps around a small track. The drawback of this strategy is that the car is a bit more likely to run off the track during tight turns at high speeds, so that is one challenge we will have to account for in our track design and crash-system requirements if we end up needing this mitigation. We will also have to adjust our object detection algorithms if we use tape, since the car looks different, especially if we are custom-training them.
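For reference, a minimal sketch of the consecutive-increase switching rule described above (the function name and history format are mine):

    def should_switch(areas, streak=3):
        # areas: bounding-box areas seen so far on a candidate camera.
        # True if the last `streak` frames each grew over the previous one.
        if len(areas) < streak + 1:
            return False
        recent = areas[-(streak + 1):]
        return all(a < b for a, b in zip(recent, recent[1:]))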

My progress is slightly behind schedule, since I'm interfacing with Bhavya's CV code for the first time tomorrow; hopefully that goes well. To stay on track, I'll need our code fully integrated tomorrow, and then use Git source control to keep it integrated as we both continue working on our own sides. Ideally, I need the best performance out of the bounding-box-size algorithm by Monday and the cascade-classifier-based algorithm trained and lightly tested by Wednesday. By next Saturday, I'll need to have integrated these into as solid an algorithm as I can get for the interim demo.

Jae’s Status Report for 3/23/24

I was unable to get much work done this week due to illness. After recovering later in the week, I had a lot of work to catch up on in my other classes, so unfortunately I had to deprioritize capstone, at least for this week. What I was able to get done was finalizing the camera stand assembly by purchasing the last part needed to attach the cameras to the servos. I can extend the servo wires with jumper wires, so the camera stands should be good to place anywhere around the track. The auto-tracking code is complete, at least on the Arduino side. I still need to work on the Python code that converts the bounding-box coordinates into commands the Arduino understands.

I am behind schedule, as I was supposed to finish the auto-tracking code this week. That will be my deliverable for next week, along with starting integration with my teammates.

Team Status Report for 3/16/24

The most significant risks that could jeopardize the success of the project are the object detection algorithm's latency and the toy race car's speed. The object detection algorithm currently takes around 1 second to produce the initial bounding box of the car for the tracking algorithm to use as its input. This is a problem because we have 4 cameras that each cover a region of the track, and each needs to get a bounding box on the car from the object detection algorithm whenever the car first enters the region it is responsible for. We are managing this risk by identifying faster alternatives to the object detection algorithm we are currently using. The toy race car travels at about 1.5 m/s and can go no slower without starting and stopping, and this is a problem because we are not sure the cameras will be able to track the car going that fast, whether because the tracking algorithm can't keep up, the motor positional feedback control can't keep up, or the motor itself can't keep up. We are managing this risk by testing the maximum turning speed of our camera servos and whether it can follow a fast-moving object at various distances from the track. As a contingency plan, we will slow the car down using weights, and will test taping coins to the top of the car.

No changes have been made yet to the existing design of the system, but the object detection algorithm may change in order to provide a faster initial detection time and meet our system's latency requirements. This change costs extra time on object detection before other parts of the project, such as auto-tracking and auto-switching, can move forward; we will mitigate that cost by having all team members pitch in to help.

An updated schedule pushes our system integration back a week, as we are still working on Arduino interaction and interfacing with the CV code.

We have gotten our first steps working: object detection of the toy car, tracking of cars in video files, simultaneous control of the 4 servos, and switching between the 4 camera sources. Now we need to improve these parts to meet our project requirements and then put them together.

Thomas’s Status Report for 3/16/24

This week, I wrote code to test the cameras we bought. The code uses OpenCV in Python to capture frames from all 4 cameras simultaneously and displays the frames from one camera based on whether the last key pressed was 1, 2, 3, or 4. All 4 cameras were functional, although we may need to order a spare in case one of them breaks. I also worked on pseudocode for updating the servo position based on the bounding boxes produced by the tracking algorithm. The pseudocode takes the x midpoint of the race car's bounding box and compares it to the x midpoint of the frame to decide the servo positional adjustment needed.
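Roughly, the camera test loop looked like the sketch below (a reconstruction; the device indices and window name are assumptions):

    import cv2

    caps = [cv2.VideoCapture(i) for i in range(4)]  # device indices are machine-specific
    selected = 0

    while True:
        frames = [cap.read()[1] for cap in caps]  # grab a frame from every camera
        if frames[selected] is not None:
            cv2.imshow("feed", frames[selected])
        key = cv2.waitKey(1) & 0xFF
        if key in (ord("1"), ord("2"), ord("3"), ord("4")):
            selected = key - ord("1")  # keys 1-4 select cameras 0-3
        elif key == ord("q"):
            break

    for cap in caps:
        cap.release()
    cv2.destroyAllWindows()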

I am behind schedule when it comes to developing the feed selection/auto-switching algorithm. We discussed this in our team meeting on Wednesday, but I'm not entirely sure how to start because I haven't been able to find existing work to build from. To catch up, I'll ask course staff about the feasibility of some of my ideas and ask my teammates for help testing those ideas on our actual hardware. Next week I hope to have tested an idea on our actual hardware that produces a stream that at least somewhat matches our use-case requirements.

Jae’s Status Report for 3/16/24

This week, I got a good amount of work done. First, I wrapped up the Arduino code and can now control four motors given a string of the format “[motorID1]:[direction1]:[degrees1]&[motorID2]:[direction2]:[degrees2]&…”. I tested the serial communication against Python code simulating Bhav's end and was able to control all four motors simultaneously with these commands. I am unfortunately missing a few small screws to attach the cameras to the servo brackets, but I did attach the servo brackets to the servos. To finish building the camera stands, I will be looking for those screws, as well as some form of base to stabilize the motors.
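For illustration, the Python side simulating Bhav's end might build commands like this (a sketch; the serial port and newline terminator are assumptions, while the field layout matches the format above):

    import serial

    def build_command(moves):
        # moves: list of (motor_id, direction, degrees) tuples, joined
        # into the "[id]:[dir]:[deg]&..." format the Arduino parses.
        return "&".join(f"{m}:{d}:{deg}" for m, d, deg in moves)

    arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # port is machine-specific
    # Pan motors 1 and 3 left by 7 degrees and motor 2 right by 5.
    cmd = build_command([(1, "L", 7), (2, "R", 5), (3, "L", 7)])
    arduino.write((cmd + "\n").encode())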

I am mostly back on schedule. With the Arduino code finished, the only part I am behind on is building the camera stands, but that is in progress.

Next week, I hope to have the camera stands finished. I will also try to integrate this code with OpenCV's bounding-box output instead of my simulated Python code.

(sorry for the blurry code, I can’t seem to find a workaround.)


Thomas’s Status Report for 3/9/24

This week, I wrote the design requirements, the implementation plan and block diagram for the feed selection subsystem, the risk mitigation plans, the related work section, the glossary of acronyms, and the index terms for the design document.

My progress is currently on schedule as there was no scheduled work over the break.

Next week I hope to complete working code for live autonomous switching of the displayed camera feed, a subcomponent of the project subsystem I am working on.

Team Status Report for 3/9/24

As we finalized our design while working on the design document, the biggest risk to our project is the car's speed affecting our ability to track it. So far we have not done much on-track testing to see whether our tracking latency is too slow for close-up camera placements, but our contingency plan is simply to place the cameras farther from the track as needed, reducing the amount of rotation the camera must make to track the car.

We made a big pivot in changing our project to this car-tracking one, so in terms of design we weren't changing much, just brainstorming for the first time. Through working on the design document, we were able to identify challenges we may face, especially in our own parts (hardware, tracking, feed selection). Although our idea decision came very late, we are glad we switched, as it will be a busy but better experience for all of us.

Our most recently updated schedule is the same as the one submitted on the design document.

Part A was written by Jae Song

On the global scale, our Cameraman product will offer a better approach to live-streaming car racing. On the safety side of capturing footage, we hope auto-tracking cameras will be adopted globally so that no lives are put in danger. Car racing is a global sport, so our product will definitely have an impact. Beyond safety, we hope that our auto-generated stream will capture races in a more timely and accurate manner, enhancing the experience of the global audience.

Part B was written by Thomas Li

The multi-camera live-streaming system we are designing seeks to meet the cultural needs of today's car-sports enthusiasts, specifically F1 racing culture. Primarily, we do this by providing, through algorithmic feed selection, the viewing experience fans already consider optimal. There is an established culture among race casters, real cameramen, racers, and viewers that we will analyze to determine this optimal viewing experience.

Part C was written by Bhavya Jain

Given that our camera system closely mirrors what already exists (with the addition of automated movement and software for automated feed construction), there seem to be no glaring environmental factors we must consider. Moving the cameras will consume some power that was previously supplied by human physical effort, and running the overall system will consume some power. Motor racing is a very demanding sport in terms of material and power requirements, but our system does not add any significant pressure to these requirements.