Jae’s Status Report for 3/23/24

I was unable to get much work done this week due to illness. After I recovered later in the week, I had a lot of work to catch up on in my other classes, so unfortunately I had to deprioritize capstone for this week. What I was able to do was finalize the camera stand assembly by purchasing the last part needed to attach the cameras to the servos. I am able to extend the servo wires with jumper wires, so the camera stands should be good to place anywhere around the track. The autotracking code is complete on the Arduino side. I was supposed to work on the Python code that converts the bounding box coordinates into commands the Arduino understands; a sketch of what that conversion might look like is below.
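
Since that conversion is still to be written, here is a minimal sketch of one possible shape for it, based on the midpoint-comparison pseudocode from Thomas's 3/16 report and the Arduino command format from my 3/16 report. The function name, proportional gain, deadband, step limit, and "CW"/"CCW" direction tokens are all placeholders to be tuned against the real hardware:

```python
# Hypothetical sketch of the bounding-box-to-command conversion.
# Output matches the "[motorID]:[direction]:[degrees]" command format
# described in the 3/16 report; all numeric constants are guesses.

def bbox_to_command(motor_id, bbox, frame_width,
                    gain=0.05, deadband_px=20, max_step_deg=5):
    """Convert a tracker bounding box (x, y, w, h) into one motor command."""
    x, y, w, h = bbox
    bbox_mid = x + w / 2.0
    frame_mid = frame_width / 2.0
    error_px = bbox_mid - frame_mid

    # Ignore small errors so the servo doesn't jitter around center.
    if abs(error_px) < deadband_px:
        return None

    direction = "CW" if error_px > 0 else "CCW"  # assumed token names
    degrees = min(int(abs(error_px) * gain), max_step_deg)
    return f"{motor_id}:{direction}:{degrees}"

# Example: car sits 100 px right of center in a 640 px wide frame.
# bbox_to_command(1, (380, 200, 80, 40), 640) -> "1:CW:5"
```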

I am behind schedule, as I was supposed to finish the autotracking code by this week. That will be my deliverable for next week, along with the start of integration with my teammates.

Team Status Report for 3/16/24

The most significant risks that could jeopardize the success of the project are the object detection algorithm's latency and the toy race car's speed. The object detection algorithm currently takes around 1 second to produce the initial bounding box of the car for the tracking algorithm to use as its input. This is a problem because we have 4 cameras that each cover a region of the track, and each needs a bounding box from the object detection algorithm whenever the car first enters the region it is responsible for. We are managing this risk by identifying faster alternatives to the object detection algorithm we are currently using. The toy race car currently travels at about 1.5 m/s and can go no slower without starting and stopping. This is a problem because we are not sure the cameras will be able to track the car going that fast, whether because the tracking algorithm can't keep up, the motor positional feedback control can't keep up, or the motor itself can't keep up. We are managing this risk by testing the maximum turning speed of our camera servos and whether it can keep up with a fast-moving object at various distances from the track; a rough sanity check of the required pan rate is sketched below. As a contingency, we will slow the car down with added weight, and will test taping coins to the top of the car.
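
As a back-of-the-envelope check (the distances below are assumptions, not measurements): for a car passing a camera at speed v along a straight section, the required pan rate peaks at the point of closest approach at roughly v/d radians per second, where d is the camera-to-track distance:

```python
import math

# Rough sanity check of required pan rate; distances are assumed.
v = 1.5  # measured car speed, m/s
for d in (0.25, 0.5, 1.0):  # hypothetical camera-to-track distances, m
    omega_deg = math.degrees(v / d)  # peak pan rate at closest approach
    print(f"d = {d} m -> peak pan rate ~ {omega_deg:.0f} deg/s")
# d = 0.25 m -> ~344 deg/s; d = 0.5 m -> ~172 deg/s; d = 1.0 m -> ~86 deg/s
```

Hobby servos are commonly rated around 0.1 s per 60° unloaded (roughly 600°/s), so at these distances the motor's raw speed is unlikely to be the limit; the tracking and control loop latency is the more probable bottleneck.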

No changes have been made yet to the existing design of the system, but the object detection algorithm may change to provide a faster initial detection time and meet our system's latency requirements. This change costs some extra time spent on object detection before other parts of the project, such as auto-tracking and auto-switching, can move forward; we will mitigate this cost by having all team members pitch in to help.

An updated schedule pushes our system integration back a week, as we are still working on the Arduino interaction and the CV interfacing.

We have gotten our first steps working: object detection of the toy car, tracking of cars in video files, simultaneous control of the 4 servos, and switching between the 4 camera sources. Now we need to improve these parts to meet our project requirements and then put them together.

Thomas’s Status Report for 3/16/24

This week, I wrote code to test the cameras we bought. The code used OpenCV in Python to capture frames from all 4 cameras simultaneously and displayed the frames from one of the cameras based on whether the last key pressed was 1, 2, 3, or 4 (a sketch of this test follows). All 4 cameras were functional, although we may need to order a spare in case one of them breaks. I also worked on pseudocode for updating the servo position based on the bounding boxes produced by the tracking algorithm. The pseudocode gets the x midpoint of the race car's bounding box and compares it to the x midpoint of the frame to decide the servo positional adjustment needed.
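
A minimal sketch of that camera test, assuming the four cameras enumerate as device indices 0 through 3 (indices may differ per machine):

```python
import cv2

# Capture from all four cameras; display whichever one matches the
# last number key pressed (1-4). Press q to quit.
caps = [cv2.VideoCapture(i) for i in range(4)]
selected = 0

while True:
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        frames.append(frame if ok else None)

    if frames[selected] is not None:
        cv2.imshow("feed", frames[selected])

    key = cv2.waitKey(1) & 0xFF
    if key in (ord("1"), ord("2"), ord("3"), ord("4")):
        selected = key - ord("1")  # map key '1'-'4' to index 0-3
    elif key == ord("q"):
        break

for cap in caps:
    cap.release()
cv2.destroyAllWindows()
```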

I am behind schedule when it comes to the feed selection/auto-switching algorithm development. This was discussed in our team meeting on Wednesday, but I'm not entirely sure how to start because I haven't been able to find existing work to build from. To catch up, I'll have to ask course staff about the feasibility of some of my ideas and ask my teammates for help testing them on our actual hardware. Next week I hope to have tested an idea on our hardware that produces a stream that at least somewhat matches our use case requirements.

Bhavya’s Status Report for 3/16/24

I finished creating the detection algorithm. Instead of using YOLOv4, I used an R-CNN. R-CNN typically provides better accuracy by employing region-based convolutional neural networks, which allow for more precise localization of objects in images, albeit at the cost of increased computational complexity during inference. The bounding boxes I was able to create were highly accurate but took a long time to produce. I ran tests for the detection using static images of the slot car from various angles and distances, then integrated the pre-processing, detection, and tracking and ran tests on a video of the slot car. Currently, the detection algorithm might be too slow, and the preprocessing needs to be tuned after testing which configuration works best for latency. I also have actual footage of the toy car on the track that I will be testing my algorithm on now.
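
The report doesn't pin down the exact R-CNN variant, so as an illustration only, here is a minimal detection sketch using torchvision's pretrained Faster R-CNN as a stand-in. The image path and score threshold are placeholders, and a COCO-pretrained model would likely need fine-tuning on slot-car images to be reliable for this use case:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Stand-in: torchvision's pretrained Faster R-CNN as an example R-CNN.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("slot_car.jpg").convert("RGB")  # hypothetical test image
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# Keep the highest-confidence box above a threshold as the initial
# bounding box handed off to the tracker.
keep = predictions["scores"] > 0.8  # placeholder threshold
boxes = predictions["boxes"][keep]
if len(boxes) > 0:
    x1, y1, x2, y2 = boxes[0].tolist()
    print(f"initial bbox: ({x1:.0f}, {y1:.0f}) to ({x2:.0f}, {y2:.0f})")
```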

For next week, I will be integrating the code with the live stream offered by our cameras and relaying panning instructions to the motors. Further testing will be required to determine which tracker/detector combination is best for our use case.

Given the amount of fine-tuning our system will require for the live stream to be watchable, I think we are slightly behind schedule. Sufficient testing in the following week should help put us back on track.

Jae’s Status Report for 3/16/24

This week, I got a good amount of work done. First, I wrapped up the Arduino code and can now control four motors given a string of the format “[motorID1]:[direction1]:[degrees1]&[motorID2]:[direction2]:[degrees2]&…”. I tested the serial communication with Python code that simulated Bhav’s end (sketched below) and was able to simultaneously control four motors with these commands. I am unfortunately missing a few small screws to attach the cameras to the servo brackets, but I did attach the servo brackets to the servos. To finish building the camera stands, I will be looking for these screws, as well as some form of base to stabilize the motors.
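
A minimal sketch of the PC side of that test, assuming pyserial; the port name, baud rate, "CW"/"CCW" direction tokens, and newline terminator are assumptions standing in for the real protocol details:

```python
import serial  # pyserial

# Send multi-motor commands in the "[motorID]:[direction]:[degrees]&..."
# format over USB serial. Port and baud rate are placeholders.
ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def send_moves(moves):
    """moves: list of (motor_id, direction, degrees) tuples."""
    command = "&".join(f"{m}:{d}:{deg}" for m, d, deg in moves)
    ser.write((command + "\n").encode())  # newline as assumed terminator

# Example: pan motors 1 and 3 in opposite directions simultaneously.
send_moves([(1, "CW", 5), (3, "CCW", 5)])
```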

I am mostly back on schedule. With the Arduino code finished, the only part I am behind on is building the camera stand, but that is in progress.

Next week, I hope to have the camera stands finished. I will also try to integrate this code with OpenCV’s bounding box output instead of my simulated Python code.


Thomas’s Status Report for 3/9/24

This week on the project, I wrote the design requirements, the implementation plan for the feed selection subsystem, the block diagram for the feed selection subsystem, the risk mitigation plans, the related work, the glossary of acronyms, and the index terms for the design document.

My progress is currently on schedule as there was no scheduled work over the break.

Next week I hope to complete working code for live autonomous switching of the displayed camera feed, as a subcomponent of the project subsystem I am working on.

Bhavya’s Status Report for 3/9/24

After switching to the F1 track camera idea following the design presentation, the team had to scramble to establish and work on this new idea.

I started the week off by writing the object tracking algorithm using GOTURN. I also tested several image preprocessing strategies that could reduce the latency of the tracking system, and I was able to get a preliminary tracking algorithm working (a sketch of the approach follows).
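
A minimal sketch of GOTURN tracking as exposed by OpenCV, assuming an OpenCV build with GOTURN support and the goturn.prototxt / goturn.caffemodel files available in the working directory; the video path and initial box are placeholders for the detector's output:

```python
import cv2

# Track an object through a video file with OpenCV's GOTURN tracker.
tracker = cv2.TrackerGOTURN_create()

cap = cv2.VideoCapture("slot_car.mp4")  # placeholder test video
ok, frame = cap.read()
initial_bbox = (300, 200, 80, 40)  # (x, y, w, h), normally from the detector
tracker.init(frame, initial_bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```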

The next major task was the design document submission. I was in charge of the trade studies on computer vision strategies, implementation details of detection and tracking systems, and the outline of the testing and verification in accordance with the use case requirements. We split up the work and conducted peer reviews before finalizing the document.

Over the break, I have been working on the detection algorithm, integrating it with the tracking algorithm, and testing it on the actual slot racing car that we plan to use for the demonstration. I hope to integrate it with the camera to have a one-camera system ready by Wednesday and start arranging the multi-camera system by the end of this week.

Team Status Report for 3/9/24

As we finalized our design through working on the design doc, the biggest risk of our project is the speed of the car affecting our ability to track it. As of now, we have not done much on-track testing to see whether our tracking latency is too slow for close-up camera locations, but our contingency plan is simply to place the cameras further from the track as needed, reducing the amount of rotation the camera has to make to track the car.

We made a big switch to change our project to this car tracking one, so in terms of design, we weren’t changing an existing design so much as brainstorming one for the first time. Through working on the design doc, we were able to identify challenges we may face, especially in our own parts (hardware, tracking, feed selection). Although we decided on this idea very late, we are glad we switched, as it will be a busy but better experience for all of us.

Our most recently updated schedule is the same as the one submitted on the design document.

Part A was written by Jae Song

On the global scale, our Cameraman product will offer a better approach to live streaming car racing. On the safety side of capturing footage, we hope auto-tracking cameras will be adopted globally so that no lives are put in danger. Car racing is a global sport, so our product will have a broad impact. Beyond safety, we hope that our auto-generated stream will capture races in a more timely and accurate manner, enhancing the experience of the global audience.

Part B was written by Thomas Li

The multi-camera live-streaming system we are designing seeks to meet the cultural needs of today’s car sports enthusiasts, specifically F1 racing culture. Primarily, we do this by providing, through algorithmic feed selection, the viewing experience fans already regard as optimal. There is already a culture established by race casters, real cameramen, racers, and viewers that we will analyze to determine this optimal viewing experience.

Part C was written by Bhavya Jain

Given that our system of cameras closely mirrors what already exists (with the addition of automated movement and software for automated feed construction), there seem to be no glaring environmental factors we must consider. Moving the cameras will consume some power that was previously supplied by human operators, and running the overall system will consume some power as well. Motor racing is a very demanding sport in terms of material and power requirements, but our system does not add any significant pressure to those requirements.

Jae’s Status Report for 3/9/24

Most of last week was spent working on the design document. First, since we switched topics a week before the design doc was due, I came up with new use-case requirements and met with our TA to finalize them. She also helped me draft some good design requirements that I hadn’t thought of before. While working on the design doc, we made good progress in finalizing our design for the project. I worked on the abstract, use-case requirements, architecture, design trade studies (hardware), system implementation (hardware), summary, team member responsibilities, and reach goals.

After the design document was submitted, I spent some of spring break on the Arduino code, specifically the communication between OpenCV’s object tracking module and the motor control through the Arduino. I found a way to use the Arduino’s serial interface to send and receive data from the PC, and I am able to control the movement of the motors with a simple Python script. Although the interface seems to be working, I have not yet tested it with Bhav’s object tracking module, so that is my next task at hand.

Schedule-wise, my progress is slightly off. I was supposed to work on camera distance and location determination, but instead I worked on the code, which is next week’s task, so I’m not too far behind.

Next week, I hope to have the code finished, determine camera stand locations, and start assembling the camera stands.

Thomas’s Status Report for 2/24/24

This week, I presented on behalf of the group for the design review. However, we ended up pivoting away from the pool referee system after the presentation when we learned it would be too hard to do with 260 fps cameras. Following this development, I started working on the design requirements for our new project, which focuses on tracking a racecar with cameras and auto-generating a livestream from the multiple camera feeds. I wrote around 4 design requirements for each use-case requirement of the new racecar camera livestream project. These design requirements together specify the system at an abstract level and help us choose a solution approach with the appropriate constraints in mind. One thing I focused on was requirements for the camera motors, since this will help us decide between servo motors and stepper motors. The motors need to turn smoothly for a good livestream experience, but they also need to start quickly and stop accurately to account for sudden acceleration or deceleration from the racecar.

We are behind schedule since we’ve pivoted to the racecar camera livestream project idea. To catch up, we’ll have to work in parallel on the design document, including the design requirements, solution approach, and testing, while also ordering parts and doing trade studies to begin actually implementing the system. In the next week, I hope to have trade studies for at least the four major components of our system, leading up to a finalized design document.