Jae’s Status Report for 4/27/24

This week, I helped with testing and integration. Earlier in the week, we prepared a demo video to show in class for the final presentation. I helped set up the system and run tests to capture a good stream with accurate tracking and feed switching. Later in the week, I helped Thomas debug his feed selection algorithm. As he stated in his report, there were multiple issues we had to resolve to get the system functioning. Although he wrote the majority of the code, I helped in whatever way I could to test and analyze the results.

Currently, I am on schedule, as I am pretty much done with the tracking algorithm. This coming week, I hope to refine tracking and implement motor panning back to the default angle once the car leaves a camera's frame. We hope to wrap up system implementation by around Tuesday, so we can spend the rest of the time on testing and working on the video, poster, demo, and report. These are the deliverables I hope to achieve.

Team Status Report for 4/20/24

The most significant risk that we need to manage is how late our feed switches. Although we have an algorithm that tries to switch early, our detection picks up the car late, which means our switch always happens after the car passes, not before. We are trying to manage this risk by implementing a new algorithm that uses a history model of previous camera switches. That way, the system remembers which camera to switch to next and can switch even before that camera detects the car.
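Conceptually, the history model could look something like the sketch below. This is a rough illustration under my own assumptions (the class and method names are made up), not our actual implementation:

    # Sketch of a history model for feed switching (illustrative, not our code).
    from collections import defaultdict, Counter

    class SwitchHistory:
        """Remembers which camera usually follows each camera, so the
        system can pre-switch before the next camera detects the car."""

        def __init__(self):
            # current camera -> Counter of cameras we switched to next
            self.transitions = defaultdict(Counter)
            self.current = None

        def record_switch(self, new_cam):
            if self.current is not None:
                self.transitions[self.current][new_cam] += 1
            self.current = new_cam

        def predicted_next(self):
            # Most frequent successor of the current camera, if we have history.
            if self.current in self.transitions and self.transitions[self.current]:
                return self.transitions[self.current].most_common(1)[0][0]
            return None

Once enough laps have been recorded, the selector can cut to predicted_next() slightly before detection confirms the car.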

This is the main change in our design. We are currently in the process of getting it to work and seeing how stable it will be. The only costs right now are time and possibly buggier feed switching, but if it works, we believe it will make our feed switching much better.

We are on track with our schedule. According to the plan we should be purely testing right now, but in practice we are testing while implementing this new algorithm.

Jae’s Status Report for 4/20/24

I accidentally wrote status report 9 last week, thinking it was due then. So I will update the project's progress here, but please refer to that report for the part asking about new tools/knowledge I needed.

In terms of our project progress, we were able to scale the system up to four cameras. I ensured that motor tracking worked for all four cameras. Afterwards, I provided support for feed selection testing, as we are trying out a new algorithm.

I am currently on schedule, as we are on the integration/testing period of the project.

Next week, I hope to help my team with testing. Right now we are just working on the feed selection algorithm, so hopefully I can provide some helpful comments and help test.

Team Status Report for 4/13/24

The most significant risk that could jeopardize our project is still color detection failures. We have not been able to resolve this yet. To mitigate it, we have been working in good lighting and selecting our object of interest very carefully to get the correct colors. Another risk we need to manage is scaling up our system. As we add more cameras, feed selection becomes more difficult: with more overlap between camera views, there are more candidate feeds to choose from at any moment. To mitigate this, we are currently incorporating the rate of change of bounding box sizes into our algorithm, as sketched below. We also want to look into estimating the car's direction of travel.
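As a rough illustration of the rate-of-change idea (not our actual code; the names and per-detection difference are assumptions), the signal could be computed like this:

    # Sketch: score each feed by how fast its bounding box is growing.
    # A growing box suggests the car is approaching that camera.
    def box_area(box):
        x, y, w, h = box  # (x, y, width, height) in pixels
        return w * h

    def growth_rate(prev_box, curr_box):
        """Positive when the box grew between consecutive detections."""
        if prev_box is None or curr_box is None:
            return 0.0
        return box_area(curr_box) - box_area(prev_box)

Feed selection could then prefer the camera with the largest positive growth rate, i.e., the one the car is heading toward.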

We finally settled on the track configuration we will be demoing. We wanted to incorporate a loop and an elevation change. We think this track is simple, yet very telling of the capability of our system. We plan to use four cameras and place them as shown in the picture. Other than this, our design is pretty much the same, just scaled up.

Our schedule has not changed.

Jae’s Status Report for 4/13/24

This week, I spent some time figuring out what track configuration would be best suited for our final demo. It took some trial and error, but we wanted to include a loop of some sort, which means there has to be track elevation as well. We decided on the one I included a picture of in the team report. Additionally, I spent most of the week scaling the object tracking code to take in 3 camera inputs and control 3 motors. The final system must have 4 cameras and motors, so I need to scale it up once more. I was also able to replace the multi-connected jumper wires with single long ones to clean things up. This took a good amount of time, since the wires have to run the length of the track for each motor. However, I haven't changed the content of the code yet to make the tracking smoother, and this is something I plan to do this weekend.

My progress is somewhat on schedule. I think next week will look a bit busy, but I think it is doable.

I wish to scale the system up to 4 cameras/motors and basically have the final demo running, even if it is a bit buggy.

I haven’t really used too many new tools. Most of the work I had to do was simple in the sense that I could do it with my own knowledge: Arduino code, Python, the Servo library, soldering, and so on. Actually, one thing I found helpful that I’m embarrassed to say I didn’t fully know before was git. I finally understood how branches work and how to merge correctly. This took up a good amount of time in our integration, but I’m glad to have learned how to use git properly.

Jae’s Status Report for 4/6/24

Given that this week was the interim demo, the days leading up to it were pretty packed with work. A lot of this was tweaking numbers, setting up the environment, and debugging, so there is not much new to show in terms of progress. What I personally worked on was preventing the tracking algorithm from being thrown off by buggy detection. Our current color detection is not the best, especially in bad lighting conditions. An ideal detection would output a bounding box every couple of frames of the feed, but our current detection outputs maybe 3-5 accurate boxes per lap. This means the motors have to be robust to wrong bounding boxes as well as infrequent ones. To suppress the effect of a wrong bounding box, I set the code so that the motors are only controlled when bounding boxes fall within certain dimensions and locations. Because bounding boxes were not as frequent as desired, I also set the motor to pan multiple degrees whenever it detects a bounding box outside the center of the frame. In an ideal world, this step would be just a few degrees, because frequent boxes would keep the angle continuously updated.
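A minimal sketch of that sanity filter might look like the following (the thresholds here are placeholders, not our tuned values):

    # Sketch: ignore bounding boxes with implausible dimensions or positions.
    MIN_W, MAX_W = 20, 400   # plausible box widths in pixels (assumed)
    MIN_H, MAX_H = 20, 300   # plausible box heights in pixels (assumed)
    FRAME_W = 640

    def should_actuate(box):
        """Only drive the motors on boxes that could really be the car."""
        x, y, w, h = box
        if not (MIN_W <= w <= MAX_W and MIN_H <= h <= MAX_H):
            return False
        if x < 0 or x + w > FRAME_W:
            return False
        return True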

My progress is on schedule. I have finished the tracking task for the most part and have started debugging the integration and fine-tuning the controls.

Next week, I hope to start working on tracking on the final demo track configuration.

Team Status Report for 3/30/24

Right now, the most significant risk that could jeopardize the success of the project is our detection algorithm, currently based on color detection, being disrupted by the environment. Bhavya is working hard on minimizing environmental interference after switching over to color detection from object detection (which took too long), but the algorithm is definitely not perfect yet. This is a risk because both the motor control and feed selection modules depend on the accuracy of the bounding boxes output by detection. If detection is shaky, the motors could pan inaccurately and feed selection could pick the wrong camera. Bhavya is working on mitigating these risks, as described in his status report. The contingency plan is to run the system in good lighting and at a location/angle where the car never overlaps with another object or background of the same color.

One of the changes we made was using a manual voltage supply to power the race track. The stock 9V supply drove the cars too fast for testing. By lowering the supply to approximately 4V, the car moves at a slower speed, making the system much easier to test. We needed this change for now, but we hope to bump the speed back up as fast as we can accurately track. This change costs our camera system nothing, since it is confined to the track itself. Another change was using a simple moving average for feed selection: instead of switching feeds based on raw bounding box sizes, we switch based on their moving average with a window of 3. This has worked well when the cameras are placed in opposite corners pointing along the car's direction of travel.
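For reference, a width-3 simple moving average is tiny to implement; something like this sketch (illustrative names, not our exact code):

    # Sketch: width-3 moving average used to smooth the feed-selection signal.
    from collections import deque

    class MovingAverage:
        def __init__(self, width=3):
            self.window = deque(maxlen=width)

        def update(self, value):
            self.window.append(value)
            return sum(self.window) / len(self.window)

We keep one average per camera and switch to the feed whose smoothed bounding-box size is largest.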

Now that we are in the integration period, our schedule is up to date.

We didn’t take any convincing photos, but we are looking forward to showing our progress during the interim demo next week :)

Jae’s Status Report for 3/30/24

This week, I mainly worked on interfacing with the object detection module. Since my motor control module runs on the Arduino side, I needed a way to take the bounding box of the detected object from the detection module and use it to tell the motors which direction and how many degrees to pan. For now, I am using a simple algorithm: I place an imaginary box in the middle of the screen, and if the detected object's center point is to the left or right of that box, the function sends serial data to the Arduino telling it to pan left or right by x degrees. The tricky part is smoothing the panning. The two factors that contribute most to smooth panning are the imaginary box's width and the number of degrees the motor turns per command. Currently, I am at a width of 480 out of the 640-pixel frame and 7 degrees of panning, since we expect the car to be moving while the camera captures it. I will do more testing to finalize these values.
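A minimal sketch of that dead-zone logic (the serial port, baud rate, and direction tokens are assumptions; the actual command format is described in my 3/16 report):

    # Sketch: send a pan command only when the box center leaves the dead zone.
    import serial  # pyserial

    FRAME_W = 640
    DEAD_ZONE_W = 480   # imaginary box width, out of the 640 px frame
    STEP_DEG = 7        # degrees per pan command

    ser = serial.Serial("/dev/ttyACM0", 9600)  # port/baud are assumptions

    def pan_if_needed(motor_id, box_center_x):
        left_edge = (FRAME_W - DEAD_ZONE_W) / 2
        right_edge = left_edge + DEAD_ZONE_W
        if box_center_x < left_edge:
            ser.write(f"{motor_id}:L:{STEP_DEG}".encode())
        elif box_center_x > right_edge:
            ser.write(f"{motor_id}:R:{STEP_DEG}".encode())
        # inside the dead zone: no command, the motor holds its angle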

Additionally, I worked on finding a way to slow down the car, as it was moving too fast for testing purposes. On Professor Kim's advice, we manually controlled the track's voltage supply instead of plugging in directly through the 9V adapter. I removed the batteries and adapter and took apart the powered section of the track. I soldered two wires to the ground and power ends, clipped them onto a DC power supply, and it worked perfectly. We are testing at 4V, which is significantly slower than before.

Camera assembly was also wrapped up this week, as the screws finally came in. Although it functions, I will try to make it more stable in the next few weeks when I get the chance.

My progress is now back on schedule.

Next week, I hope to have an integrated, demo-able system. Additionally, I will keep fine-tuning the motor controls and stabilizing the camera stands.

Jae’s Status Report for 3/23/24

I was unable to get much work done this week due to illness. After I recovered later in the week, I had a lot of work to catch up on in my other classes, so unfortunately I had to compromise on capstone, at least for this week. What I did get done was finalizing the camera stand assembly by purchasing the last part needed to attach the cameras to the servos. I can extend the servo wires with jumper wires, so the camera stands should be good to place anywhere around the track. The autotracking code is complete, at least on the Arduino side. I still need to work on the Python code that converts bounding box coordinates into commands the Arduino understands.

I am behind schedule, as I was supposed to finish the autotracking code this week. That will be my deliverable for next week, along with starting integration with my teammates.

Jae’s Status Report for 3/16/24

This week, I got a good amount of work done. First, I wrapped up the Arduino code and can now control four motors given a string of the format “[motorID1]:[direction1]:[degrees1]&[motorID2]:[direction2]:[degrees2]&…”. I tested serial communication with Python code that simulated Bhav's end and was able to control four motors simultaneously with these commands. I am unfortunately missing a few small screws to attach the cameras to the servo brackets, but I did attach the brackets to the servos. To finish building the camera stands, I will look for these screws, as well as some form of base to stabilize the motors.
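For reference, the simulated sender can be as simple as the sketch below (the direction tokens, serial settings, and newline terminator are assumptions on top of the format above):

    # Sketch: build a multi-motor command string and send it over serial.
    import serial  # pyserial

    def build_command(moves):
        """moves: list of (motor_id, direction, degrees) tuples."""
        return "&".join(f"{m}:{d}:{deg}" for m, d, deg in moves)

    ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
    cmd = build_command([(0, "L", 5), (1, "R", 10), (2, "L", 3), (3, "R", 7)])
    ser.write((cmd + "\n").encode())  # e.g. "0:L:5&1:R:10&2:L:3&3:R:7"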

I am mostly back on schedule. With the Arduino code finished, the only part I am behind on is building the camera stands, but that is in progress.

Next week, I hope to have the camera stands finished. I will also try to integrate this code with OpenCV's bounding box output instead of my simulated Python code.

(sorry for the blurry code, I can’t seem to find a workaround.)