Jae’s Status Report for 4/27/24

This week, I helped with testing and integration. Earlier in the week, we prepared a demo video to show in class for the final presentation. I helped set up the system and run tests to capture a good stream with accurate tracking and feed switching. Later in the week, I helped Thomas debug his feed selection algorithm. As he stated in his report, there were multiple issues we had to resolve to get the system functioning. Although he wrote the majority of the code, I helped in whatever way I could to test and analyze the results.

Currently, I am on schedule, as I am mostly done with the tracking algorithm. This coming week, I hope to refine tracking and even implement motor panning back to the default angle once the car leaves a camera's frame. We hope to wrap up the system implementation by around Tuesday, so we can spend the rest of the time on testing and on the video, poster, demo, and report. These are the deliverables I hope to achieve.
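The return-to-default behavior isn't implemented yet; below is a minimal sketch of how it might work, assuming a hypothetical send_pan_command() helper that wraps our serial protocol, a single motor, and placeholder values for the default angle and timeout:

```python
import time

# Sketch: pan back to the default angle once no bounding box has been
# seen for a while (i.e., the car has left this camera's frame).
DEFAULT_ANGLE = 90          # assumed resting angle, in degrees
NO_DETECTION_TIMEOUT = 2.0  # placeholder: seconds without a detection

last_detection_time = time.time()

def on_frame(bbox, send_pan_command):
    """Call once per frame; bbox is None when nothing was detected."""
    global last_detection_time
    if bbox is not None:
        last_detection_time = time.time()
        # ...normal tracking logic would issue pan commands here...
    elif time.time() - last_detection_time > NO_DETECTION_TIMEOUT:
        # Car is presumed gone; return the camera to its resting angle.
        send_pan_command(DEFAULT_ANGLE)
```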

Jae’s Status Report for 4/20/24

I accidentally wrote status report 9 last week, thinking it was due then. I will update the project's progress here, but please refer to that report for the part asking about new tools/knowledge I needed.

In terms of our project progress, we were able to scale the system up to four cameras. I ensured that motor tracking worked for all four cameras. Afterwards, I provided support for feed selection testing, as we are trying out a new algorithm.

I am currently on schedule, as we are in the integration/testing period of the project.

Next week, I hope to help my team with testing. Right now we are working on just the feed selection algorithm, so hopefully I can provide some helpful comments and help test.

Jae’s Status Report for 4/13/24

This week, I spent some time figuring out what track configuration would be best suited for our final demo. It took some trial and error, but we wanted to include a loop of some sort, which means the track needs some elevation as well. We decided on the one pictured in the team report. Additionally, I spent most of the week scaling the object tracking code to take in 3 camera inputs and control 3 motors. The final system must have 4 cameras and motors, so I need to scale it up one more time. I was also able to replace the multi-connected jumper wires with single long ones to clean things up. This took a good amount of time, since the wires have to run the length of the track for each motor. However, I haven't yet changed the code itself to make the tracking smoother; that is something I plan to do this weekend.
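As a rough illustration of the scaling, here is a sketch of one loop polling three OpenCV captures and building one command per motor. The detect() and command_for_motor() helpers are hypothetical stand-ins for our actual detection and control modules, and the camera indices are assumed device numbers:

```python
import cv2

# Sketch: poll three cameras in one loop and batch one command per
# motor into a single serial write, using our "&"-joined format.
caps = [cv2.VideoCapture(i) for i in range(3)]

def track_all(detect, command_for_motor, send):
    while True:
        parts = []
        for motor_id, cap in enumerate(caps):
            ok, frame = cap.read()
            if not ok:
                continue                       # skip a feed with no frame
            bbox = detect(frame)               # detection module's output
            cmd = command_for_motor(motor_id, bbox)  # e.g. "1:L:7" or None
            if cmd:
                parts.append(cmd)
        if parts:
            send("&".join(parts))              # one write per cycle
```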

My progress is roughly on schedule. Next week looks a bit busy, but I think it is doable.

Next week, I wish to scale the system up to 4 cameras/motors and essentially have the final demo running, albeit a bit buggy.

I haven't really used many new tools. Most of the work was simple enough that I could do it with my existing knowledge: Arduino code, Python, the Servo library, soldering, and so on. Actually, one thing I found helpful that I'm embarrassed to say I didn't fully know before is git. I finally came to understand how branches work and how to merge correctly. This took up a good amount of time in our integration, but I'm glad to have learned how to use git.

Jae’s Status Report for 4/6/24

Given that this week was the interim demo, the days leading up to it were pretty packed with work. A lot of it went into tweaking numbers, setting up the environment, and debugging, so there is not much new to show in terms of visible progress. What I personally worked on was preventing the tracking algorithm from being thrown off by buggy detections. Our current color detection is not the best, especially in bad lighting conditions. An ideal detector would output a bounding box every couple of frames of the feed, but ours outputs maybe 3-5 accurate boxes per lap. This means the motors have to be stable through wrong bounding boxes as well as infrequent ones. To suppress the effect of a wrong bounding box, I changed the code so the motors are only driven when a bounding box falls within certain dimensions and locations. Because bounding boxes were not as frequent as desired, I also set the motor to pan by multiple degrees when a box is detected outside the center of the frame. Ideally this step would be only a few degrees, because frequent boxes would keep the angle continuously updated.
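A sketch of the two guards described above, with placeholder thresholds rather than our tuned values:

```python
# Sketch: reject implausible boxes, then decide a pan direction only
# for boxes that sit outside the center region of the frame.
FRAME_W, FRAME_H = 640, 480   # assumed capture resolution
MIN_W, MAX_W = 20, 200        # placeholder plausible box widths (px)

def plausible(box):
    """Filter out boxes whose size or position can't be the car."""
    x, y, w, h = box
    return MIN_W <= w <= MAX_W and 0 <= y < FRAME_H

def pan_direction(box):
    """Return 'L', 'R', or None for hold, given a detected box."""
    if not plausible(box):
        return None                    # wrong box: leave motors alone
    x, y, w, h = box
    center_x = x + w / 2
    if center_x < FRAME_W * 0.25:      # placeholder center-region edges
        return "L"
    if center_x > FRAME_W * 0.75:
        return "R"
    return None                        # near center: no correction
```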

My progress is on schedule. I have finished the tracking task for the most part and have started debugging the integration and fine-tuning the controls.

Next week, I hope to start working on tracking with the final demo track configuration.

Jae’s Status Report for 3/30/24

This week, I mainly worked on interfacing with the object detection module. Since my motor control module runs on the Arduino side, I needed a way to take the bounding box of the detected object from the detection module and use it to tell the motors to pan left or right by some number of degrees. For now, I am using a simple algorithm: I place an imaginary box in the middle of the screen, and if the detected object's center point is to the left or right of that box, the function sends serial data to the Arduino telling it to pan left or right by x degrees. The tricky part is smoothing the panning. The two factors that contribute most to smooth panning are the imaginary box's width and the number of degrees the motor turns per command. Currently, I am at a box width of 480 out of the frame's 640 pixels and 7 degrees of panning per command, since we expect the car to be moving while the camera captures it. I will do more testing to finalize these values.
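A minimal sketch of this dead-zone logic, assuming pyserial on the PC side; the port, baud rate, and 'L'/'R' direction tokens are assumptions, and the command here is a simplified single-motor version of our format:

```python
import serial

FRAME_W = 640        # frame width in pixels
DEAD_ZONE_W = 480    # imaginary center box width (current value)
PAN_DEGREES = 7      # degrees to pan per command (current value)

ser = serial.Serial("/dev/ttyACM0", 9600)  # port/baud are assumptions

def pan_command(center_x, motor_id=1):
    """Map a bounding box center to a pan command, or None to hold."""
    left_edge = (FRAME_W - DEAD_ZONE_W) / 2   # 80 px
    right_edge = FRAME_W - left_edge          # 560 px
    if center_x < left_edge:
        return f"{motor_id}:L:{PAN_DEGREES}"
    if center_x > right_edge:
        return f"{motor_id}:R:{PAN_DEGREES}"
    return None      # center point is inside the imaginary box

cmd = pan_command(50)            # a box centered at x=50 pans left
if cmd:
    ser.write((cmd + "\n").encode())
```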

Additionally, I worked on finding a way to slow down the car, as it was moving too fast for testing purposes. On Professor Kim's advice, we attempted to manually control the track's supply voltage instead of plugging in directly through the 9V adapter. I removed the batteries and adapter, took apart the powered section of the track, soldered two wires to the ground and power ends, and clipped them onto a DC power supply, and it worked perfectly. We are testing at 4V, which makes the car significantly slower than before.

Camera assembly has also been wrapped up this week, as the screws finally came in. Although it functions, I will try to make it more stable in the next few weeks when I get the chance.

My progress is now back on schedule.

Next week, I hope to have an integrated, demo-able system. Additionally, I wish to keep fine-tuning the motor controls and to stabilize the camera stands.

Jae’s Status Report for 3/23/24

I was unable to get much work done this week due to illness. After I recovered later in the week, I had quite a lot of work to catch up on in my other classes, so unfortunately I had to deprioritize capstone, at least for this week. What I did get done was finalizing the camera stand assembly by purchasing the last part needed to attach the cameras to the servos. I am able to extend the servo wires with some jumper wires, so the camera stands should be placeable anywhere around the track. The autotracking code is complete, at least on the Arduino side. I was supposed to work on the Python code that converts the bounding box coordinates into commands the Arduino understands.

I am behind schedule, as I was supposed to finish the autotracking code this week. That will be my deliverable for next week, along with the start of integration with my teammates.

Jae’s Status Report for 3/16/24

This week, I got a good amount of work done. First, I wrapped up the Arduino code and can now control four motors given a string of the format “[motorID1]:[direction1]:[degrees1]&[motorID2]:[direction2]:[degrees2]&…”. I tested the serial communication with Python code that simulated Bhav's end and was able to control four motors simultaneously with these commands. I am unfortunately missing a few small screws to attach the cameras to the servo brackets, but I did attach the servo brackets to the servos. To finish building the camera stands, I will be looking for these screws, as well as some form of base to stabilize the motors.
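For reference, a sketch of how the simulated Python sender might assemble a command in this format; the direction tokens, port, and baud rate shown are assumptions:

```python
import serial

def build_command(moves):
    """moves: list of (motor_id, direction, degrees) tuples.
    Produces e.g. "1:L:7&2:R:5" in the format the Arduino parses."""
    return "&".join(f"{m}:{d}:{deg}" for m, d, deg in moves)

# Simulated sender standing in for the detection side.
ser = serial.Serial("COM3", 9600)   # port/baud are assumptions
cmd = build_command([(1, "L", 7), (2, "R", 5), (3, "L", 3), (4, "R", 7)])
ser.write((cmd + "\n").encode())
```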

I am mostly back on schedule. With the Arduino code finished, the only part I am behind on is building the camera stand, but that is in progress.

Next week, I hope to have the camera stands finished. I will also try to integrate this code with OpenCV's bounding box output instead of my simulated Python code.

(sorry for the blurry code, I can’t seem to find a workaround.)

 

Jae’s Status Report for 3/9/24

Most of last week was spent working on the design document. First, since we switched topics a week before the design doc was due, I came up with new use-case requirements and met with our TA to finalize them. She also helped me draft some good design requirements that I hadn't thought of before. While working on the design doc, we made good progress in finalizing our design for the project. I worked on the abstract, use-case requirements, architecture, design trade studies (hardware), system implementation (hardware), summary, team member responsibilities, and reach goals.

After the design document was submitted, I spent some of spring break on Arduino code, specifically the communication between the OpenCV object tracking module and the motor control through the Arduino. I found a way to use the Arduino's serial interface to send and receive data from the PC, and I am able to control the movement of the motors with a simple Python script. Although the interface seems to be working, I have not yet tested it with Bhav's object tracking module, so that is my next task at hand.
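A minimal sketch of this PC-side link using pyserial; the port, baud rate, and the test command itself are placeholders:

```python
import time

import serial

# Open the Arduino's serial port (values here are assumptions).
ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
time.sleep(2)              # the Arduino resets when the port opens

ser.write(b"1:L:7\n")      # placeholder motor command
reply = ser.readline()     # read back anything the sketch prints
print(reply.decode(errors="replace"))
```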

Schedule-wise, my progress is slightly off. I was supposed to work on determining camera distances and locations, but instead I worked on the code, which is next week's task, so I'm not too far behind.

Next week, I hope to have the code finished, determine camera stand locations, and start assembling the camera stands.

Jae’s Status Report for 2/24/24

This week, our team was able to switch/finalize our idea from pool refereeing to car tracking and stream generation. Because of this, I was able to put in orders for the things we needed most at this time: the racetrack, Arduino, and motors. We will also need cameras soon, but for now we plan to use the camera I bought to test pool double hits. I personally did research on the hardware side: why the Arduino Uno, why servo motors over stepper motors, and the interfaces between the hardware components. I have not started thinking about how to mount the camera stands, but I plan on using some sort of stationary object, attaching a motor onto it, and then attaching the camera onto the rotor. Our group met or talked almost daily to get each part of the design report somewhat thought out; I personally focused on the system implementation and design studies on the hardware side, as well as testing/verification/validation. We are planning to meet tomorrow to join these thoughts together and have something to present to the TA and faculty. We have not made a new schedule yet, so we will do that tomorrow as well.

We are without a schedule at the moment, but we know we are quite behind. We hope that finalizing the idea will give us the boost needed to actually start working on the project, and we will be using spring break and some of the slack time to catch up. Personally, to catch up, I will get working on Arduino code to control the motors in fine increments once the motors and Arduino arrive. After that, I will start designing the camera stands.

Next week, I hope to have some form of code ready that lets the Arduino control the camera motor movements given input from the car detection/tracking module. I also hope to start designing the camera stand, or at least have an idea of what it will look like and how I will build it. Lastly, we will work on the design report all of next week.

Jae’s Status Report for 2/17/24

After receiving feedback on the previous week's proposal, our team took a step back to redefine our idea this week. Earlier in the week, I communicated with our faculty and TA to clarify their feedback. After validating their feedback and deciding to change ideas, I took charge of fleshing out our backup idea (use case/challenges/solution/testing): a camera system that auto-tracks a toy car race and auto-generates a stream of it. I also helped clean up our list of possible fouls to detect for our main idea.

After presenting everything to the staff on Wednesday and part of Thursday, we decided to pursue the detection of double-hit fouls and push fouls in pool. The biggest challenge then was determining what camera resolution would be required to capture a double hit. So I spent some time in the University Center game room performing double hits and recording them as iPhone video (~30fps) or slow-mo (~180fps) to see what the images look like frame by frame. Attached are the frame-by-frame images (~180fps) of a double hit.

 

Since there were only 1-2 frames between the two hits, and the iPhone slow-mo feature is not quite 200fps, we wanted to see what 260fps footage would look like, so I purchased a 260fps camera. It arrived today, so I will try it out tomorrow.
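As a rough back-of-the-envelope check (my arithmetic, not a measurement): each frame spans 1/fps, so at ~180fps a frame is about 1/180 ≈ 5.6ms, and 1-2 frames between contacts puts the two hits only ~6-11ms apart. At 260fps the frame interval shrinks to 1/260 ≈ 3.8ms, which should give us roughly 2-3 frames across the same gap.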

Our schedule has been delayed since proposal week, so I hope to get back on track by testing the 260fps camera: (1) if the footage is good, the team can move forward relying more on camera footage; (2) if the footage is not good enough, the team can look further into using other aspects of the hit to determine whether it was a foul. I will also be ordering the pool table and cameras this coming week.

Next week, I hope to order the parts needed to start obtaining footage of double hits. I also hope to have a plan for using frame-by-frame images to detect double hits via collisions as well as the distance traveled by the balls.