Month: April 2024

Kaitlyn’s Status Report for 4/27/24

Work Done

This week I spent a lot of time preparing the Final Presentation slides. I modified the slides to be more traffic-light themed and restructured them around our subsystems to make the deck more cohesive and reduce redundancy. …

Team Status Report 4/27/24

Potential Risks and Mitigation Strategies

At this point, the only issues we could realistically encounter are the fixed version of the PCBs not working as intended, and the overall latency of the integrated system being higher than we had planned for. …

Ankita’s Status Report 4/27/24

Work Done

This week, I spent most of my time conducting initial tests and working on the final presentation. The tests covered the latency and accuracy of the object detection model, which I evaluated on 3 different videos of the intersection and roughly 100 frames. Due to work from other classes I wasn’t able to spend much time on the project itself, but earlier in the week I found that, unfortunately, the code I wrote to thread the frame processing (so that inference delays don’t lag the video) crashes quite often when run on the Raspberry Pi, probably due to its lower compute power. I might try other approaches using multiple processes instead of multiple threads; if that doesn’t work, I may have to demo that code on my personal computer instead of the Pi, since we’ll be demoing the simulation/optimization algorithm on the Pi anyway.
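
If I do try multiprocessing, a minimal sketch of the approach might look like this, with a worker process running inference so that a crash or stall there cannot take down the video loop (detect_objects and the video filename are hypothetical stand-ins):

```python
import multiprocessing as mp
import queue

import cv2

def detect_objects(frame):
    """Hypothetical stand-in for the actual YOLO inference call."""
    return []

def inference_worker(frames, results):
    # Runs in its own process, so slow or crashing inference
    # cannot stall the video loop in the main process.
    while True:
        frame = frames.get()
        if frame is None:              # sentinel value: shut down cleanly
            break
        results.put(detect_objects(frame))

if __name__ == "__main__":
    frames, results = mp.Queue(maxsize=1), mp.Queue()
    worker = mp.Process(target=inference_worker, args=(frames, results), daemon=True)
    worker.start()

    cap = cv2.VideoCapture("intersection.mp4")   # assumed demo video filename
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        try:
            frames.put_nowait(frame)             # hand off only when the worker is free
        except queue.Full:
            pass                                 # worker still busy: skip this frame
        while not results.empty():
            detections = results.get()           # consume the latest detections
        cv2.imshow("feed", frame)
        if cv2.waitKey(30) & 0xFF == ord("q"):
            break

    frames.put(None)                             # signal the worker to exit
    cap.release()
```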

Schedule

For now, what I have left to do is record the updated videos of the intersection and determine the lane boundaries for those so that we have a clean demo of the object detection. After that, I will help my teammates with integration once the new PCB comes in and work on the final poster, video, and report.

Deliverables

As above.

Zina’s Status Report for 4/20/24

Since my last status report, I made a lot of progress on my portion of the project, but also encountered some unexpected challenges. I tested out the PCBs that we received by hooking up some LEDs to a breadboarded circuit and ran the Arduino code …

Team Status Report for 4/20/24

Potential Risks and Mitigation Strategies

The concurrent videos we took at the intersection this week involve a lot of hand shake, so hardcoded lane boundaries are not an option. Currently, the code that detects lane boundaries using Canny edge detection is not working, so we ordered …

Ankita’s Status Report for 4/20/24

Work Done

I ordered the new IP camera and portable batteries, and they came in this week. After getting the demo ready with the pre-recorded videos of the intersection, I will test the object detection model on the live video feed. Having looked at tutorials and example projects online that use this exact camera and were able to access its RTSP URL, I think this will be possible.
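
If the stream is reachable, reading it from the RPi should look roughly like the sketch below; the URL format, credentials, and stream path are assumptions that vary by camera model:

```python
import cv2

# Assumed RTSP URL; the exact user/password/path depend on the camera.
RTSP_URL = "rtsp://user:password@192.168.1.64:554/stream1"

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise RuntimeError("Could not open RTSP stream")

while True:
    ok, frame = cap.read()
    if not ok:          # dropped connection or end of stream
        break
    cv2.imshow("IP camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```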

Kaitlyn and I also took concurrent video of all 4 sides of the intersection with the help of some friends, but the videos are all quite shaky, so clear lane boundaries cannot be defined from them. I tried to use Canny edge detection and Hough transforms to detect the boundaries, but because cars often pass through the camera’s field of view and block the lane markings, the code does not work reliably at the moment. I ordered some tripods that should keep our phones stable while filming; we will re-record the videos sometime next week (reasonably before the demo), and if I can’t get the lane detection code working in time, I will hardcode the lane boundaries as I did with one of Zina’s videos, which was taken with a GoPro stabilized on a tripod.
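
For reference, the lane detection attempt boils down to an OpenCV pipeline like the following; the thresholds and filename are illustrative rather than tuned values from our code:

```python
import cv2
import numpy as np

frame = cv2.imread("intersection_frame.png")   # assumed filename
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Canny edge detection; thresholds are illustrative, not tuned values.
edges = cv2.Canny(blurred, 50, 150)

# Probabilistic Hough transform to pull candidate lane segments out of the edges.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=100, maxLineGap=20)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
```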

As expected, the object detection model runs much slower on the Raspberry Pi than on my computer (about 4 s per frame compared to 0.4 s), but that is still within our defined latency requirement for the object detection model, so it should be fine. I refactored the code so that the video continues to play at the expected frame rate while a frame is being processed: rather than processing every frame in sequence, the detector grabs whichever frame is most recent once it becomes free. Before, the code would process the first frame of video (taking 4 seconds), then the second frame, and so on, slowing the video down drastically. Now, each frame is processed in the background while the video keeps playing in the foreground at its correct speed.
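
A minimal sketch of that pattern with a single background thread (again, detect_objects and the filename are stand-ins):

```python
import threading
import time

import cv2

def detect_objects(frame):
    """Hypothetical stand-in for the YOLO inference call (~4 s per frame on the Pi)."""
    return []

lock = threading.Lock()
latest_frame = None
latest_detections = []

def worker():
    global latest_frame, latest_detections
    while True:
        with lock:
            frame, latest_frame = latest_frame, None   # take the newest frame, if any
        if frame is None:
            time.sleep(0.01)                           # nothing new yet
            continue
        result = detect_objects(frame)                 # slow call, off the main thread
        with lock:
            latest_detections = result

threading.Thread(target=worker, daemon=True).start()

cap = cv2.VideoCapture("intersection.mp4")             # assumed demo video filename
while True:
    ok, frame = cap.read()
    if not ok:
        break
    with lock:
        latest_frame = frame                           # worker picks up the newest frame
        detections = latest_detections                 # show whatever results are ready
    cv2.imshow("feed", frame)
    if cv2.waitKey(30) & 0xFF == ord("q"):             # ~30 ms/frame keeps playback speed
        break
cap.release()
```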

I also wrote code for the RPi to send the Arduino/PCB the current light state over a serial connection. Zina, Kaitlyn, and I tested it this week, and while the PCB needs to be modified slightly, the Arduino is decoding the light states properly and the code on the RPi is ready to go.
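
The serial link amounts to a short pyserial script along these lines; the device path, baud rate, and one-byte encoding of the light state are assumptions here, not our exact protocol:

```python
import serial

# Assumed device path and baud rate; /dev/ttyACM0 is typical for an
# Arduino plugged into a Raspberry Pi over USB.
ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

# Hypothetical one-byte encoding of the intersection's light state.
LIGHT_STATES = {"NS_GREEN": b"0", "NS_YELLOW": b"1", "EW_GREEN": b"2", "EW_YELLOW": b"3"}

def send_light_state(state):
    ser.write(LIGHT_STATES[state])   # Arduino decodes the byte and drives the PCB
    ser.flush()

send_light_state("NS_GREEN")
```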

Schedule

We need to prepare our final presentation, so that is what we will be working on this weekend. Leading up to the final demo, I will take the stable intersection videos so we can at least have hardcoded lane boundaries for demo purposes, and fine-tune that code to provide accurate and clear vehicle and pedestrian counts that match the input format of Kaitlyn’s optimization algorithm and simulation. I will also assist with any integration that is required between the optimization and traffic light circuit subsystems.

Learning Strategies

Throughout the course of this project, there have been many setbacks with setup and installation issues, as well as hardware that is incompatible with our project goals (for example, the various IP camera issues we have been having, where the RTSP URL is not accessible). A lot of the learning strategies I have adopted to deal with these issues involve scouring forums on the Internet (since most issues I run into have probably been experienced by someone else in the past) and finding example projects online that achieve something similar to what I want (for example, projects that let you view an IP camera feed from a Raspberry Pi). I then look at the parts and software used for those projects and order similar (if not the same) parts for our project’s purposes.

When determining which model to use for object detection, I ran into a lot of problems setting up the correct environment. Initially, I tried to run the models I found on GitHub locally (and back when I was still planning on using Haar cascades, I tried training the model locally as well), but due to version mismatches and conflicts with other software installed on my computer, I had to pivot to using Anaconda, and sometimes even Colab, to test the models and see how accurate they were on our sample intersection videos. In these cases, I used ChatGPT to help me diagnose some of the problems I was facing and brainstorm solutions. For example, I wasn’t sure how to make sure any existing installations of OpenCV were uninstalled from my Anaconda environment and how to install a specific version, and ChatGPT gave me clearly outlined steps for how to do it, which worked!

Deliverables

By the week of the demo (Monday 4/29), I will:

  • Take updated intersection videos using the tripods
  • Put together hardcoded lane boundaries using these videos in order to output accurate vehicle and pedestrian counts for each side of the intersection
  • Assist with further integration between the Raspberry Pi and Arduino/PCB
Kaitlyn’s Status Report for 4/20/24

Work Done

Since my last status report I have made a lot of progress on the project and am pretty much wrapping up all the tasks I have to do. I realized that the way I was calculating the actions was incorrect and that Q-learning …

Team Status Report for 4/6/24

Potential Risks and Mitigation Strategies

Some risks we currently have are that the additional code we need to add to the object detection model, to identify only the cars waiting at one side of the intersection, may add additional delays to our system …

Kaitlyn’s Status Report for 4/6/24

Work Done

This week I spent most of my time working on the Q-learning model that optimizes the traffic light intervals. I finished the basic model and it is working; however, the model keeps converging to really small intervals for the traffic lights, which I am working on debugging. The model does run and train at the moment. I think the problem is that I might be using the Q-values incorrectly and representing them inaccurately. I plan on taking a deeper look over the next couple of days and also trying to represent the actions differently.

The code below shows the Agent class that is used to implement Q-learning.
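
As a minimal sketch of that structure, assuming PyTorch, an epsilon-greedy policy, and illustrative hyperparameters (the real class is more involved):

```python
import random

import torch

class Agent:
    """Epsilon-greedy deep Q-learning agent (hyperparameter values are illustrative)."""

    def __init__(self, model, n_actions, gamma=0.9, epsilon=0.1, lr=1e-3):
        self.model = model                  # neural net mapping state -> Q-values
        self.n_actions = n_actions
        self.gamma = gamma                  # discount factor
        self.epsilon = epsilon              # exploration rate
        self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        self.loss_fn = torch.nn.MSELoss()

    def choose_action(self, state):
        # Explore with probability epsilon, otherwise act greedily on Q-values.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q_values = self.model(torch.as_tensor(state, dtype=torch.float32))
        return int(q_values.argmax().item())

    def train_step(self, state, action, reward, next_state, done):
        state = torch.as_tensor(state, dtype=torch.float32)
        next_state = torch.as_tensor(next_state, dtype=torch.float32)
        q_values = self.model(state)
        target = q_values.detach().clone()
        with torch.no_grad():
            best_next = self.model(next_state).max()
        # Bellman update: r + gamma * max_a' Q(s', a'), unless the episode ended.
        target[action] = reward if done else reward + self.gamma * best_next
        loss = self.loss_fn(q_values, target)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
```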

The code below shows the neural network used to implement deep Q-learning. The Q-values are represented by the neural network.
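
A correspondingly minimal network, assuming the state is a fixed-size vector; the layer sizes are placeholders:

```python
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a fixed-size state vector to one Q-value per action."""

    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),   # one Q-value per action
        )

    def forward(self, x):
        return self.net(x)
```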

Additionally, I refactored a lot of the existing code to make the utility functions I previously designed more accessible to the Q-learning code. I also made some changes to how the code uses the API data to adjust the simulation; however, this introduced some bugs when running the simulation for over 5 minutes, so I plan on reverting to the previous API integration instead. If time allows, I will optimize this further once the other tasks are done.

Schedule

I am mostly on track; however, I might need to push the ML code task a day or two longer, since I still haven’t had a chance to optimize the hyperparameters. I plan on spending additional time on the project in the weeks leading up to the deadline, so I think we will still finish on time.

Tasks this Week

  • Finish ML debugging
  • Finish ML hyperparameter optimization
  • Implement pedestrians in SUMO simulation

Testing and Verification

Reduced wait times: We have a week dedicated to testing, during which I will first run the simulation without any traffic light optimization. Using the same functions our Q-learning model uses to calculate average waiting times at the intersection, I will run 10 independent 5-minute simulations, compute the average wait time for each, and average those 10 values. I will then repeat the same procedure with our optimization model enabled and compare the two results to see if they meet our target reduction in wait time.

Safety: I also plan on running the model for a simulated hour to ensure that the system keeps working over long periods and that no safety violations occur. SUMO simulates car crashes, so if safety violations do occur (for example, due to light intervals being too short), we would be able to adjust our model to address those concerns.
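
A rough sketch of how both measurements could be collected through SUMO’s TraCI API (the config filename, step length, and use of accumulated waiting time are assumptions on my part):

```python
import traci

def run_trial(sumocfg, sim_seconds=300):
    """One 5-minute trial: mean accumulated wait time per vehicle, plus collisions."""
    traci.start(["sumo", "-c", sumocfg])      # headless run of the given config
    wait_by_vehicle, collisions = {}, 0
    for _ in range(sim_seconds):              # assumes the default 1 s step length
        traci.simulationStep()
        collisions += traci.simulation.getCollidingVehiclesNumber()
        for veh in traci.vehicle.getIDList():
            # Keep the latest accumulated waiting time observed for each vehicle.
            wait_by_vehicle[veh] = traci.vehicle.getAccumulatedWaitingTime(veh)
    traci.close()
    mean_wait = sum(wait_by_vehicle.values()) / max(len(wait_by_vehicle), 1)
    return mean_wait, collisions

# Average the per-trial means over 10 independent 5-minute runs.
trials = [run_trial("intersection.sumocfg") for _ in range(10)]
avg_wait = sum(w for w, _ in trials) / len(trials)
total_collisions = sum(c for _, c in trials)
```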

Ankita’s Status Report for 4/6/24

Work Done

I did manage to get a faster object detection model (found in this Git repo): it’s a YOLOv3 model, so a bit less accurate than YOLOv4, but it works for our purposes and is less computationally intensive. This model is also …