Weekly Status Reports

Kaitlyn’s Status Report for 4/27/24

Work Done: This week I spent a lot of time preparing the Final Presentation slides. I modified the slides to be more traffic-light themed and reorganized them by subsystem so the structure is more cohesive and less redundant. …

Team Status Report 4/27/24

Potential Risks and Mitigation Strategies

At this point, the only remaining risks are that the fixed version of the PCBs may not work as intended, and that the overall latency of the integrated system may be higher than we planned for when we test it on our final video footage. If the PCB doesn’t work properly, we have a breadboarded backup rigged to one copy of the previous version of the PCB that we can use for our demo. If our latency is higher than expected, we will continue looking for places in the code where we can improve efficiency, but we should still be able to demonstrate the system working on demo day, even if it runs a bit slower than we had designed for.

Changes to System Design

At this point, the only remaining change to the system design since the last status report is integrating the fixed version of the PCB. Other than that, all other portions of the project are more or less complete, with some minor adjustments needed here and there.

Testing

Object Detection

  • Accuracy: captured 100 frames from 3 stable videos of the intersection, computed the detection ratio (# of objects detected / # of objects actually present) for each frame, and averaged it across the 100 frames: 80% for vehicles, 90% for pedestrians. This metric makes sense because the model produces no false positives (unlike the Haar cascades).
  • Latency: 0.4 s to decode each frame on my personal computer, 4 s on the Raspberry Pi. When running updated code that added threading to offset the delay from frame processing, the Raspberry Pi crashed fairly often, so the proof-of-concept demo for object detection may need to run on my personal computer if I can’t find a way to make it less computationally intensive. (A sketch of how these accuracy and latency numbers can be computed follows this list.)
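
To make the methodology concrete, here is a minimal sketch of how the per-frame accuracy ratio and decode latency could be measured. The `detect_vehicles` callable and the hand-counted `ground_truth` list are placeholders, not our actual code.

```python
import time
import cv2

def measure_detection(video_path, ground_truth, detect_vehicles, num_frames=100):
    """ground_truth[i] = hand-counted number of objects in sampled frame i."""
    cap = cv2.VideoCapture(video_path)
    ratios, latencies = [], []
    for actual in ground_truth[:num_frames]:
        ok, frame = cap.read()
        if not ok:
            break
        start = time.perf_counter()
        boxes = detect_vehicles(frame)              # placeholder for the YOLO inference call
        latencies.append(time.perf_counter() - start)
        if actual > 0:
            ratios.append(len(boxes) / actual)      # detected / actually present
    cap.release()
    accuracy = sum(ratios) / len(ratios)            # ~0.80 vehicles, ~0.90 pedestrians
    avg_latency = sum(latencies) / len(latencies)   # ~0.4 s laptop, ~4 s on the Pi
    return accuracy, avg_latency
```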

Optimization

  • I wrote unit tests to verify that the traffic API and the average wait time calculations were working.
  • For this subsystem, I tested correctness by running the machine learning algorithm and printing out the Q-values at various points. This let me confirm that the logic was correct: the algorithm was actually selecting and acting on the maximum Q-value, there were no dimension errors, and the changed state of the light corresponded to the algorithm’s output. (A rough sketch of these checks follows this list.)
  • Wait Time Reduction: I calculated this by running the initial SUMO simulation of Fifth and Craig with just a fixed-time light, using light timings gathered from the footage we recorded for the Object Detection algorithm. (We also plan to check the light at various times of day to see whether it is a dynamic light, to make our control more accurate.) I then ran the same SUMO simulation of the Fifth and Craig intersection, but this time with our TraCI script telling the simulation how long the traffic light should stay green on each side of the intersection, with the intervals calculated by the Q-learning optimization algorithm. I am currently at a 48.9% wait time reduction after averaging the wait times over 8 periods of 3600 seconds in the simulation, both with and without the algorithm performing actions.
  • Latency: I added logging before and after one iteration of the Q-learning calculation to see how long it took. Averaged over 10 iterations of the light interval calculation, the latency was 0.1024 seconds.
  • The metrics for this subsystem exceeded the target values, so I don’t think we need to make many adjustments; however, I do plan on graphing the wait time over training episodes to see whether we can tune the hyperparameters further.
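
For illustration, here is a rough sketch of the correctness and latency checks described above, assuming a NumPy Q-table. The table shape, candidate intervals, and wait-time numbers are placeholders rather than our actual values.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
q_table = rng.random((10, 5))            # hypothetical (num_states x num_actions) table
INTERVALS = (10, 20, 30, 40, 50)         # candidate green-light durations in seconds

def choose_interval(state):
    """Greedy policy: act on the action with the maximum Q-value for this state."""
    action = int(np.argmax(q_table[state]))
    return INTERVALS[action]

# Latency check: log wall-clock time around one interval calculation.
start = time.perf_counter()
interval = choose_interval(state=3)
print(f"chose a {interval}s green interval in {time.perf_counter() - start:.4f}s")

# Wait-time reduction: averaged waits from paired SUMO runs (fixed-time vs. Q-learning).
fixed_avg_wait, optimized_avg_wait = 45.0, 23.0   # placeholder numbers, not our measurements
reduction = (fixed_avg_wait - optimized_avg_wait) / fixed_avg_wait
print(f"wait time reduction: {reduction:.1%}")
```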

Traffic Light Circuit

  • There aren’t many quantitative metrics for this component of the project, since the success of the circuit is based mostly on whether or not it reflects the desired changes. The only potential source of latency from the circuit is the communication between the Arduino and the TLC5928 LED driver chip, which consistently took less than 20 microseconds when measured using print statements to the Arduino serial monitor. Beyond this test, I wrote a testbench that cycles through all of the possible combinations of light patterns we need to represent, and the current circuit (the PCB connected to the breadboarded LEDs) handled this with no issues. I will just need to ensure that our final PCB shows the same behavior once it is fully assembled.
  • Ankita and I also tested the serial communication between the RPi and the Arduino, and we had no trouble getting the Arduino to reflect the desired outputs based on the state determined by the simulation running on the RPi (see the sketch following this list).
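
Here is a minimal sketch of what the RPi side of this serial test could look like, assuming pyserial; the port name, baud rate, and one-byte state encoding are assumptions, not necessarily what our Arduino sketch actually expects.

```python
import serial  # pyserial

# Hypothetical one-byte encoding of the light states the simulation can request.
LIGHT_STATES = {"NS_GREEN": b"0", "NS_YELLOW": b"1", "EW_GREEN": b"2", "EW_YELLOW": b"3"}

with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as arduino:
    for name, code in LIGHT_STATES.items():
        arduino.write(code)                 # send the state chosen by the simulation on the RPi
        echo = arduino.readline().strip()   # assumes the Arduino sketch echoes what it displayed
        print(name, "->", echo)
```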

Overall Takeaways

  • Preliminary testing for subsystems completed
  • Some of the integration has been tested, but we still need to assemble the entire system and evaluate if there are any hidden latency issues that need to be addressed
  • Despite some challenges in the last couple of weeks, we really pulled everything together and are nearly ready for demo day

Ankita’s Status Report 4/27/24

Work Done: This week, I spent most of the time conducting initial tests (for latency and accuracy of the object detection model, which I did by evaluating its performance on 3 different videos of the intersection and ~100 frames) and working on …

Zina’s Status Report for 4/20/24

Since my last status report, I made a lot of progress on my portion of the project, but also encountered some unexpected challenges. I tested out the PCBs that we received by hooking up some LEDs to a breadboarded circuit and ran the Arduino code …

Team Status Report for 4/20/24

Potential Risks and Mitigation Strategies

The concurrent videos we took at the intersection this week have a lot of hand shake, so hardcoded lane boundaries are not an option for them. The code that detects lane boundaries using Canny edge detection is currently not working either, so we ordered tripods for a steady camera angle and will retake the videos next week.

There is no way to systematically generate pedestrians in the simulation; however, we don’t think this is a big issue, since for the demo we will manually spawn pedestrians according to what the camera would see.

Our simulation also spawns cars with very predictable routes, so the algorithm may appear more efficient than it would be in real life. We plan on adding more randomness to the simulation before the final demo.
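
One possible way to add that randomness (sketch only) is to spawn vehicles on randomly chosen routes via TraCI; the config file name, route IDs, and edge IDs below are placeholders for the actual Fifth/Craig network.

```python
import random
import traci

# Placeholder routes; the edge IDs would come from the Fifth/Craig SUMO network file.
ROUTES = {
    "route_ns": ["fifth_in_n", "fifth_out_s"],
    "route_ew": ["craig_in_e", "craig_out_w"],
}

traci.start(["sumo", "-c", "fifth_craig.sumocfg"])
for route_id, edges in ROUTES.items():
    traci.route.add(route_id, edges)

for step in range(3600):
    if random.random() < 0.1:               # roughly 1 vehicle every 10 steps, at random
        traci.vehicle.add(f"veh{step}", random.choice(list(ROUTES)))
    traci.simulationStep()
traci.close()
```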

Our traffic light PCB needs to be re-ordered due to an issue with some of the pins. The risk mitigation in this case would be to use a breadboarded circuit.

Changes to System Design

The Q-learning model now takes in the length of the light interval instead of the current state the light should be in. The reasoning is explained more thoroughly in Kaitlyn’s status report, but it was mostly for ease of implementation as well as stronger safety guarantees.

Overall Takeaways and Progress

  • Optimization algorithm working with very good improvement in comparison to fixed-time interval implementation
  • Integration between RPi and Arduino should be good to go once PCB is ready
  • PCB needs to be reordered
  • Object detection code needs to be improved / new videos need to be taken
  • Simulation has been fixed and runs indefinitely, allowing us to train the ML model
  • ML model has been trained and works with almost 50% reduction in wait time!

 

Ankita’s Status Report for 4/20/24

Work Done: I ordered the new IP camera and portable batteries, and they came in this week. After getting the demo ready for the pre-recorded videos at the intersection, I will test the object detection model on the live video feed (after looking at tutorials/examples online that …

Kaitlyn’s Status Report for 4/20/24

Work Done: Since my last status report I have made a lot of progress on the project and am pretty much wrapping up all the tasks I have to do. I realized that the way I was calculating the actions was incorrect and that Q-learning …

Team Status Report for 4/6/24

Potential Risks and Mitigation Strategies

One risk we currently have is that the additional code we need to add to the object detection model, to identify only the cars waiting at one side of the intersection, may add extra delay to our system. To reduce the complexity of this code, we may choose to hardcode lane boundaries, specific to each video, rather than detect the lanes using CV algorithms.
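
As a rough illustration of the hardcoded-lane-boundary fallback, detections could simply be filtered to those whose box center falls inside a per-video rectangle; the pixel coordinates and box format here are made up.

```python
# Hardcoded lane region for one pre-recorded video (made-up pixel coordinates).
LANE_BOX = (120, 300, 420, 700)          # (x_min, y_min, x_max, y_max)

def in_lane(box, lane=LANE_BOX):
    """box = (x1, y1, x2, y2) from the detector; True if its center lies in the lane region."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    x_min, y_min, x_max, y_max = lane
    return x_min <= cx <= x_max and y_min <= cy <= y_max

detections = [(130, 350, 200, 420), (600, 100, 680, 180)]   # example detector output
waiting_cars = [d for d in detections if in_lane(d)]
print(len(waiting_cars), "car(s) waiting in the lane")
```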

Another risk is that our Q-learning algorithm for optimization keeps converging to very short traffic light intervals. If we can’t address this issue, we may change our approach to a simpler model (a decision tree, for example).

Our traffic light PCB came in this week. If the circuit doesn’t work, we’ll find a breadboard-able version of the LED driver chip and build the circuit on a breadboard.

Changes to System Design

The main changes to the design are the following:

  • For our demo, we will not be using a live camera feed or integrating the object detection code with the optimization algorithm, for the reasons described in Ankita’s status report for this week. The live camera feed and pre-recorded footage reflect real conditions under a fixed-interval system, while the optimization algorithm assumes that the object counts it receives will change according to the calculated light intervals, so the optimization algorithm cannot work with object counts taken directly from the detection model. We will demonstrate these two systems separately.

Our changes are reflected in this updated schedule.

Testing Plan

As a team, we will mainly be conducting integration tests.

Firstly, we must ensure that the time taken to detect objects in a video frame, plus the time taken to determine the updated light state and display it on the PCB, does not exceed 5 seconds. To do this, we will determine the latency of each component and add them together (a rough sketch of this budget check is below).
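
A trivial sketch of this budget check, using per-component numbers quoted elsewhere in these reports; the final values will come from our own integration measurements.

```python
# Per-component latencies in seconds (values quoted in these reports; final numbers TBD).
component_latency = {
    "object_detection_per_frame": 4.0,     # Raspberry Pi decode/inference time
    "q_learning_interval_calc": 0.1024,    # one Q-learning interval calculation
    "arduino_led_driver_update": 20e-6,    # Arduino -> TLC5928 update, < 20 microseconds
}
total = sum(component_latency.values())
print(f"end-to-end latency ~ {total:.2f}s (budget: 5s):", "OK" if total <= 5 else "over budget")
```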

The primary integration will be between the Raspberry Pi running the optimization algorithm and the PCB mounted on top of the Arduino. The testing for this will ensure that the PCB accurately reflects the light states determined by the optimization algorithm. We are expecting 100% accuracy for this.

Overall Takeaways and Progress

  • New YOLO model working and delay is within specifications
  • Basic optimization algorithm written – now in debug stage
  • The PCB has arrived and the assembly/testing process will begin this week

 

Kaitlyn’s Status Report for 4/6/24

Work Done: This week I spent most of the time working on the Q-learning model that optimizes the traffic light intervals. I finished the basic model and it is working; however, the model keeps converging to really small intervals for the traffic …