Team Status Report for 4/20/24

Potential Risks and Mitigation Strategies

The concurrent videos we took at the intersection this week have a lot of camera shake from being hand-held, so hardcoded lane boundaries are not an option. The code that detects lane boundaries using Canny edge detection is also not working yet, so we ordered tripods for a steady camera angle and will be retaking those videos next week.
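For reference, below is a rough sketch of what a Canny-based lane detection pipeline looks like with OpenCV. The file name, thresholds, and region-of-interest vertices are placeholders for illustration, not the values from our actual code.

```python
# Sketch of Canny-based lane boundary detection (OpenCV).
# Thresholds and the ROI polygon are illustrative placeholders.
import cv2
import numpy as np

def detect_lane_lines(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Mask everything outside a rough region of interest around the intersection
    mask = np.zeros_like(edges)
    h, w = edges.shape
    roi = np.array([[(0, h), (w // 2, h // 2), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    masked = cv2.bitwise_and(edges, mask)

    # Fit straight line segments to the remaining edge pixels
    return cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=20)

cap = cv2.VideoCapture("intersection.mp4")  # placeholder file name
ok, frame = cap.read()
if ok:
    print(detect_lane_lines(frame))
cap.release()
```

With a tripod-stabilized camera, the ROI and the detected lines should stay consistent from frame to frame, which is what makes this approach usable.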

There is no way to systematically generate pedestrians in the simulation; however, we don't think this is a big issue, since for the demo we will be manually spawning pedestrians according to what the camera sees.

Our simulation also spawns cars with very predictable routes, so the algorithm may appear more efficient than it would be in real life. We plan on adding more randomness to the simulation before the final demo.
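One possible way to add that randomness is sketched below: randomize both the inter-arrival times and the route each car takes. The route names, arrival rate, and scheduling function here are hypothetical stand-ins, not part of our actual simulator.

```python
# Sketch of a randomized car spawn schedule for the simulation.
# ROUTES and the arrival rate are illustrative assumptions.
import random

ROUTES = ["north_to_south", "south_to_north", "east_to_west", "west_to_east",
          "north_to_east", "south_to_west"]

def generate_spawn_schedule(duration_s, arrival_rate=0.2, seed=None):
    """Return (time, route) pairs with exponentially distributed inter-arrival gaps."""
    rng = random.Random(seed)
    schedule, t = [], 0.0
    while t < duration_s:
        t += rng.expovariate(arrival_rate)        # random gap between cars
        schedule.append((t, rng.choice(ROUTES)))  # random route per car
    return schedule

for spawn_time, route in generate_spawn_schedule(60, seed=42):
    print(f"{spawn_time:6.1f}s  {route}")
```

Seeding the generator keeps runs reproducible while still giving the algorithm less predictable traffic than fixed spawn patterns.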

Our traffic light PCB needs to be re-ordered due to an issue with some of the pins. The risk mitigation in this case would be to use a breadboarded circuit.

Changes to System Design

The Q-learning model takes in the length of the light interval instead of the current state it should be in. The reasoning for this is explained more thoroughly in Kaitlyn's status report, but it was mostly due to ease of implementation as well as greater safety guarantees.
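To illustrate the idea, here is a rough sketch of a tabular Q-learning agent whose action is a green-interval length drawn from a discrete set rather than a light state. The state encoding, action set, and hyperparameters are illustrative assumptions, not our exact implementation; the intuition behind the safety guarantee is that the phase order itself never changes, only how long each phase lasts.

```python
# Sketch of tabular Q-learning where the action is the green-interval length
# (seconds). State encoding, action set, and hyperparameters are illustrative.
import random
from collections import defaultdict

ACTIONS = [10, 20, 30, 40]  # candidate green-interval lengths in seconds

class IntervalQAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_interval(self, state):
        if random.random() < self.epsilon:                       # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])    # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# Example step: the state is a coarse (queue_NS, queue_EW) bucket and the
# reward is the negative total wait time accumulated during the interval.
agent = IntervalQAgent()
interval = agent.choose_interval((3, 1))
agent.update((3, 1), interval, reward=-42.0, next_state=(1, 2))
```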

Overall Takeaways and Progress

  • Optimization algorithm is working, with a significant improvement over the fixed-time interval implementation
  • Integration between RPi and Arduino should be good to go once the PCB is ready (a minimal sketch of the serial link follows this list)
  • PCB needs to be reordered
  • Object detection code needs to be improved / new videos need to be taken
  • Simulation has been fixed and runs indefinitely, allowing us to train the ML model
  • ML model has been trained and achieves almost a 50% reduction in wait time!
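As a reference for the RPi-to-Arduino integration mentioned above, here is a minimal sketch of the RPi side of the serial link using pyserial. The port name, baud rate, and message format are assumptions for illustration, not our exact protocol.

```python
# Sketch of the RPi side of the RPi-to-Arduino link over USB serial (pyserial).
# Port, baud rate, and message format are illustrative assumptions.
import serial
import time

def send_light_command(port="/dev/ttyACM0", baud=9600, phase="NS_GREEN", interval_s=20):
    with serial.Serial(port, baud, timeout=1) as link:
        time.sleep(2)                                    # give the Arduino time to reset after the port opens
        msg = f"{phase},{interval_s}\n".encode("ascii")  # e.g. "NS_GREEN,20\n"
        link.write(msg)
        ack = link.readline().decode("ascii").strip()    # assumes the Arduino echoes an acknowledgement
        return ack

if __name__ == "__main__":
    print(send_light_command())
```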

 


