Team Status Report for 4/6/24

Potential Risks and Mitigation Strategies

One risk is that the additional code we need to add to the object detection model, so that it identifies only the cars waiting at one side of the intersection, may add further delay to our system. To reduce the complexity of this code, we may choose to hardcode lane boundaries specific to each video rather than detect the lanes using CV algorithms.
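
As a rough illustration of the hardcoded-lane fallback, the sketch below counts only the detections whose bounding-box centers fall inside a fixed lane region. The pixel coordinates and the (x1, y1, x2, y2) detection format are assumptions for illustration; the real boundaries would be tuned per video.

```python
# Minimal sketch: count cars whose box centers fall inside a hardcoded lane region.
# The coordinates below are placeholders specific to one hypothetical video.

# Hardcoded boundary (left, top, right, bottom) of the waiting lane, in pixels
WAITING_LANE = (90, 310, 300, 470)

def count_waiting_cars(detections, lane=WAITING_LANE):
    """Count detections whose bounding-box center lies inside the lane rectangle."""
    left, top, right, bottom = lane
    count = 0
    for (x1, y1, x2, y2) in detections:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        if left <= cx <= right and top <= cy <= bottom:
            count += 1
    return count
```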

Another risk is that our Q-learning optimization algorithm keeps converging to very short traffic light intervals. If we cannot address this issue, we may switch to a simpler model (a decision tree, for example).
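
For context, the kind of interval-selection loop we are debugging can be sketched as generic tabular Q-learning over a small set of candidate green intervals. The state encoding, actions, reward, and hyperparameters below are placeholders for illustration, not our actual implementation.

```python
import random
from collections import defaultdict

# Candidate green intervals in seconds (illustrative values only)
ACTIONS = [10, 20, 30, 45]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: state -> list of action values; states are assumed to be hashable
# tuples of discretized per-approach vehicle counts.
Q = defaultdict(lambda: [0.0] * len(ACTIONS))

def choose_interval(state):
    """Epsilon-greedy choice over the candidate green intervals."""
    if random.random() < EPSILON:
        idx = random.randrange(len(ACTIONS))
    else:
        idx = max(range(len(ACTIONS)), key=lambda a: Q[state][a])
    return idx, ACTIONS[idx]

def update(state, action_idx, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[next_state])
    Q[state][action_idx] += ALPHA * (reward + GAMMA * best_next - Q[state][action_idx])
```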

Our traffic light PCB came in this week. If the circuit doesn’t work, we’ll find a breadboard-able version of the LED driver chip and build the circuit on a breadboard.

Changes to System Design

The main changes to the design are the following:

  • For our demo, we will not be using a live camera feed or integrating the object detection code with the optimization algorithm, for the reasons described in Ankita’s status report for this week. The live camera feed and prerecorded footage reflect real conditions under a fixed-interval light system, while the optimization algorithm assumes that the object counts it receives change in response to the calculated light intervals, so the optimization algorithm cannot work with object counts taken directly from the detection model. We will demonstrate these two systems separately.

Our changes are reflected in this updated schedule.

Testing Plan

As a team, we will mainly be conducting integration tests.

Firstly, we must ensure that the time taken to detect objects in a video frame, combined with the time taken to determine the updated light state and display it on the PCB, does not exceed 5 seconds. To verify this, we will measure the latency of each component and add the latencies together.
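
One simple way to collect these numbers is to time each stage separately and sum the results, as in the sketch below. The three callables are hypothetical placeholders standing in for the detection model, the optimization algorithm, and the PCB update step.

```python
import time

def measure_pipeline_latency(frame, run_detection, run_optimization, update_pcb):
    """Time each pipeline stage and check the total against the 5-second budget."""
    timings = {}

    start = time.perf_counter()
    detections = run_detection(frame)          # object detection on one frame
    timings["detection"] = time.perf_counter() - start

    start = time.perf_counter()
    light_state = run_optimization(detections)  # compute the updated light state
    timings["optimization"] = time.perf_counter() - start

    start = time.perf_counter()
    update_pcb(light_state)                     # push the state out to the PCB
    timings["pcb_update"] = time.perf_counter() - start

    timings["total"] = sum(timings.values())
    assert timings["total"] <= 5.0, f"Latency budget exceeded: {timings}"
    return timings
```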

The primary integration will be between the Raspberry Pi running the optimization algorithm and the PCB mounted on top of the Arduino. The testing for this will ensure that the PCB accurately reflects the light states determined by the optimization algorithm. We are expecting 100% accuracy for this.
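
Since the exact Pi-to-Arduino interface is not finalized in this report, the test sketch below assumes a simple USB serial link (via pyserial) in which the Pi sends a one-character light state and the Arduino echoes back the state it is actually displaying. The port, baud rate, and message format are assumptions for illustration.

```python
import serial  # pyserial; assumes the Pi talks to the Arduino over USB serial

# Assumed encoding: one character per light state (placeholder protocol)
LIGHT_STATES = ["R", "G", "Y"]

def test_pcb_reflects_states(port="/dev/ttyACM0", baud=9600, trials=100):
    """Send known light states and verify the Arduino echoes the displayed state."""
    matches = 0
    with serial.Serial(port, baud, timeout=2) as link:
        for i in range(trials):
            expected = LIGHT_STATES[i % len(LIGHT_STATES)]
            link.write(expected.encode())
            reported = link.readline().decode().strip()
            if reported == expected:
                matches += 1
    accuracy = matches / trials
    print(f"PCB state accuracy: {accuracy:.0%}")
    return accuracy == 1.0  # we expect 100% accuracy here
```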

Overall Takeaways and Progress

  • New YOLO model working and delay is within specifications
  • Basic optimization algorithm written – now in debug stage
  • The PCB has arrived and the assembly/testing process will begin this week
