Team Status Report 4/27/24
Potential Risks and Mitigation Strategies
At this point, the only issues we could realistically encounter are if the fixed version of the PCBs doesn't work as intended and if the overall latency of the integrated system is higher than we planned for when we test it on our final video footage. If the PCB doesn't work properly, we have a breadboarded backup rigged to one copy of the previous version of the PCB that we can use for our demo. If our latency is higher than expected, we will keep looking for places in the code where we can improve efficiency, but we should still be able to demonstrate a working system on demo day, even if it runs a bit slower than we designed for.
Changes to System Design
At this point, the only change to the system design since the last status report will be integrating the fixed version of the PCB. Other than that, all of the other portions of the project are more or less complete, with some minor adjustments needed here and there.
Testing
Object Detection
- Accuracy: captured 100 frames from 3 stable videos of the intersection, computed the detection rate (# of cars detected / # of cars actually there) for each frame, and averaged it across the 100 frames: 80% for vehicles, 90% for pedestrians (a sketch of this calculation follows this list). This method of measuring accuracy makes sense because there are no false positives from this model (unlike with the Haar cascades).
- Latency: 0.4 s to decode each frame on my personal computer, 4 s on the Raspberry Pi. When running updated code that added threading to combat the delay from frame processing, the Raspberry Pi crashed frequently, so the proof-of-concept demo for object detection may need to run on my personal computer if I can't find a way to make it less computationally intensive (a sketch of this threading pattern also follows this list).
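To make the accuracy calculation above concrete, here is a minimal sketch; the helper name and the per-frame counts are illustrative placeholders, not our real data.

```python
# Sketch of the per-frame detection accuracy calculation; the counts
# below are illustrative placeholders, not our actual measurements.

def average_detection_accuracy(detected_counts, actual_counts):
    """Average (# detected / # actually present) over frames with objects."""
    ratios = [d / a for d, a in zip(detected_counts, actual_counts) if a > 0]
    return sum(ratios) / len(ratios)

# In our test, 100 frames' worth of counts would go in these lists.
detected = [3, 2, 4]   # cars the model found in each frame
actual = [4, 2, 5]     # cars actually present (hand-counted)
print(f"accuracy: {average_detection_accuracy(detected, actual):.0%}")
```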
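The threading change mentioned in the latency bullet followed the standard producer/consumer frame-grabbing pattern; this is a minimal sketch assuming OpenCV (cv2) is installed, with a placeholder video path.

```python
# Minimal sketch of threaded frame grabbing: one thread reads frames
# while the main thread runs detection on the most recent frame.
import threading
import queue

import cv2

frames = queue.Queue(maxsize=2)   # tiny buffer: we only want recent frames
done = threading.Event()

def grab(cap):
    while True:
        ok, frame = cap.read()
        if not ok:
            done.set()            # end of video (or camera error)
            return
        if frames.full():
            try:
                frames.get_nowait()   # drop the stale frame, don't fall behind
            except queue.Empty:
                pass
        frames.put(frame)

cap = cv2.VideoCapture("intersection.mp4")   # placeholder path
threading.Thread(target=grab, args=(cap,), daemon=True).start()

while not (done.is_set() and frames.empty()):
    try:
        frame = frames.get(timeout=0.5)
    except queue.Empty:
        continue
    # ... run the object detection model on `frame` here ...
```

Keeping the queue small and dropping stale frames is what keeps the detector working on recent frames instead of accumulating an ever-growing backlog.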
Optimization
- I wrote unit tests to make sure the traffic API and the average wait time calculations were working correctly (a sketch of one such test follows this list).
- For this subsystem, I tested correctness by running the machine learning algorithm and printing the Q-values at various points to verify my logic: that the algorithm was actually selecting and acting on the maximum Q-value, that there were no dimension errors, and that the changed state of the light corresponded to the algorithm's output (a sketch of this check follows this list).
- Wait Time Reduction: I calculated this by running the initial SUMO simulation of Fifth and Craig with just a fixed-time light. We derived the timings for the light from the footage we recorded for the Object Detection algorithm, and we also plan to go back and check the light at various times of day to see whether it is a dynamic light, which would make our control more accurate. We then ran the same SUMO simulation of the Fifth and Craig intersection, but this time with our TraCI script telling the SUMO simulation how long the traffic light should stay green on each side of the intersection; the script calculates these intervals with the Q-Learning optimization algorithm (a sketch of this loop follows this list). I am currently at a 48.9% wait time reduction after averaging the wait times over 8 periods of 3600 seconds in the simulation, both with and without the algorithm performing actions.
- Latency: I added logging before and after one iteration of the Q-Learning calculation to see how long the calculation took (a timing sketch follows this list). Averaged over 10 iterations of the light interval calculation, the latency was 0.1024 seconds.
- The metrics of this subsystem exceeded the target values, so I don't think we need to make many adjustments; however, I do plan to graph the wait time over episodes to see whether we can tune the hyperparameters accordingly.
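As an illustration of the unit tests for the wait time calculation, here is a minimal sketch; `average_wait_time` is a stand-in name for our actual helper, not its real signature.

```python
# Sketch of a unit test for the average wait time calculation;
# the helper below is a simplified stand-in for the real one.
import unittest

def average_wait_time(wait_times):
    """Mean wait time in seconds; 0.0 if no vehicles were observed."""
    return sum(wait_times) / len(wait_times) if wait_times else 0.0

class TestWaitTime(unittest.TestCase):
    def test_average(self):
        self.assertAlmostEqual(average_wait_time([10.0, 20.0, 30.0]), 20.0)

    def test_empty(self):
        self.assertEqual(average_wait_time([]), 0.0)

if __name__ == "__main__":
    unittest.main()
```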
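The Q-value spot check can be sketched like this, assuming a tabular Q-learning setup; the table dimensions and state encoding here are illustrative, not our actual ones.

```python
# Sketch of the Q-value correctness check: print the Q-row, pick the
# greedy action, and assert that it really is the maximum Q-value.
import numpy as np

n_states, n_actions = 16, 4          # placeholder dimensions
Q = np.zeros((n_states, n_actions))  # tabular Q-values

def select_action(state):
    q_row = Q[state]
    action = int(np.argmax(q_row))   # greedy action = maximum Q-value
    print(f"state={state} Q={q_row} -> action={action}")  # manual logic check
    assert Q[state, action] == q_row.max()  # sanity check: max was selected
    return action
```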
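For the wait time reduction test, the TraCI loop looks roughly like the sketch below; the config file name, traffic light ID, and the interval chooser are placeholders standing in for our actual script and Q-Learning policy.

```python
# Sketch of the TraCI script driving the SUMO light; config path,
# light ID, and the interval chooser are placeholders.
import traci

def choose_green_interval(step):
    return 30.0  # stand-in for the Q-Learning policy's output (seconds)

traci.start(["sumo", "-c", "fifth_craig.sumocfg"])  # placeholder config
tls_id = "center"                                   # placeholder light ID
for step in range(3600):                            # one 3600 s period
    traci.simulationStep()
    if step % 60 == 0:  # re-plan once per simulated minute
        # setPhaseDuration sets the remaining time of the current phase
        traci.trafficlight.setPhaseDuration(tls_id, choose_green_interval(step))
traci.close()
```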
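The latency measurement amounts to timing one iteration of the calculation; here is a minimal sketch, where `q_learning_step` is a stand-in for one iteration of the light interval calculation.

```python
# Sketch of the latency measurement averaged over 10 iterations.
import time

def q_learning_step():
    pass  # placeholder for one Q-Learning update + interval calculation

runs = 10
start = time.perf_counter()
for _ in range(runs):
    q_learning_step()
print(f"avg latency: {(time.perf_counter() - start) / runs:.4f} s")
```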
Traffic Light Circuit
- There aren't many quantitative metrics for this component of the project, since the success of the circuit is based mostly on whether it reflects the desired changes. The only potential source of latency in the circuit is the communication between the Arduino and the TLC5928 LED driver chip, which consistently took less than 20 microseconds when measured using print statements to the Arduino serial monitor. Beyond this test, I wrote a testbench that cycles through all of the possible combinations of light patterns we need to represent, and the current circuit (the PCB connected to the breadboarded LEDs) handled this without issue. I will just need to verify that the final PCB behaves the same way once it is fully assembled.
- Ankita and I also tested the serial communication between the RPi and the Arduino, and had no trouble getting the Arduino to reflect the desired outputs based on the state determined by the simulation running on the RPi (a sketch of the RPi side of this link follows below).
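On the RPi side, the serial link can be sketched as follows, assuming pyserial; the port, baud rate, and one-byte state encoding are placeholders rather than our exact protocol.

```python
# Sketch of the RPi side of the RPi-to-Arduino serial link.
import serial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # placeholder port/baud

def send_light_state(state_byte):
    """Send the simulation's light state for the Arduino to display."""
    ser.write(bytes([state_byte]))

send_light_state(0b0001)  # e.g., green on one approach (illustrative encoding)
```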
Overall Takeaways
- Preliminary testing for subsystems completed
- Some of the integration has been tested, but we still need to assemble the entire system and evaluate whether there are any hidden latency issues that need to be addressed
- Despite some challenges in the last couple of weeks, we really pulled everything together and are nearly ready for demo day