Shayan’s Status Report for 5/8

PROGRESS MADE

My sole focus this week was the Look Light traffic light detection and color classification algorithm. As mentioned last week, my main objective was to increase the classification accuracy of the algorithm given the outputs from the CV algorithm.

To address these new problems and to leverage the fact that the input images reliably contain (at least portions of) traffic lights, I explored how much accuracy I could gain by scrapping circle detection altogether and relying primarily on color filtering and thresholding. In other words, this alternative algorithm still filters on red, yellow, and green in series with thresholding, but now classifies the color as the argmax over {red, yellow, green} of the count of pixels remaining after thresholding.
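As a rough sketch of this filter-and-argmax idea, here is a minimal version using simplified RGB-space filters; every constant below is an assumption standing in for our actual tuned filters and thresholds:

```python
import numpy as np

def classify_light_color(rgb):
    """Classify a snipped traffic light image as red, yellow, or green.

    rgb: H x W x 3 uint8 array. The RGB-space masks below are simplified
    stand-ins for the real tuned color filters; the brightness cutoff
    plays the role of the thresholding step.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    bright = (r + g + b) > 400              # thresholding: keep illuminated pixels
    masks = {
        "red":    bright & (r > 200) & (g < 120),
        "yellow": bright & (r > 200) & (g > 150) & (b < 120),
        "green":  bright & (g > 200) & (r < 120),
    }
    # Argmax over {red, yellow, green} of the surviving pixel counts.
    counts = {color: int(mask.sum()) for color, mask in masks.items()}
    return max(counts, key=counts.get)
```

The key difference from the circle-based approach is that nothing here depends on the light being a complete circle, which is exactly the property that partial snips were breaking.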

Results

This approach did increase the classification accuracy. The previous iteration's accuracy on the inputs passed in from the CV algorithm was unfortunately around 50% on our validation set, which is unacceptable. The new algorithm's accuracy increased to nearly 90%, close to our overall goal metric from our Design. The full results are reported below:

Overall:
  Non-Look Light: 14
  Look Light:     29
  Total:          43

Of Look Lights (rows: algorithm output; columns: actual color):

            Actual R   Actual Y   Actual G
  Output R      14         2          0
  Output Y       0         2          0
  Output G       1         0         10

  Correct:    0.896552
  False Pos:  0.034483
  False Neg:  0.068966
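For reference, the reported rates can be reproduced from the confusion matrix. The false positive / false negative split below is my reading of the table (outputting green for a non-green light is the dangerous case, so it counts as a false positive; other off-diagonal cells count as false negatives):

```python
import numpy as np

# Confusion matrix from the table above: rows = algorithm output (R, Y, G),
# columns = ground truth (R, Y, G).
confusion = np.array([
    [14, 2,  0],
    [ 0, 2,  0],
    [ 1, 0, 10],
])

total = confusion.sum()                       # 29 Look Lights
correct = np.trace(confusion) / total         # diagonal = correct classifications
# Assumed convention: false positive = output green for a non-green light.
false_pos = confusion[2, :2].sum() / total
# Everything else off-diagonal counted as a false negative.
false_neg = (confusion.sum() - np.trace(confusion)
             - confusion[2, :2].sum()) / total
```

With these definitions the three rates come out to 26/29, 1/29, and 2/29, matching the reported 0.896552, 0.034483, and 0.068966.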
PROGRESS

The Look Light algorithm is now in a (more) acceptable state for the final demo. 🙂

DELIVERABLES FOR NEXT WEEK

Fix any small problems when integrating.

Present / demo our findings! 🙂

Jeanette’s Status Report for 5/8

This Week:

  • Did further real time testing with the device
  • Went to various crosswalks and used the device
  • Prepared for the final presentation

I went to various stoplights, but the device was not able to detect the state of the light, mostly due to the filters. Below are example photos:

Captured image of the stoplight for test 1
Cropped image of the stoplight for test 1
Captured image of the stoplight for test 2
Cropped image of the stoplight for test 2

Next week:

  • Work with Ricky to debug the filters
  • Prepare the final report, video, and poster

Team Status Report for 5/8

This week we focused on debugging our device and on assembling the belt and attaching the Raspberry Pi + camera to it. We tested it out at the stoplight again, found that it was having problems in some cases, and debugged those.

Risks and Mitigations

Current risks include the latency and the accuracy of our device. We had to settle for our minimum accuracy target, and the latency is currently too long for a user to safely rely on the device at a traffic light. We are looking into what else could be done to cut that time down.

Schedule

We just need to wrap up our final presentation requirements: the video, poster, and final report. Last-minute tasks may include adjustments to the belt to make it more comfortable.

 

Yasaswini’s Status Report for 5/8

Completed this week:

  • Did more real time testing
    • Focused on debugging why it wouldn’t recognize the color
    • Went through the pictures taken and tried to see what was being identified/cropped
      • Traffic light is being recognized and cropped but the color isn’t being detected in some cases
  • Helped assemble the Raspberry Pi and the camera to the belt
    • Used sticky glue/velcro to get it to stick
    • Tested out the fit

For Next Week:

  • Prepare for the final presentation!
    • Film the video
    • Work on the poster

Team Status Report 5/1

This week we integrated all our parts and debugged any issues that arose.

Risks and Mitigations

We found that the Raspberry Pi processor does not run the TensorFlow application quickly. Our latency is ~26 seconds, which is far above what we expected. We are going to prioritize latency over accuracy for now.

Schedule

Down to the last week, we are going to clean up as many issues as possible, including latency and accuracy. We will finish up testing and prepare the PowerPoint, poster, and video.

Jeanette’s Status Report 5/1

This week:

  • Added Yasaswini's and Ricky's code to the existing Python script
  • Downloaded the necessary libraries for TensorFlow and OpenCV
  • Got the device working while connected to a computer
  • Debugged the headless application
  • Tried out different TensorFlow installations to improve latency
  • Added quality sounds to the device

Video of everything:

https://drive.google.com/file/d/1DA2z_qaKQITt7os435voEoPVuz3JcVrX/view?usp=sharing

Latency is still extremely high at ~26 seconds per run

  • Tested at stoplight but did not work

Image of the stoplight at the test location

Next Week:

  • Attempt to decrease latency
  • Connect device to belt
  • Prepare deliverables for finals week

Shayan’s Status Update for 5/1

PROGRESS MADE

My sole focus this week was again the Look Light traffic light detection and color classification algorithm. The one difference this week versus previous weeks is that I've been using the snipped traffic light images output by the CV algorithm rather than the full scene.

My approach thus far has essentially been to apply my existing algorithm once per color filter ("red", "yellow", "green"), count the circles identified under each filter, and pick the color that yielded the most circles.

This week, I tested the effectiveness of this algorithm against the outputs of the CV algorithm. Unfortunately, the accuracy was not consistent given new factors that have risen from the CV outputs. This wasn’t all bad news, however, because the detection (or lack thereof) of non-Look Lights was unchanged. The changes in accuracy just affected the classification of the color of the Look Lights.

The biggest issue I faced was with the CV algorithm's snipping itself. Sometimes the algorithm did not capture all three lights in the traffic light (Problem 1). Sometimes it cut off a portion of the lights, so they were no longer entirely circular (Problem 2).

I was able to correct the accuracy issues from Problem 1, assuming the illuminated light was preserved in its entirety. I simply modified my algorithm's thresholding to ensure that only the illuminated light surpasses the threshold. For any image containing the entire illuminated light, the output of this threshold is therefore essentially a colored circle.
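A minimal sketch of that thresholding fix, where the percentile cutoff is an assumed stand-in for the actual tuned threshold:

```python
import numpy as np

def illuminated_mask(gray, percentile=99.0):
    """Keep only the brightest pixels of a snipped traffic light.

    For an image that contains the whole illuminated lamp, what survives
    this threshold is essentially one colored circle. The percentile
    cutoff is an assumption standing in for our tuned threshold.
    """
    cutoff = np.percentile(gray, percentile)
    return gray >= cutoff
```

Using a percentile rather than a fixed intensity makes the cutoff adapt to overall scene brightness, which matters since the snips come from outdoor scenes with varying exposure.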

Problem 2 has been harder to solve, particularly because there are no longer complete circles; the Hough transform now has to look for curved edges instead. Unfortunately, reflections of the colored illumination on the traffic light fixture also show up as colored, curved shapes. This is currently the largest source of color misclassification.

 

PROGRESS

I was derailed a bit adjusting to the new issues in the snipped traffic light outputs. I'm hopeful they will be resolved in the coming days (see the plan below).

DELIVERABLES FOR NEXT WEEK

To address these new problems and to leverage the fact that the input images reliably contain (at least portions of) traffic lights, I will see how much accuracy I can gain by scrapping circle detection altogether and relying primarily on color filtering and thresholding. In other words, this alternative algorithm will still filter on red, yellow, and green in series with thresholding, but will classify the color as the argmax over {red, yellow, green} of the count of pixels remaining after thresholding.

Yasaswini’s Status Report for 5/1

Completed this week:

  • Combined and integrated Ricky’s and my code into one file so that they both work in conjunction
    • Changed my code so the model wouldn’t have to be downloaded every time it ran and so that it wouldn’t have to use the internet
  • Tested out the latency and cut down/changed certain code in order to make the model run faster
    • Got the time down to 10 – 12 sec total, which is still a large amount of time
  • Tested the code along with Jeanette and debugged through Raspberry Pi problems
    • The code successfully ran and outputted the correct color of the traffic light with the Raspberry Pi connected to a computer, but it isn’t working when not connected to a computer
      • Narrowed the problem down to an issue with the code running on rc.local
  • Below is the image the model takes in and the output of our code, along with the latency (which in this case doesn't include the camera time):
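Since the headless failure was narrowed down to rc.local, a minimal /etc/rc.local sketch for launching the script at boot is shown below. The paths and log filename are placeholders, not our actual ones; the usual pitfalls are relative paths (rc.local runs as root with a minimal environment) and error output that silently disappears.

```shell
#!/bin/sh -e
# /etc/rc.local (sketch): launch the detection script at boot.
# Use absolute paths everywhere, background the long-running script so
# boot can finish, and redirect output to a log so failures are visible.
/usr/bin/python3 /home/pi/project/main.py >> /var/log/lookout.log 2>&1 &
exit 0
```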

For Next Week:

  • Need to get the Raspberry Pi to run the code without it being connected to a computer
  • Need to actually test the device at a stoplight to get real time accuracy statistics

Team Status Report for 4/24

STATUS
  • Traffic Light CV – Yasaswini
    • Functional CV model to detect boundaries of traffic lights in a scene (yay!)
  • Look Light Detection – Shayan
    • Improvements on traffic light detection
    • Work on detection using full scene in case CV doesn’t properly work
  • Raspberry Pi Interface – Jeanette
    • Validation testing of camera and button setup.
    • Beginning to test integration with the code currently in the GitHub.
CHANGES TO THE SYSTEM
  • Nothing new this week 🙂
CURRENT RISKS AND MITIGATIONS
  • No new risks/mitigations this week. Shayan still needs to validate his algorithm using the outputs of Yasaswini's CV as input.