Shayan’s Status Report for 5/8

PROGRESS MADE

My sole focus this week was the Look Light traffic light detection cum Traffic Light Color algorithm. As mentioned last week, my most prominent objective was to increase the classification accuracy of the algorithm given the outputs from the CV algorithm.

To address these new problems and leverage the fact that the images reliably contain only (at least portions of) traffic lights, I explored how much accuracy I could gain by scrapping circle detection altogether and relying primarily on color filtering and thresholding. In other words, this alternative algorithm would still filter on red, yellow, and green in series with thresholding, but now use the argmax over {red, yellow, green} of the sum of remaining pixels post-thresholding to determine the color.

Results

This approach did increase the classification accuracy. The previous iteration's accuracy with the inputs passed in from the CV algorithm was unfortunately around 50% on our validation set, which is unacceptable. The new algorithm's accuracy increased to nearly 90%, close to our overall goal metric from our Design. The full results are reported below:

Overall:
  Non-Look Light: 14
  Look Light:     29
  Total:          43

Of Look Lights:
             Actual R   Actual Y   Actual G
  Output R      14          2          0
  Output Y       0          2          0
  Output G       1          0         10

  Correct:    0.896552
  False Pos:  0.034483
  False Neg:  0.068966
PROGRESS

The LookLight algorithm is now in a (more) acceptable state for the Final demo. 🙂

DELIVERABLES FOR NEXT WEEK

Fix any small problems when integrating.

Present / demo our findings! 🙂

Shayan’s Status Update for 5/1

PROGRESS MADE

My sole focus this week was further work on the Look Light traffic light detection cum Traffic Light Color algorithm. The one difference this week versus previous weeks is that I've been using the snipped traffic light images from the CV algorithm's output rather than the full scene.

My approach thus far has essentially been to apply my existing algorithm with each individual color filter (“red”, “green”, “yellow”), get the number of circles identified, and pick the color that resulted in the most circles.

This week, I tested the effectiveness of this algorithm against the outputs of the CV algorithm. Unfortunately, the accuracy was not consistent given new factors that have arisen from the CV outputs. This wasn't all bad news, however, because the detection (or lack thereof) of non-Look Lights was unchanged. The changes in accuracy only affected the classification of the color of the Look Lights.

The biggest issue I faced was with the CV algorithm's snipping itself. Sometimes the algorithm did not capture all three lights in the traffic light (Problem 1). Sometimes the algorithm cut off a portion of the lights, so they were no longer entirely circular (Problem 2).

I was able to correct the accuracy issues from Problem 1, assuming that the illuminated light was preserved in its entirety. I simply modified my algorithm's thresholding to ensure that only the illuminated light surpasses the threshold. Therefore, the output of this threshold, for all images with the entire illuminated light, is basically a colored circle.

Problem 2 has been harder to solve, particularly because there are no longer complete circles; rather, the Hough Transform now has to look for curved edges. Unfortunately, reflections of the colored illumination on the traffic light fixture also show up as colored, curved shapes. This is the largest current source of color misclassification.

PROGRESS

Derailed a bit to adjust to new issues in the snipped traffic light outputs. Hopeful they will be resolved in the coming days (see plan, below).

DELIVERABLES FOR NEXT WEEK

To address these new problems and leverage the fact that the images reliably contain only (at least portions of) traffic lights, I will see how much accuracy I can gain by scrapping circle detection altogether and relying primarily on color filtering and thresholding. In other words, this alternative algorithm will still filter on red, yellow, and green in series with thresholding, but will now use the argmax over {red, yellow, green} of the sum of remaining pixels post-thresholding to determine the color.

Shayan’s Status Update for 4/24

Progress Made

My sole focus this week was further work on the Look Light traffic light detection cum Traffic Light Color algorithm. This image processing / pattern recognition algorithm is based on the Hough transform, which returns the positions of circles with radii within a specified range found in an input image. Because the ML model potentially would not be trained in time, I decided to work on using my existing algorithm to deduce the color of the light from the image of the whole scene as opposed to individual lights.

My approach was essentially to apply my existing algorithm with each individual color filter (“red”, “green”, “yellow”), get the number of circles identified, and pick the color that resulted in the most circles.

Results:

  • No significant change in accuracy *
  • Increased latency (took 0.1-0.2 seconds to run for a full scene)

* On a side note, I was able to improve accuracy since the last status update by increasing the intensity of the median blur (larger kernel) to reduce the effect of noise that would be erroneously tagged as circular. (One example of such erroneous tagging was the “detection” of circles in patches of leaves/shrubbery.)

With Yasaswini’s recent progress on her algorithm, my algorithm will no longer have to search for circles in the entire scene, as originally intended. I anticipate that the accuracy will increase as I test my algorithm integrated with Yasaswini’s.

Progress

Progress made to actually output a color as opposed to a list of circles. Progress made to slightly improve accuracy. Seems like the right track. 🙂

Deliverables for Next Week
  • Testing and further refinement with validation set traffic lights detected from Yasaswini’s algorithm, which she uploaded to the GitHub.

Shayan’s Report for 4/10

Progress Made

My sole focus this week was further work on the Look Light traffic light detection cum Traffic Light Color algorithm. This image processing / pattern recognition algorithm is based on the Hough transform, which returns the positions of circles with radii within a specified range found in an input image.

My work was broken up into the following areas:

  • Light Exposure (concern for Look Light and State Detection)
    • A holdover problem from last week was with overexposed images in lower-light settings (i.e. some of my validation photos taken at dusk). In these photos, the lights were white toward the center, and therefore not preserved when passed through a red, yellow, and/or green filter. My first stab at fixing this was to regularize the maximum RGB intensity of each photo through some simple pixelwise arithmetic against a reference intensity. This technique, after experimenting with various reference intensities, yielded marginal improvement. Similarly, converting to HSV and doing the same did not yield further significant improvements. Adding a white filter a la the red, yellow, and green filters did not yield any further improvement and actually worsened the accuracy for some photos.
      I have not yet tested using a threshold of RGB or HSV intensity in combination with any of the techniques above, so I will be testing this strategy next week.
  • Look Angle (only problem for Look Light)
    • I experimented with various parameter ranges / thresholds in the Hough transform to be more “lenient” in detecting curved edges that don’t strictly form perfect circles. In combination with the red, yellow, and green filters, there was some success, but the parameters need to be further fine-tuned.
  • Possible Amended Combined Approach
    • With regularization of the actual size of the traffic light (assuming that Yasaswini’s algorithm passes the image of just the light in the scene to this algorithm), I hypothesize that simply applying each color filter and taking the sum of the resultant pixel values will yield the Look Light’s color with the highest probability. I will test this out next week.
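The intensity regularization tried above can be sketched as follows; the reference intensity of 200 and the function name are illustrative assumptions, not the values I experimented with:

```python
import numpy as np

def normalize_exposure(bgr, reference=200):
    """Rescale an image so its max channel intensity matches a reference
    value; reference=200 is an illustrative choice, not a tuned one."""
    peak = bgr.max()
    if peak == 0:
        return bgr.copy()   # all-black image: nothing to scale
    scaled = bgr.astype(np.float32) * (reference / peak)
    return np.clip(scaled, 0, 255).astype(np.uint8)
```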
Progress

Not as much progress as desired. Improvements w.r.t. exposure and look angle were not as high as desired. Hopefully, the amended combined approach is a useful experiment. If accurate, it certainly will cut down on latency too, a metric I haven’t been paying too much attention to as of yet.

Deliverables for Next Week
  • Experimentation with amended combined approach, above.
  • Further improvements w.r.t exposure and look angle.

Shayan’s Status Report for 4/3

Progress Made

Continuing from last week, my sole focus this week was the Look Light traffic light detection algorithm. Whereas last week dealt more with better understanding the Hough Transform itself, this past week dealt more with fine-tuning the Transform’s parameterization and preprocessing image inputs to the Transform.

Problems going into this week (recap from my last report):
  • Detecting multiple circles
  • Detecting just circles instead of circles and various curved edges

After last week, I was able to partially resolve these issues, but circles were still being mislabeled in simpler images of traffic lights, as in Fig 1 below. Notice how, for the yellow light, the transform detected the curved edge of the light fixture above the light (in which the yellow light was reflected) rather than the light itself.

Fig 1. Somewhat Faulty Hough Transform Output
Preprocessing
  • Image Size Regularization
    • The Hough Transform looks for circles with radii in a defined range. Given that image sizes may vary, it is difficult to specify a range that will detect circles in the lights in every image of a traffic light. Through empirical means, I settled on resizing incoming images to the same height (~400 px).
  • Color Filtering
    • Because the Hough Transform is meant to detect the circular lights themselves, I can leverage the fact that said lights are colored red, yellow, or green. As such, I implemented these color filters, which dramatically increased accuracy on the simple image. The filters basically operate as masks that “filter” via a bitwise/pixelwise logical AND. The thresholds for each mask are obtained by converting the image to HSV and looking for HSV values in pre-specified ranges of red, green, and yellow.
Fig 2. With Image Regularization and Red Filter
Fig 3. With Image Regularization and Yellow Filter
Fig 4. With Image Regularization and Green Filter


Fig 5. With Image Regularization and R, Y, G Filters Combined 
Testing on Less Perfect Images
  • Presents new kinks to work out (various lighting environments seem to be posing the next biggest challenge) (see Deliverables section below)
  • Success… but only using the red filter.
    • Problematic because each image needs to be filtered for all possible light colors. See the current results with just the red filter versus the red, yellow, and green filters applied below. (Notice how there’s a lot of green in the picture from the well-exposed leaves.)
Fig 6. Erroneous Detection with R+Y+G filters
Fig 7. Near Perfect Detection with just R filter

Progress

Look Detection algorithm progress may seem slow. However, because it incorporates color filtering, this progress may also help me with light color detection in the future. (It’s already distinguishing the colors of simple lights, as shown in Figs 2-4.)

Deliverables for Next Week

  • Main priority: Keep optimizing the look light algorithm to account for patches of high intensity / saturation.

Addendum

Figure 8: Current Runtime (sec) of Each Subroutine
Figure 9: Image after R,Y,G masks

Shayan’s Status Report for 3/27

Progress Made

My sole focus this week was the Look Light traffic light detection algorithm. This image processing / pattern recognition algorithm is based on the Hough transform, which returns the positions of circles with radii within a specified range found in an input image.

My work was broken up into the following areas:

  • Detecting circles in general
    • It took no time to detect a sole circle in a picture of a single circle. Detecting circles got harder with multiple circles in the picture. I was surprised to learn that the Hough Transform sometimes misses a circle within a group of concentric circles. I learned to optimize the Hough transform’s radii parameters to detect all circles in images with multiple circles.
  • Detecting circles in traffic light images
    • My learnings and parameterization of the Hough Transform for pictures of circles transferred to images of traffic lights as well as traffic lights side by side (if you can imagine differently-facing traffic lights for perpendicular directions hung right next to each other).
  • Increasing accuracy in traffic light circle detection
    • I found that the Hough Transform, in certain lighting conditions, sometimes “detects” curved edges as part of a circle and therefore “detects” circles that aren’t in the image. I am currently trying to determine, through empirical means, how to isolate just the physical light within the traffic light. I am leveraging the fact that the light of interest is illuminated, meaning its pixels are of higher intensity (i.e. I can use some kind of filtering/thresholding). This lets me run the Hough Transform on just the isolated light portion of an image.
Progress

Reasonable progress made towards having a robust look light detection algorithm, so I believe I’m still on track. 🙂

Deliverables for Next Week
  • Main priority: Keep optimizing the look light algorithm. Figure out how to best filter/threshold the traffic light image to isolate the light. Then, reassess and fine tune the Hough Transform parameters as needed.

Shayan’s Status Report for 3/13

Progress Made

My main focus this week was working on the traffic light detection algorithm’s validation set. Much of the work was hand-tagging traffic lights and, per the suggestion from our Design Review feedback to detect more than just traffic lights, also crossing lights, cars, and other pedestrians.

Additionally, I began working on the look-direction traffic light detection algorithm, which takes in a cropped image of all the traffic lights in a scene and deduces which of the lights, if any, is the light the user is facing. My current implementation uses the Hough circle transform (available in the OpenCV library) and looks for the traffic light in which more of the lights’ circular shapes are present, given the angle at which the user is facing the light.

Progress

On a changed schedule – instead of myself and Yasaswini working on the traffic light detection algorithm first, I am working on the separate light state detection and look direction algorithms while Yasaswini, with Jeanette’s assistance, works on the light detection.

Deliverables for Next Week
  • Zeroth priority: further refine (and FINALIZE) our requirements when meeting with Tom and Rashmi
  • First priority: Continue working on and debugging the look direction algorithm

Shayan’s Status Report for 3/6

Progress Made

My main focus this week was with prepping and structuring the CV algorithm. I’ve been coding the training algorithm using the TensorFlow library and mainly debugging on smaller datasets (so I can run locally). 

I also helped the team finalize our parts order, which was sent out!

Progress

On Schedule! 🙂

Deliverables for Next Week
  • Procure AWS credits and find out how to train model using AWS
  • Train CV model on COCO dataset using TensorFlow
  • Hopefully be able to do some validation with the photos I took too

Shayan’s Status Report for 2/27

Progress Made

My main focus this week was with data collection. I continued to amass photo datasets from online as well as take photos of my own. I made sure to visit a variety of intersections with just traffic lights (including Ellsworth/Morewood, Ellsworth/Amberson, Ellsworth/Aiken, Fifth/Amberson, Fifth/Wilkins). We had some rare sunny weather, so I seized the opportunity to snap some pics then. I also got pics during the normal Pittsburgh overcast, in some light rain, and at night. I took photos at various angles and approaches, as well as with different light stages.
Link to photos so far

Progress

On Schedule! 🙂

Deliverables for Next Week
  • Can always get more photos with more varied weather. (Heavier rain in the forecast!)
  • Set up CV architecture and reacquaint myself with OpenCV

Shayan’s Status Report for 2/20

This week, I did the following:
  • Project Proposal refinement and Scoping
    • Worked with the Team as well as Dr. Sullivan and Rashmi to continue to narrow the scope of our MVP, starting from a short list of possible features to deciding on telling the user when to cross at a specific “ElMo” type intersection
  • Literature Review / Data Collection
    • Helped identify traffic datasets for CV
      • API with Google Street View for images of specific intersection
      • Microsoft et al.’s COCO dataset with tagged photos of traffic lights
  • Prep work for circuitry assembly
    • Selection of speaker for user interface
Perceived Team Progress:

On Track 🙂

Deliverables next week:
  • Data Collection
    • Collect pictures of “ElMo” intersections (incl. Ellsworth/Morewood, Ellsworth/Amberson, Fifth/Amberson) at various angles/vantage points, lighting conditions, weather conditions
    • Tagging the traffic lights in said photos
  • Project Requirements
    • Finalize, through literature review and similar past projects, the quantitative benchmarks that are required by the user (i.e. data rate, power consumption, accuracy)
  • Device Components
    • Finalize parts list with team and order