Team Status Report for 4/24

STATUS
  • Traffic Light CV – Yasaswini
    • Functional CV model to detect boundaries of traffic lights in a scene (yay!)
  • Look Light Detection – Shayan
    • Improvements on traffic light detection
    • Work on detection using the full scene in case the CV model doesn’t work properly
  • Raspberry Pi Interface – Jeanette
    • Validation testing of camera and button setup.
    • Beginning testing code integration with the code in the GitHub right now.
CHANGES TO THE SYSTEM
  • Nothing new this week 🙂
CURRENT RISKS AND MITIGATIONS
  • No new risks/mitigations this week. Shayan needs to validate his algorithm using the outputs of Yasaswini’s CV model as input.

Shayan’s Status Update for 4/24

Progress Made

My sole focus area this week was the Look Light traffic light detection and Traffic Light Color algorithm. This image-processing / pattern-recognition algorithm is based on the Hough transform, which returns the positions of circles with radii in a specified range found within an input image. Because the ML model might not be trained for some time, I decided to use my existing algorithm to deduce the color of the light from the image of the whole scene, as opposed to individual lights.

My approach was essentially to apply my existing algorithm once per color filter (“red”, “yellow”, “green”), count the circles identified under each filter, and pick the color that produced the most circles.

Results:

  • No significant change in accuracy *
  • Increased latency (took 0.1-0.2 seconds to run for a full scene)

* On a side note, I was able to improve accuracy since the last status update by increasing the intensity of the median blur (larger kernel) to reduce the effect of noise that was being erroneously tagged as circular. (One example of such erroneous tagging was the “detection” of circles in patches of leaves/shrubbery.)
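The per-color voting described above can be sketched roughly as follows. This is a dependency-free sketch, not the actual implementation: the hue ranges are assumed example values, and `count_circles` is a stand-in for the real pipeline (median blur plus `cv2.HoughCircles` on the masked image), replaced here with a pixel count so the snippet runs without OpenCV.

```python
import numpy as np

# Assumed example hue ranges (OpenCV-style 0-179 hue scale); the real
# thresholds were tuned empirically. Red wraps around the hue circle.
HUE_RANGES = {
    "red": [(0, 10), (170, 179)],
    "yellow": [(20, 35)],
    "green": [(45, 90)],
}

def color_mask(hsv, color):
    """Boolean mask of pixels whose hue falls in the color's range(s)."""
    hue = hsv[..., 0]
    mask = np.zeros(hue.shape, dtype=bool)
    for lo, hi in HUE_RANGES[color]:
        mask |= (hue >= lo) & (hue <= hi)
    return mask

def count_circles(mask):
    """Stand-in for the real detector (median blur + cv2.HoughCircles
    on the color-masked image); here we just count masked pixels so
    the sketch stays runnable without OpenCV installed."""
    return int(mask.sum())

def classify_light(hsv):
    """Apply the algorithm once per color filter and pick the color
    that produced the most detections."""
    votes = {c: count_circles(color_mask(hsv, c)) for c in HUE_RANGES}
    return max(votes, key=votes.get)
```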

With Yasaswini’s recent progress on her algorithm, mine will no longer have to search for circles in the entire scene, as originally intended. I anticipate accuracy will increase once I test my algorithm integrated with hers.

Progress

I made progress toward actually outputting a color rather than a list of circles, and toward slightly improving accuracy. Seems like the right track. 🙂

Deliverables for Next Week
  • Testing and further refinement using the validation set of traffic lights detected by Yasaswini’s algorithm, which she uploaded to the GitHub.

Yasaswini’s Status Report for 4/24

Completed this week: 

  • Finished debugging the implementation of the training model and ran it!
    • Fed it all the images that we manually took at the stoplight (picture below is an example)

  • Outputted a cropped image of the traffic light for each image that was fed in
  • Created an output folder with the cropped images for further processing/testing (detection algorithm)
  • In all the pictures run, there were definitely cases that weren’t perfect
    • Identifying the side of a stoplight facing the wrong direction – this is not a hazard, because in this case the system would simply report that it hasn’t detected anything, since the lights aren’t visible at all
    • Not being able to identify the traffic light because it was very distant or at an odd angle
      • We will need to adjust for these cases to improve our system’s overall accuracy
    • One beneficial thing is that, so far, the system hasn’t identified the WRONG traffic light in a picture, which is good from a safety standpoint
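As a rough sketch of the cropping step, assuming the model outputs boxes as normalized (ymin, xmin, ymax, xmax) coordinates (a common TensorFlow object-detection convention; the exact output format depends on the model in use):

```python
import numpy as np

def crop_detection(image, box):
    """Crop the detected traffic light out of the full frame.

    `box` = (ymin, xmin, ymax, xmax) in normalized [0, 1] coordinates,
    an assumed convention; adjust if the model emits pixel coords."""
    h, w = image.shape[:2]
    ymin, xmin, ymax, xmax = box
    y0, y1 = int(round(ymin * h)), int(round(ymax * h))
    x0, x1 = int(round(xmin * w)), int(round(xmax * w))
    return image[y0:y1, x0:x1]
```

Each crop would then be written to the output folder for the downstream detection algorithm.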

For Next Week: 

  • Next week is primarily focused on integration and testing
    • As can be seen, some of the pictures aren’t cropped perfectly, although my algorithm tries to be as accurate as possible. This could be a potential issue and needs to be tested along with Ricky’s algorithm to see whether our system works end to end
    • Also, some pictures taken at dawn were not output as accurately by the training model, potentially because the rays of sunlight made detection difficult
      • Thus these algorithms need to be tested with the Raspberry Pi’s camera next week, because the images we tested with were taken with Ricky’s phone, which likely has a higher-resolution camera

Team Status Report for 4/10

STATUS
  • Traffic Light CV – Yasaswini
    • Integrated git with the cloud, so it’s all set up for the model to start training as soon as the current algorithm is done
  • Look Light Detection – Shayan
    • Mostly working Look Light traffic light detection via the Traffic Light Color algorithm, though it may need some modifications
  • Raspberry Pi Interface – Jeanette
    • Set up the camera and the button on the Raspberry Pi, and verified that the camera is working. With the battery pack connected as well, it’s basically ready to go.
CHANGES TO THE SYSTEM
  • Nothing new this week
CURRENT RISKS AND MITIGATIONS
  • Yasaswini – need to see what the training accuracy is on the images and need this going into next week
  • Shayan – need to further experiment with the angle/light to make sure all situations are accounted for

Shayan’s Report for 4/10

Progress Made

My sole focus area this week was the Look Light traffic light detection and Traffic Light Color algorithm. This image-processing / pattern-recognition algorithm is based on the Hough transform, which returns the positions of circles with radii in a specified range found within an input image.

My work was broken up into the following areas:

  • Light Exposure (concern for Look Light and State Detection)
    • A holdover problem from last week was overexposed images in lower-light settings (i.e. some of my validation photos taken at dusk). In these photos, the lights were white toward the center, and therefore not preserved when passed through a red, yellow, and/or green filter. My first stab at fixing this was to regularize the maximum RGB intensity of each photo through some simple pixelwise arithmetic against a reference intensity. This technique, after experimenting with various reference intensities, yielded only marginal improvement. Similarly, converting to HSV and doing something analogous did not yield further significant improvement. Adding a white filter a la the red, yellow, and green filters did not help either, and actually worsened accuracy for some photos.
      I have not yet tested using a threshold on RGB or HSV intensity in combination with the techniques above, so I will be testing this strategy next week.
  • Look Angle (only problem for Look Light)
    • I experimented with various parameter ranges/thresholds in the Hough transform to be more “lenient” in detecting curved edges that don’t strictly form perfect circles. In combination with the red, yellow, and green filters, there was some success, but the parameters need further fine-tuning.
  • Possible Amended Combined Approach
    • With regularization of the actual size of the traffic light (assuming Yasaswini’s algorithm passes just the light in the scene to this algorithm), I hypothesize that simply applying each color filter and taking the sum of the resultant pixel values will yield the Look Light’s color with the highest probability. I will test this out next week.
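Two of the ideas above can be sketched in a few lines. Both are hedged sketches with assumed details (the reference intensity value and the mask format), not the final implementation:

```python
import numpy as np

def regularize_exposure(img, ref_max=220.0):
    """Pixelwise scaling so the brightest channel value hits a
    reference intensity. ref_max is an assumed example value; the
    report tried several reference intensities."""
    scale = ref_max / max(float(img.max()), 1.0)
    return np.clip(img.astype(np.float32) * scale, 0, 255).astype(np.uint8)

def light_color_by_mass(filtered):
    """Amended combined approach: `filtered` maps a color name to the
    image after that color's mask (non-matching pixels zeroed). Pick
    the color with the largest total surviving intensity -- no Hough
    transform needed once the scene is cropped to just the light."""
    sums = {color: float(img.sum()) for color, img in filtered.items()}
    return max(sums, key=sums.get)
```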
Progress

Not as much progress as desired. Improvements w.r.t. exposure and look angle were smaller than hoped. Hopefully the amended combined approach proves a useful experiment; if accurate, it will certainly also cut down on latency, a metric I haven’t paid much attention to as of yet.

Deliverables for Next Week
  • Experimentation with amended combined approach, above.
  • Further improvements w.r.t exposure and look angle.

Jeanette’s Status Report 4/10

This week:

  • Connected the camera and button to the Raspberry Pi
  • When the button is clicked, the camera takes a one-second video
  • Tested the battery pack successfully
  • Attempted to program the Raspberry Pi headlessly
  • Added sound to say “ready” when the button is pressed

Next week

  • Headlessly run the program successfully
  • Test on an actual street
  • Connect the image processing from Ricky

Yasaswini’s Status Report for 4/10

Completed this week:

  • Changed the model to MobileNet v2 SSD COCO from ssd_mobilenet_v2_coco because it has a better balance between speed and accuracy for running specifically on a Raspberry Pi
  • Ran into TensorFlow issues on the AWS cloud with the server our instance is on, so for now I worked on the desktop
  • Configured the new model and set up the classes in the Python code along with a few of the main functions
  • The model is being implemented through a Transfer Learning Framework
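Assuming the transfer-learning framework here is the TensorFlow Object Detection API, the key fields of its `pipeline.config` would look roughly like the fragment below; the checkpoint path and values are placeholders, not our actual configuration:

```
model {
  ssd {
    num_classes: 1  # single class: traffic light
    ...
  }
}
train_config {
  # Start from the pretrained COCO checkpoint and fine-tune
  fine_tune_checkpoint: "path/to/ssd_mobilenet_v2_coco/model.ckpt"
  ...
}
```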

For next week:

  • Finish the code for the pipeline (feeding the images into the model)
  • Output the image and see if it’s being classified correctly
  • Make any changes needed to the code based on those results


Team Status Report for 4/3

THIS WEEK…
  • Traffic Light CV – Yasaswini
    • With AWS credits procured, need to finalize AWS integration with Git development to facilitate cloud computing
  • Look Light Detection – Shayan
    • With understanding of Hough Transform, need to optimize Look Light search with preprocessing input to Hough Transform
  • Raspberry Pi Interface – Jeanette
    • Now that we have the parts and components, need to set up RPi and interface components with it
STATUS…
  • Yasaswini – AWS cloud deployment setup finalized, integrated COCO dataset with AWS API, and began implementing training model with AWS setup in mind
  • Shayan – further optimized look light detection to more accurately search for the colored lights in traffic lights through image size regularization as well as red, yellow, and green color masks/filters
  • Jeanette – completed Raspberry Pi setup, RPi camera setup and verification, acquired remaining components
CHANGES TO THE SYSTEM
  • Nothing new this week 🙂
CURRENT RISKS AND MITIGATIONS
  • Yasaswini – when running the training next week and checking accuracy, we will be able to better determine shortcomings in the CV algorithm
  • Shayan – need to further preprocess images to account for excess light / saturation in photos

Shayan’s Status Report for 4/3

Progress Made

Continuing from last week, my sole focus area this week was the Look Light traffic light detection algorithm. Whereas last week dealt more with understanding the Hough transform itself, this past week dealt more with fine-tuning the transform’s parameterization and preprocessing the image inputs to it.

Problems going into this week (recap from my last report):
  • Detecting multiple circles
  • Detecting just circles instead of circles and various curved edges

After last week, I had partially resolved these issues, but circles were still being mislabeled in simpler images of traffic lights, as in Fig 1 below. Notice how, for the yellow light, the transform detected the curved edge of the light fixture above the light (in which yellow light was reflected) rather than the light itself.

Fig 1. Somewhat Faulty Hough Transform Output
Preprocessing
  • Image Size Regularization
    • The Hough transform looks for circles with radii in a defined range. Given that image sizes may vary, it is difficult to specify a range that will detect the circles of the lights in every image of a traffic light. Empirically, I settled on resizing incoming images to the same height (~400 px).
  • Color Filtering
    • Because the Hough transform is meant to detect the circular lights themselves, I can leverage the fact that said lights are colored red, yellow, or green. As such, I implemented these color filters, which dramatically increased accuracy on the simple image. The filters basically operate as masks that “filter” via a bitwise/pixelwise logical AND. The thresholds for each mask are obtained by converting the image to HSV and looking for HSV values in pre-specified ranges of red, yellow, and green.
Fig 2. With Image Regularization and Red Filter
Fig 3. With Image Regularization and Yellow Filter
Fig 4. With Image Regularization and Green Filter


Fig 5. With Image Regularization and R, Y, G Filters Combined 
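The image size regularization described above can be sketched as follows. This uses a simple nearest-neighbor resize in NumPy so the snippet is self-contained (the actual code presumably uses `cv2.resize`); the 400 px target height comes from the empirical tuning mentioned above.

```python
import numpy as np

def resize_to_height(img, target_h=400):
    """Nearest-neighbor resize to a fixed height, preserving aspect
    ratio, so that a single Hough radius range works across inputs
    of varying sizes."""
    h, w = img.shape[:2]
    target_w = max(1, round(w * target_h / h))
    # Map each output row/column back to its nearest source index
    rows = (np.arange(target_h) * h) // target_h
    cols = (np.arange(target_w) * w) // target_w
    return img[rows][:, cols]
```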
Testing on Less Perfect Images
  • Presents new kinks to work out (various lighting environments seem to be posing the next biggest challenge) (see Deliverables section below)
  • Success… but only using the red filter.
    • Problematic, because each image needs to be filtered for all possible light colors. See current results for just the red filter versus the red, yellow, and green filters applied below. (Notice how much green there is in the picture from the well-exposed leaves.)
Fig 6. Erroneous Detection with R+Y+G filters
Fig 7. Near Perfect Detection with just R filter

Progress

Look Light detection progress may seem slow. However, because it incorporates color filtering, this progress may also help me with light color detection in the future. (It’s already distinguishing the colors of simple lights, as shown in Figs 2-4.)

Deliverables for Next Week

  • Main priority: Keep optimizing the look light algorithm to account for patches of high intensity / saturation.

Addendum

Figure 8: Current Runtime (sec) of Each Subroutine
Figure 9: Image after R,Y,G masks