Yasaswini’s Status Report for 4/24

Completed this week: 

  • Finished debugging the implementation of the training model and ran it!
    • Fed it all of the images that we manually took at the stoplight (the picture below is an example)

  • Output a cropped image of the traffic light for each image that was fed in
  • Created an output folder with the cropped images for further processing/testing (detection algorithm) – a sketch of this crop-and-save step follows this list
  • Across all of the pictures we ran, there were definitely cases that weren't perfect
    • Identifying the side of a stoplight facing the wrong direction – this is not a hazard, because in that case the system simply reports that it hasn't detected anything, since the lights aren't visible at all
    • Failing to identify the traffic light because it was very distant or viewed from an unusual angle
      • We will need to adjust for these cases to improve our system's overall accuracy
    • One beneficial thing is that so far the system hasn't identified the WRONG traffic light in a picture, which is good from a safety standpoint
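To make the handoff to the detection algorithm concrete, below is a minimal sketch of the crop-and-save step, assuming OpenCV. The detect_traffic_lights helper is just a crude color-mask stand-in for our trained model, and the folder names and size threshold are hypothetical:

```python
import os
import cv2

INPUT_DIR = "stoplight_images"   # hypothetical folder of manually taken pictures
OUTPUT_DIR = "cropped_lights"    # cropped results for the detection algorithm

def detect_traffic_lights(image):
    """Stand-in for the trained model: returns (x, y, w, h, score) boxes.

    A crude HSV color mask finds bright, saturated blobs; the real
    pipeline would use the trained model's bounding boxes instead.
    """
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 120), (179, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 100:                              # ignore tiny specks
            boxes.append((x, y, w, h, float(w * h)))  # area as a crude score
    return boxes

os.makedirs(OUTPUT_DIR, exist_ok=True)

for name in os.listdir(INPUT_DIR):
    image = cv2.imread(os.path.join(INPUT_DIR, name))
    if image is None:
        continue  # skip files that aren't readable images

    boxes = detect_traffic_lights(image)
    if not boxes:
        continue  # nothing detected (e.g. light facing away): report nothing

    # Keep only the best-scoring box so we never crop the wrong light.
    x, y, w, h, _ = max(boxes, key=lambda b: b[4])
    cv2.imwrite(os.path.join(OUTPUT_DIR, name), image[y:y + h, x:x + w])
```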

For Next Week: 

  • Next week is primarily focused on integration and testing
    • As can be seen, some of the pictures aren't cropped perfectly, although my algorithm tries to be as accurate as possible. This is a potential issue and needs to be tested together with Ricky's algorithm to see whether our combined system works
    • Also, some of the pictures taken at dawn weren't output as accurately by the training model, potentially because the rays of sunlight made detection difficult
      • These algorithms therefore need to be tested with the Raspberry Pi's camera next week, because the images we tested with were taken on Ricky's phone, which could have a higher resolution (a capture sketch is below)
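As a starting point for that test, here is a minimal capture sketch, assuming the picamera library on the Raspberry Pi; the resolution, delays, and filenames are placeholder assumptions, not settled choices:

```python
import time
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1920, 1080)   # assumed; the phone shots may be higher-res
camera.start_preview()
time.sleep(2)                      # let auto-exposure settle (matters at dawn)

for i in range(5):
    # Capture stills to feed into the same crop pipeline as the phone images.
    camera.capture('pi_test_%02d.jpg' % i)
    time.sleep(1)

camera.stop_preview()
camera.close()
```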
