Team Status Report for 5/8

This week we focused on debugging our device and on assembling the belt and attaching the Raspberry Pi + camera to it. We tested the device at the stoplight again, found that it was having problems in some cases, and debugged those.

Risks and Mitigations

Current risks include the latency and accuracy of our device. We had to settle for a minimum accuracy, but the latency is currently too long for a user to safely rely on the device at a traffic light. We are looking into what else can be done to cut that time down.

Schedule

We just need to wrap up our final presentation deliverables, such as the video, poster, and final report. Last-minute items may include adjustments to the belt to make it more comfortable.

 

Yasaswini’s Status Report for 5/8

Completed this week:

  • Did more real-time testing
    • Focused on debugging why the device wouldn’t recognize the color
    • Went through the pictures taken to see what was being identified/cropped
      • The traffic light is being recognized and cropped, but the color isn’t being detected in some cases (see the color-check sketch after this list)
  • Helped attach the Raspberry Pi and the camera to the belt
    • Used glue/Velcro to get them to stick
    • Tested out the fit
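
As a reference for the color debugging above, here is a minimal sketch of one common way to classify the light color of a cropped traffic-light image using OpenCV HSV thresholds. The threshold values, pixel-count cutoff, and function name are illustrative assumptions, not our exact detection code, and only red/green are shown for brevity.

    import cv2
    import numpy as np

    def classify_light_color(cropped_bgr):
        """Rough HSV-threshold check on a cropped traffic-light image (illustrative values)."""
        hsv = cv2.cvtColor(cropped_bgr, cv2.COLOR_BGR2HSV)
        # Red wraps around the hue axis, so it needs two ranges.
        red1 = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
        red2 = cv2.inRange(hsv, (160, 100, 100), (180, 255, 255))
        green = cv2.inRange(hsv, (40, 100, 100), (90, 255, 255))
        counts = {"red": int(np.count_nonzero(red1 | red2)),
                  "green": int(np.count_nonzero(green))}
        best = max(counts, key=counts.get)
        # If very few pixels match either range (e.g. glare or a washed-out image),
        # report that no color was detected instead of guessing.
        return best if counts[best] > 50 else "none"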

For Next Week:

  • Prepare for the final presentation!
    • Film the video
    • Work on the poster

Yasaswini’s Status Report for 5/1

Completed this week:

  • Combined and integrated Ricky’s and my code into one file so that both parts work in conjunction
    • Changed my code so the model wouldn’t have to be downloaded every time it ran and wouldn’t need internet access (see the sketch after this list)
  • Tested the latency and cut down/changed certain code to make the model run faster
    • Got the time down to 10–12 seconds total, which is still a long time
  • Tested the code with Jeanette and debugged Raspberry Pi problems
    • The code ran successfully and output the correct color of the traffic light with the Raspberry Pi connected to a computer, but it doesn’t work when the Pi is not connected to a computer
      • Narrowed the problem down to an issue with the code running from rc.local
  • Below is the image the model takes in and the output of our code, along with the latency (which in this case doesn’t include the time for the camera):
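
As a rough illustration of the “no re-download” change from the first bullet above, here is a minimal sketch of loading a locally exported TensorFlow SavedModel so that no internet access is needed at run time; the directory path and function name are placeholders, not our actual file layout.

    import tensorflow as tf

    # Load the detection model from a directory exported once ahead of time,
    # so nothing is downloaded when the script runs on the Pi.
    LOCAL_MODEL_DIR = "/home/pi/models/ssd_mobilenet_v2/saved_model"  # placeholder path
    detect_fn = tf.saved_model.load(LOCAL_MODEL_DIR)

    def run_detection(image_tensor):
        """Run one inference pass; image_tensor is a uint8 tensor of shape [1, H, W, 3]."""
        return detect_fn(image_tensor)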

For Next Week:

  • Need to get the Raspberry Pi to run the code without it being connected to a computer
  • Need to actually test the device at a stoplight to get real-time accuracy statistics

Yasaswini’s Status Report for 4/24

Completed this week: 

  • Finished debugging the implementation of the training model and ran it!
    • Fed it all the images that we manually took at the stoplight (picture below is an example)

  • Output a cropped image of the traffic light for each image that was fed in
  • Created an output folder with the cropped images for further processing/testing with the detection algorithm (see the detect-and-crop sketch after this list)
  • Across all the pictures run, there were definitely cases that weren’t perfect
    • Identifying the side of a stoplight facing the wrong direction – this is not a hazard, because in that case the system would simply report that it hasn’t detected anything, since the lights aren’t visible at all
    • Not being able to identify the traffic light because it was very distant or at an unusual angle
      • We will need to adjust for these cases to improve our system’s overall accuracy
    • One encouraging result is that so far the system hasn’t identified the WRONG traffic light in a picture, which is good from a safety standpoint
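
For reference, here is a minimal sketch of the detect-and-crop step described above: run the detector on one image and save a crop of the highest-scoring traffic-light box. The COCO class id is the standard one, but the score threshold, paths, and function name are illustrative assumptions rather than our exact code.

    import numpy as np
    import tensorflow as tf
    from PIL import Image

    TRAFFIC_LIGHT_CLASS_ID = 10  # "traffic light" in the standard COCO label map

    def crop_traffic_light(detect_fn, image_path, out_path, min_score=0.5):
        """Detect the best traffic light in an image and save the cropped region."""
        image = Image.open(image_path).convert("RGB")
        width, height = image.size
        input_tensor = tf.convert_to_tensor(np.array(image))[tf.newaxis, ...]
        detections = detect_fn(input_tensor)

        boxes = detections["detection_boxes"][0].numpy()
        classes = detections["detection_classes"][0].numpy().astype(int)
        scores = detections["detection_scores"][0].numpy()

        # Detections are ordered by score, so the first qualifying box is the best one.
        for box, cls, score in zip(boxes, classes, scores):
            if cls == TRAFFIC_LIGHT_CLASS_ID and score >= min_score:
                ymin, xmin, ymax, xmax = box  # normalized [0, 1] coordinates
                crop = image.crop((int(xmin * width), int(ymin * height),
                                   int(xmax * width), int(ymax * height)))
                crop.save(out_path)
                return True
        return False  # nothing confident enough was found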

For Next Week: 

  • Next week is primarily focused on integration and testing
    • As can be seen, some of the pictures aren’t cropped perfectly, although my algorithm tries to be as accurate as possible. This could be a potential issue and needs to be tested along with Ricky’s algorithm to see whether our system works end to end
    • Also, some pictures taken at dawn weren’t output as accurately by the training model, potentially because the rays of sunlight made detection difficult
      • These algorithms therefore need to be tested with the Raspberry Pi’s camera next week, because the images we tested with were taken with Ricky’s phone, which could have a higher resolution

Team Status Report for 4/10

STATUS
  • Traffic Light CV – Yasaswini
    • Integrated git with the cloud, so it’s all set up for the model to start training as soon as the current algorithm is done
  • Look Light Detection – Shayan
    • Look light traffic light detection is mostly working via the Traffic Light Color algorithm, but it may need some modifications
  • Raspberry Pi Interface – Jeanette
    • Connected the camera and the button to the Raspberry Pi, and the camera has been verified to be working. With the battery pack connected as well, it’s basically ready to go (a rough sketch of this setup follows this list).
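
For reference, below is a minimal sketch of the camera-plus-button idea described above, assuming the picamera and RPi.GPIO libraries; the GPIO pin number and file path are illustrative, and the actual wiring and code may differ.

    from picamera import PiCamera
    import RPi.GPIO as GPIO

    BUTTON_PIN = 17  # illustrative BCM pin; the real wiring may use a different pin

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    camera = PiCamera()

    try:
        while True:
            # Block until the button is pressed, then take one picture for the detector.
            GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)
            camera.capture("/home/pi/capture.jpg")
    finally:
        GPIO.cleanup()
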
CHANGES TO THE SYSTEM
  • Nothing new this week
CURRENT RISKS AND MITIGATIONS
  • Yasaswini – need to see what the training accuracy is on the images; this is needed going into next week
  • Shayan – need to further experiment with the angle/light to make sure all situations are accounted for

Yasaswini’s Status Report for 4/10

Completed this week:

  • Changed the model to MobileNet v2 SSD COCO (from ssd_mobilenet_v2_coco) because it has a better balance between speed and accuracy, specifically for running on a Raspberry Pi
  • Ran into TensorFlow issues on the AWS cloud with the server our instance is on, so for now I worked on the desktop instead
  • Configured the new model and set up the classes in the Python code along with a few of the main functions
  • The model is being implemented through a transfer learning framework (see the conceptual sketch after this list)
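
As a rough illustration of the transfer-learning idea, here is a minimal Keras sketch that reuses a pretrained MobileNetV2 backbone as a frozen feature extractor and trains only a small new head. Our actual pipeline is built around the SSD MobileNet detection model, so this snippet, including the placeholder class count, is only meant to show the concept.

    import tensorflow as tf

    NUM_CLASSES = 3  # placeholder output class count for illustration

    # Pretrained MobileNetV2 backbone (ImageNet weights) used as a frozen feature extractor.
    base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                             include_top=False,
                                             weights="imagenet")
    base.trainable = False  # only the new head below gets trained

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])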

For next week:

  • Finish the code for the pipeline (feeding the images into the model)
  • Output the image and see if it’s being classified correctly
  • Might need to make changes to the code based on that

 

Yasaswini’s Status Report for 4/3

Completed this week:

  • Set up the AWS cloud environment and automatic deployment from GitHub
  • Downloaded all the necessary tools/software on our cloud platform
    • COCO dataset interaction tools (see the sketch after this list)
  • Started to implement the training model in the GitHub repo
    • Selected a model & currently working on the code
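
As a small sketch of the COCO dataset interaction tools mentioned above, here is one way to list the training images that contain traffic lights using pycocotools; the annotation file path is a placeholder, and this is not necessarily how our training code is organized.

    from pycocotools.coco import COCO

    # Placeholder path to the downloaded COCO annotation file.
    coco = COCO("annotations/instances_train2017.json")

    # Look up the category id for "traffic light" and every image that contains one.
    traffic_light_ids = coco.getCatIds(catNms=["traffic light"])
    image_ids = coco.getImgIds(catIds=traffic_light_ids)
    print(f"{len(image_ids)} training images contain a traffic light")

    # Each image's metadata includes the file name to load from disk.
    for info in coco.loadImgs(image_ids[:5]):
        print(info["file_name"])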

For next week:

  • Fully finish implementing the training model
  • Run images through the model to see the training error and make any subsequent adjustments
  • Start running the model on the validation data set if the model works on the training images

Yasaswini’s Status Report for 3/27

Things done this week:

  • Created EC2 Instances necessary for training on the cloud
  • Set up AWS CodeDeploy instance
  • Established a successful ssh connection to the instance on the cloud
  • Downloaded the necessary tools in the cloud environment

For Next Week:

  • Make sure that automatic deployments are occurring
  • Need to import the training model
  • Make changes to the training model to better fit the images we are given
  • Feed the training images into the model and see what the accuracy will be

Team Status Report for 3/13

THIS WEEK…
  • Wrote up the Design Proposal Document
  • Received all the ordered parts & started to put them together
  • Tagged the photos we took in order to feed them into the algorithm
  • Set up environment for the pictures to be fed into
STATUS…
  • Need to obtain AWS credits in order to move further with our algorithm
  • Are a bit behind schedule with the testing, which was supposed to start this week but will continue into next week
CHANGES TO THE SYSTEM
  • Changed the use case to helping visually impaired individuals in their training to cross intersections
CURRENT RISKS AND MITIGATIONS
  • Currently the accuracy rate is the biggest potential risk, and we have yet to see the numbers for it
    • To mitigate this somewhat, we are tagging all the stoplights in a given picture frame

Yasaswini’s Status Report for 3/13

Completed for this week:

  • Worked on specific sections of the Design Proposal Document
  • Set up TensorFlow and the Python environment
    • Downloaded all the necessary tools and libraries
  • Started to download the COCO dataset
    • Had storage issues, so we need to wait and deploy to the cloud once we get access
  • Looked into which algorithm to train on once the data comes in

For next week:

  • Need to get the training algorithm up and running
  • Test the training algorithm using the validation set
  • Deploy our code to the cloud so we have more space