Yasaswini’s Status Report for 5/8

Completed this week:

  • Did more real-time testing
    • Focused on debugging why the system wouldn’t recognize the color
    • Went through the pictures taken to see what was being identified/cropped
      • The traffic light is being recognized and cropped, but in some cases the color isn’t being detected (a sketch of the color check follows this list)
  • Helped attach the Raspberry Pi and the camera to the belt
    • Used adhesive glue/Velcro to hold them in place
    • Tested out the fit
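
Since this is where the debugging is focused, below is a minimal sketch of how an HSV-threshold color check over the cropped traffic light can look. The function name and threshold values are illustrative placeholders, not our exact code, and real thresholds would need tuning on our stoplight photos:

```python
import cv2

def classify_light_color(cropped_bgr):
    """Rough HSV-threshold color check over a cropped traffic light.
    Threshold values are illustrative and need tuning on real photos."""
    hsv = cv2.cvtColor(cropped_bgr, cv2.COLOR_BGR2HSV)
    masks = {
        # Red hue wraps around 0 on OpenCV's 0-180 hue scale, so two ranges.
        "red": cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
               | cv2.inRange(hsv, (160, 100, 100), (180, 255, 255)),
        "yellow": cv2.inRange(hsv, (15, 100, 100), (35, 255, 255)),
        "green": cv2.inRange(hsv, (40, 100, 100), (90, 255, 255)),
    }
    # Pick the color with the most "lit" pixels; report none if all are tiny.
    counts = {color: cv2.countNonZero(m) for color, m in masks.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 50 else "none"
```

One failure mode consistent with what we saw: if the light is washed out (low saturation), every mask comes back near-empty and no color is reported even though the light was cropped correctly.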

For Next Week:

  • Prepare for the final presentation!
    • Film the video
    • Work on the poster

Team Status Report 5/1

This week we integrated all of our parts and debugged the issues that arose.

Risks and Mitigations

We found that the Raspberry Pi processor runs the TensorFlow application slowly. Our latency is ~26 seconds, which is far above what we expected. For now, we are prioritizing reducing latency over improving accuracy.

Schedule

Down to the last week, we are going to clean up as many issues as possible, including latency and accuracy. We will finish up testing and prepare the PowerPoint, poster, and video.

Yasaswini’s Status Report for 5/1

Completed this week:

  • Combined and integrated Ricky’s and my code into one file so that they work in conjunction
    • Changed my code so the model wouldn’t have to be downloaded every time it ran and wouldn’t need internet access (a sketch of this change follows this list)
  • Tested out the latency and cut down/changed certain code in order to make the model run faster
    • Got the time down to 10 – 12 sec total, which is still a large amount of time
  • Tested the code along with Jeanette and debugged through Raspberry Pi problems
      • The code successfully ran and output the correct color of the traffic light with the Raspberry Pi connected to a computer, but it doesn’t work when the Pi runs standalone
      • Narrowed the problem down to an issue with the code running on rc.local
  • Below is the image the model takes in and the output of our code along with the latency (which in this case doesn’t include the time for the camera):
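
As a reference for the caching change, here is a minimal sketch of loading the detection model from local disk instead of downloading it each run, with a simple latency measurement around inference. The model path is a hypothetical placeholder, not our actual directory layout:

```python
import time
import tensorflow as tf

# Hypothetical local path: the point is to load the exported SavedModel
# from disk so the script needs no download and no internet access.
MODEL_DIR = "models/ssd_mobilenet_v2/saved_model"

detect_fn = tf.saved_model.load(MODEL_DIR)

def timed_detect(image_np):
    """Run one detection pass on a uint8 HxWx3 image and report latency."""
    input_tensor = tf.convert_to_tensor(image_np)[tf.newaxis, ...]
    start = time.time()
    detections = detect_fn(input_tensor)
    print(f"inference took {time.time() - start:.2f} s")
    return detections
```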

For Next Week:

  • Need to get the Raspberry Pi to run the code without it being connected to a computer
  • Need to actually test the device at a stoplight to get real time accuracy statistics

Yasaswini’s Status Report for 4/24

Completed this week: 

  • Finished debugging the implementation of the training model and ran it!
    • Fed it all the images that we manually took at the stoplight (picture below is an example)

  • Outputted a cropped image of the traffic light for each image that was fed in
  • Created an output folder with the cropped images for further processing/testing of the detection algorithm (a sketch of this cropping step follows this list)
  • Across all the pictures run, there were definitely cases that weren’t perfect
    • Identifying the side of a stoplight facing the wrong direction: this is not a hazard, because in that case the system simply reports that it hasn’t detected anything, since the lights aren’t visible at all
    • Failing to identify the traffic light because it was very distant or at an unusual angle
      • We will need to adjust for these cases to improve our system’s overall accuracy
    • One encouraging result is that so far the system hasn’t identified the WRONG traffic light in a picture, which is good from a safety standpoint
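
Below is a minimal sketch of the cropping step mentioned above, assuming normalized [ymin, xmin, ymax, xmax] boxes as the TensorFlow detection models return them; the function name and score threshold are illustrative, not our exact code:

```python
import os
import cv2

def save_crops(image_bgr, boxes, scores, out_dir="output", min_score=0.5):
    """Crop each confident detection and write it to the output folder.
    Boxes are assumed normalized [ymin, xmin, ymax, xmax]."""
    os.makedirs(out_dir, exist_ok=True)
    h, w = image_bgr.shape[:2]
    for i, (box, score) in enumerate(zip(boxes, scores)):
        if score < min_score:
            continue
        ymin, xmin, ymax, xmax = box
        crop = image_bgr[int(ymin * h):int(ymax * h),
                         int(xmin * w):int(xmax * w)]
        cv2.imwrite(os.path.join(out_dir, f"light_{i}.jpg"), crop)
```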

For Next Week: 

  • Next week is primarily focused on integration and testing
    • As can be seen, some of the pictures aren’t cropped perfectly, although my algorithm tries to be as accurate as possible. This is a potential issue and needs to be tested together with Ricky’s algorithm to see whether our system works end to end
    • Also, pictures taken at dawn weren’t processed as accurately by the training model, potentially because direct sunlight made detection difficult
      • These algorithms therefore need to be tested with the Raspberry Pi’s camera next week, because the images we tested with were taken on Ricky’s phone, which likely has a higher-resolution camera

Yasaswini’s Status Report for 4/10

Completed this week:

  • Switched the model to MobileNet v2 SSD trained on COCO (ssd_mobilenet_v2_coco) because it has a better balance between speed and accuracy for running on a Raspberry Pi
  • Ran into TensorFlow issues on the AWS cloud with the server our instance is on, so for now I worked on the desktop instead
  • Configured the new model and set up the classes in the Python code along with a few of the main functions
  • The model is being implemented through a transfer-learning framework (a sketch of the class setup follows this list)
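
Since the pretrained COCO model already knows the traffic-light class, the class setup largely amounts to filtering detections down to that one class id. A minimal sketch, assuming a loaded SSD MobileNet v2 SavedModel; the function name and score threshold are illustrative:

```python
import tensorflow as tf

TRAFFIC_LIGHT_ID = 10  # COCO class id for "traffic light"

def traffic_light_detections(detect_fn, image_np, min_score=0.5):
    """Run the detector and keep only confident traffic-light boxes."""
    input_tensor = tf.convert_to_tensor(image_np)[tf.newaxis, ...]
    out = detect_fn(input_tensor)
    boxes = out["detection_boxes"][0].numpy()
    classes = out["detection_classes"][0].numpy().astype(int)
    scores = out["detection_scores"][0].numpy()
    keep = (classes == TRAFFIC_LIGHT_ID) & (scores >= min_score)
    return boxes[keep], scores[keep]
```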

For next week:

  • Finish the code for the pipeline (feeding the images into the model)
  • Output the image and see if it’s being classified correctly
  • Make any needed changes to the code based on those results

Yasaswini’s Status Report for 4/3

Completed this week:

  • Set up the AWS cloud environment and automatic deployment from GitHub
  • Downloaded all the necessary tools/software on our cloud platform
    • COCO dataset interaction tools (a sketch of their use follows this list)
  • Started to implement the training model in the GitHub repo
    • Selected a model and am currently working on the code
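
For the COCO interaction tools, here is a minimal sketch of pulling out the traffic-light images with pycocotools; the annotation path is a placeholder for wherever the downloaded annotations actually live:

```python
from pycocotools.coco import COCO

# Placeholder path to the downloaded COCO annotation file.
coco = COCO("annotations/instances_val2017.json")

# Look up the "traffic light" category and every image containing one.
cat_ids = coco.getCatIds(catNms=["traffic light"])
img_ids = coco.getImgIds(catIds=cat_ids)
print(f"{len(img_ids)} images contain at least one traffic light")
```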

For next week:

  • Fully finish implementing the training model
  • Run images through the model to see the training error and make any subsequent adjustments
  • Start running the model on the validation data set if the model works on the training images

Yasaswini’s Status Report for 3/27

Things done this week:

  • Created EC2 Instances necessary for training on the cloud
  • Set up AWS CodeDeploy instance
  • Established a successful SSH connection to the instance on the cloud (a quick status-check sketch follows this list)
  • Downloaded the necessary tools in the cloud environment
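
As a quick way to confirm the instance is up before connecting, here is a minimal boto3 sketch. It assumes AWS credentials are already configured, and the region and instance id are placeholders, not our actual values:

```python
import boto3

# Assumes AWS credentials are configured; the region and instance id
# below are placeholders, not our actual values.
ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
state = resp["Reservations"][0]["Instances"][0]["State"]["Name"]
print(f"training instance is {state}")
```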

For Next Week:

  • Make sure that automatic deployments are occurring
  • Import the training model
  • Make changes to the training model to better fit the images we are given
  • Feed the training images into the model and see what the accuracy will be

Yasaswini’s Status Report for 3/13

Completed for this week:

  • Worked on specific sections of the Design Proposal Document
  • Set up TensorFlow and the environment in Python (a quick sanity check is sketched after this list)
    • Downloaded all the necessary tools and libraries
  • Started to download the COCO dataset
    • Ran into storage issues, so we need to wait to deploy to the cloud once we get access
  • Looked into which algorithm to train on once the data comes in
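
A quick sanity check of the environment setup, plus a look at remaining disk space, which is what bit us with the COCO download. This is a sketch assuming TensorFlow and OpenCV are the installed libraries:

```python
import shutil
import cv2
import tensorflow as tf

# Confirm the libraries import and check free space before pulling more
# of COCO (the train2017 image set alone is roughly 18 GB zipped).
print("TensorFlow:", tf.__version__)
print("OpenCV:", cv2.__version__)
free_gb = shutil.disk_usage(".").free / 1e9
print(f"{free_gb:.1f} GB free on this volume")
```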

For next week:

  • Need to get the training algorithm up and running
  • Test the training algorithm using the validation set
  • Deploy our code to the cloud so we have more space

Yasaswini’s Status Report for 3/6

Completed Items:

  • Finalized products being ordered
  • Planned out how exactly to design the algorithm to process the images
    • What to use when training in OpenCV
    • How to integrate the cloud (AWS) with our training model in TensorFlow
  • Worked on the design proposal presentation

For next week:

  • Need to start training the images
    • Start implementing the model decided upon
  • Need to work on the design proposal document

Yasaswini’s Status Report for 2/27

Completed Items for this week:

  • Researched and found a few OpenCV examples
  • Brushed up on OpenCV basics and learned how to handle images in that environment
  • Played around with sample images in preparation for uploading the actual ones (a basic example follows this list)
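
A minimal example of the kind of image handling practiced here, covering the basic load/inspect/transform/save loop; the file paths are placeholders:

```python
import cv2

# Load a sample image and inspect its shape (placeholder path).
img = cv2.imread("samples/intersection.jpg")
print("dimensions (h, w, channels):", img.shape)

# Resize and write a copy back out.
small = cv2.resize(img, (640, 480))
cv2.imwrite("samples/intersection_small.jpg", small)
```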

To complete next week:

  • Upload the actual images of the Morewood/Elsworth intersection taken
  • Process the images in OpenCV
  • Meet with the visually impaired individual along with the rest of the group