Team Status Report for February 26, 2022


An overview of what we did:

  • We presented our design review
  • We received feedback from the review and went over changes as a group
  • We made progress on background subtraction for motion/people detection
  • We made progress on object detection and identified a potential workaround for our SIFT/SURF issue (see the sketch after this list)
  • We went through a comprehensive Flask tutorial, researched and planned how to upload to and download from an AWS S3 bucket, researched how to use a Raspberry Pi with the ARDUCAM, and identified extra parts that we need to order (a camera module and a ribbon cable to connect the RPi to the ARDUCAM)
  • We realized we need to order some additional inventory, so we discussed the items and plan to submit an order this week
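As a rough illustration of one patent-free direction for the SIFT/SURF issue (not necessarily our final approach), the sketch below uses OpenCV's ORB detector with brute-force matching; the image paths and feature count are placeholders.

```python
# Hypothetical sketch: ORB keypoint matching as one possible stand-in for
# SIFT/SURF. File names below are placeholders.
import cv2

query = cv2.imread("query_object.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)  # feature budget is a guess to tune
kp_q, des_q = orb.detectAndCompute(query, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

# ORB produces binary descriptors, so Hamming distance is the right metric.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_q, des_s), key=lambda m: m.distance)

# Visualize the 20 best matches for a quick sanity check.
vis = cv2.drawMatches(query, kp_q, scene, kp_s, matches[:20], None)
cv2.imwrite("orb_matches.png", vis)
```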

This upcoming week, we hope to do the following:

  • Set up the backend and frontend of our Flask app (a rough backend sketch follows this list)
  • Send dummy data from the RPi to a local machine (since we still need to order a camera module, we cannot take pictures yet; sending dummy data lets us get comfortable with the RPi so that once our camera module comes in, we'll be good to go; see the second sketch after this list)
  • Continue making progress on detection for both motion/people and objects
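As a starting point for the Flask work, here is a minimal sketch of the kind of backend we have in mind; the route names and in-memory storage are placeholders, not our final design.

```python
# Minimal Flask backend sketch; routes and storage are placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)
detections = []  # in-memory stand-in for a real datastore


@app.route("/detections", methods=["POST"])
def add_detection():
    # The RPi (or the CV program) would POST JSON results here.
    detections.append(request.get_json())
    return jsonify(status="ok"), 201


@app.route("/detections", methods=["GET"])
def list_detections():
    # The frontend would poll this to render recent results.
    return jsonify(detections)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```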
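And a companion sketch of the dummy-data step: the RPi POSTs placeholder readings to the machine running the Flask app. The IP address, port, payload fields, and send cadence are all hypothetical.

```python
# Runs on the RPi: send dummy detection payloads to a local machine.
import time

import requests

LAPTOP_URL = "http://192.168.1.10:5000/detections"  # hypothetical address

while True:
    payload = {"timestamp": time.time(), "motion": False, "people": 0}
    try:
        requests.post(LAPTOP_URL, json=payload, timeout=2)
    except requests.RequestException as err:
        print(f"send failed: {err}")  # keep looping; the laptop may be down
    time.sleep(5)  # placeholder cadence
```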


The most significant risks that we know of thus far are:

  • The most significant risk thus far is still that we may have to work with two different camera systems, given the current scarcity of Raspberry Pis
    • We might have to fall back to a laptop and webcam and adopt a different control design
  • We are not sure how well the ARDUCAM lens will work for our images (we believe it is a fisheye lens, and we may need to order a different type of lens if the distortion does not work out with object detection)
  • Another potential risk/consideration is integration: we have made progress on detection, but we need to really think through how our system will actually use the detection results in an autonomous, integrated manner
  • A risk in our OpenCV background subtraction work is that we have not yet resolved how to keep track of how many people are on screen. Our mitigation plan would probably involve getting help from our instructors and professors at CMU and researching further how to extract that information from a grayscale image (one idea we are exploring is sketched below)
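One idea we are exploring for the people-count problem: threshold the grayscale foreground mask, clean it up with morphology, and count the large connected blobs. This is only a sketch; it assumes the mask comes from MOG2 (which marks shadows as gray value 127), and the area threshold is a guess we would have to tune.

```python
import cv2
import numpy as np


def count_people(fg_mask, min_area=1500):
    """Rough people count from a grayscale foreground mask (untuned sketch)."""
    # Drop shadow pixels (MOG2 marks shadows as 127) and binarize the mask.
    _, binary = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)
    # Morphological opening removes small speckle noise.
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Treat each sufficiently large external contour as one person.
    # Caveat: overlapping people will merge into a single blob.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) > min_area)
```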

There have been no changes to our schedule, but in terms of the design, we realized that we will most likely need to upload/download our processed images/data to the cloud (i.e., an AWS S3 bucket). Originally, uploading information to the cloud was going to be a reach step, but in order to connect the processed data from the CV program to the Flask app, we will probably need an intermediate cloud storage space. Since we have used AWS before, we will most likely move forward with it (a rough sketch of the S3 step follows).
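For reference, a minimal sketch of what the S3 hand-off might look like with boto3; the bucket name and object keys are placeholders, and credentials are assumed to come from the standard AWS configuration.

```python
# Sketch of the intermediate S3 storage step; names are placeholders.
import boto3

s3 = boto3.client("s3")
BUCKET = "our-project-bucket"  # hypothetical bucket name

# CV side: push a processed frame up to the bucket.
s3.upload_file("processed/frame_0001.png", BUCKET, "frames/frame_0001.png")

# Flask side: pull the same frame back down for the web app to serve.
s3.download_file(BUCKET, "frames/frame_0001.png", "static/frame_0001.png")
```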


We were able to get our background subtraction algorithm to detect the presence of motion in the frame (a minimal sketch of this kind of detection loop follows the figures below).

[Figure: Foreground data for the MOG2 algorithm]
[Figure: Foreground data for the KNN algorithm]
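For context, here is a minimal sketch of the kind of loop behind these results; the video source and motion threshold are placeholders, and swapping in cv2.createBackgroundSubtractorKNN() gives the KNN variant.

```python
import cv2

cap = cv2.VideoCapture(0)  # webcam as a stand-in video source
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)  # grayscale foreground mask
    # Crude motion flag: enough foreground pixels means something moved.
    if cv2.countNonZero(fg_mask) > 5000:  # threshold is a placeholder
        print("motion detected")
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```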
