Takshsheel’s Status Report for April 30, 2022

This week I worked on the following tasks:

  • Attended class and reviewed the final presentations given by the remaining teams.
  • One of the two counter setups and algorithms was not compatible with the Raspberry Pi camera system, so I made it compatible by changing the camera parameters and the system path to the camera; a rough sketch of the kind of change involved is shown after this list.
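
The sketch below is only illustrative and assumes OpenCV's VideoCapture is used to read frames; the device index and the resolution/frame-rate values are placeholders, not the exact parameters from our setup.

    import cv2

    # On a laptop the default webcam index worked directly; on the Raspberry Pi the
    # camera shows up as a V4L2 device (e.g. /dev/video0), so the source is configurable.
    CAMERA_SOURCE = 0  # placeholder: replace with the Pi's camera index or device path

    cap = cv2.VideoCapture(CAMERA_SOURCE)

    # Lower the resolution and frame rate so the Pi can keep up with processing.
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    cap.set(cv2.CAP_PROP_FPS, 15)

    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("Could not read from camera source: %s" % CAMERA_SOURCE)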

This week I have not put nearly enough effort into the class and the project, since I focused my attention on the midterms/finals I had to take three days in a row. However, my schedule now allows me to make up the time I've missed in the coming week. Hopefully I can put in the justifiable, solid amount of time and effort that is expected for the project.

In terms of schedule, I'm still concerned about porting the counter code onto our integrated system; however, if something does go wrong, I don't think it should take very long to fix. We should be okay on the team schedule in terms of finishing our product for the demo.

For the next week, I hope to have the following deliverables:

  • Final poster
  • Final video
  • Fully integrated system with counter code
  • Final report
  • Final demo

Takshsheel’s Status Report for April 23, 2022

This week, I worked on the following tasks:

  • Built a second counter program that, instead of reporting how much traffic moves in and out of a section, reports how many people are currently present in the aisle (a rough sketch of the approach follows this list).
  • Tested the implementation at various distances, with varying numbers of people, and also checked for false positives.
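
The following is only a rough sketch of the presence-counting idea, assuming an OpenCV background subtractor and a rectangular region of interest covering the aisle; the clip name, ROI coordinates, and blob-size threshold are placeholders rather than our tuned values.

    import cv2

    cap = cv2.VideoCapture("aisle_test.mp4")            # placeholder test clip
    subtractor = cv2.createBackgroundSubtractorMOG2()
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    AISLE_ROI = (100, 50, 400, 300)                      # placeholder (x, y, w, h) of the aisle
    MIN_PERSON_AREA = 1500                               # placeholder minimum blob area

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x, y, w, h = AISLE_ROI
        fg = subtractor.apply(frame[y:y + h, x:x + w])
        _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop MOG2 shadow pixels
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)        # remove small noise blobs
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        people_in_aisle = sum(1 for c in contours if cv2.contourArea(c) > MIN_PERSON_AREA)
        print("people currently in aisle:", people_in_aisle)
    cap.release()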

In terms of schedule, things look like they should be okay from a preliminary standpoint: the background subtraction and detection integration is underway, and provided things don't catastrophically fall apart, we should be fine before the final demo. However, there is also the risk of writing code that doesn't work with the other subsystems, and for the counters, the integration hasn't been tried yet, mainly because I was slow in building the subsystem; now that it is done, we can hopefully integrate it as well.

For next week, I hope to have completed the following tasks.

  • Integrate both counters with the different shelf and camera setups, and decide which counter is more accurate and has lower latency.
  • Work on the final presentation slides prior to the presentations in the coming week.

Team Status Report for April 10, 2022.

An overview of what we did:

  • Each team member presented at the interim demo, received feedback and carried that into the remaining work for our project.
  • Each member worked on individual subsystems for our pre-integration project.

In terms of what we will do this upcoming week, we hope to do the following:

  • Complete the purchases/manufacturing required for our physical space and system (shelf, camera locations, items, etc.)
  • Begin the process of getting our scripts recognized by and running on the Raspberry Pi, and test the integration by making different requests at different times.

 

The most significant risks that we know of thus far are:

  • Deciding between different designs of our system (including Takshsheel's idea for the camera) and integrating the work done on local machines into one system.
  • Completion of our subsystems, which is so far on track; however, bugs that are hard to debug could seriously delay completion. It's not a guarantee, but it is likely we'll see something of that nature, and it will be critical to solve it as early as possible when it happens.

 

Changes to our schedule

  • There are no new changes to our schedule since the interim demo.

Lucky’s Status Report for April 10, 2022

This week I did the following tasks:

  • Presented interim demo
  • Considered solution approaches for image processing
    • Researched segmentation for image processing
      • Researched sliding window segmentation
    • Began implementation design of sliding window segmentation (a rough sketch of the idea follows this list)
      • This includes testing considerations
        • The test images must be the same size as the image that will be taken of the aisle shelf from the fixed distance; otherwise the sliding window will be either too small or too large
  • Ordered shelf to begin building for integrated testing
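
Below is only a rough sketch of the sliding-window idea under consideration, assuming fixed-size windows moved over the shelf image with a fixed stride; the window size, stride, and image name are placeholders.

    import cv2

    def sliding_windows(image, window_w=128, window_h=128, stride=64):
        """Yield (x, y, patch) for each window position across the image."""
        img_h, img_w = image.shape[:2]
        for y in range(0, img_h - window_h + 1, stride):
            for x in range(0, img_w - window_w + 1, stride):
                yield x, y, image[y:y + window_h, x:x + window_w]

    shelf = cv2.imread("shelf.jpg")        # placeholder image of the aisle shelf
    for x, y, patch in sliding_windows(shelf):
        pass                               # segmentation / item detection would run on each patch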

 

In terms of schedule, I think I am on pace, as long as the segmentation code I am in the process of producing is finished by the end of this upcoming Monday's lab session.

 

What Next

  • Pick up shelf and set up camera at fixed distance that fits the shelf
    • Determine the image dimensions
    • Formulate the constraints for the sliding window
  • Complete sliding window segmentation and test
  • Proceed to integration

Takshsheel’s Status Report for April 10, 2022.

This week, I worked on the following for our capstone project.

  • Performed an interim demo of the background subtraction system, which will be used to detect people walking in and out of frame. I also worked on and mentioned a foot traffic counter during the demo; it was incomplete and buggy, and I've since worked on fixing it, so as of right now it can be inaccurate at times but no longer crashes immediately like it did before.
  • Besides the demo and the counter, I drew up a design that I'd like to discuss with my teammates during our meeting slot on Monday; while not as time-consuming as the debugging for the counter, it is relevant because it could lead to a last-minute design improvement.
  • Looked into web sockets for transferring data around, since we'd like our Raspberry Pi to run this script and send data to our local machine, so familiarizing myself with the process felt necessary (a rough sketch of what that could look like follows this list).
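
As a rough idea of what that transfer could look like, the sketch below assumes the Python websockets package on the Pi and a hypothetical server already listening on the local machine; the address, port, and payload fields are placeholders.

    import asyncio
    import json

    import websockets   # assumed library choice for the web-socket transfer

    async def send_reading(uri, payload):
        # Open a web-socket connection to the local machine and send one JSON message.
        async with websockets.connect(uri) as ws:
            await ws.send(json.dumps(payload))

    # Hypothetical usage on the Pi: report the latest counter reading.
    asyncio.run(send_reading("ws://192.168.1.10:8765", {"people_in_frame": 2}))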

In terms of personal progress, I feel like I've finally caught back up to where I've wanted to be for the past two weeks, and I have a good feeling about the project itself.

By next week, I'd like our team to have made significant progress on the integration of our system, as well as to have some physical structure for it. To make that happen, I'd like to help build the shelves if needed, and set up the camera based on the present/new design that we will discuss on Monday.

Takshsheel’s Status Report for April 2, 2022.

This week I worked on the following tasks:

  • Met with our team to discuss the integration framework and the performance and requirements of our subsystems.
  • Continued to fix the buggy counter from last week. It is close to being fixed, but not quite there yet.
  • Built a testing framework for different cameras and videos, and tested my counter at various distances and speeds of movement with varying numbers of people (a rough sketch of the harness follows this list).
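
A rough sketch of what such a harness looks like conceptually; run_counter here is a hypothetical wrapper around the counter under test, and the clip names and expected counts are placeholder ground-truth labels.

    import cv2

    # Placeholder ground truth: test clip -> expected people count.
    TEST_CLIPS = {
        "two_people_2m.mp4": 2,
        "three_people_4m_fast_walk.mp4": 3,
        "empty_aisle.mp4": 0,
    }

    def run_counter(video_path):
        """Hypothetical wrapper: run the counter on one clip and return its final count."""
        cap = cv2.VideoCapture(video_path)
        count = 0
        # ... the counter logic under test would process frames here ...
        cap.release()
        return count

    for clip, expected in TEST_CLIPS.items():
        got = run_counter(clip)
        print(f"{clip}: expected {expected}, got {got}")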

My progress has been behind since last week, and the buggy implementation I've got is still not fixed, so this is a concern and a risk I'm aware of. From an integration perspective, we need a lot of work done as a team and from me, so through Carnival break I will be working on getting a fix for my counter program.

For next week, personally I would like to have a deliverable for the counter, which was supposed to be done last week, and from a team perspective, testable integrated software for our MVP idea.

Team Status Report for April 2, 2022

An overview of what we did:

  • We all individually worked on the subsystems we were planning to build. Takshsheel worked on his multiple-person detection and counting. Our work centered on finishing up working, testable programs and subsystems.
  • We prepared what we would like to discuss in the interim demo. For this we met outside of class time to plan and prepare questions and demonstrations.

In terms of what we will do this upcoming week, we hope to do the following:

  • Early in the upcoming week (4/3 and 4/4), we'd like to perfect our tech demo for our subsystems.
  • After 4/4, the focus will shift to integration and combined tests for the final project.

 

The most significant risks that we know of thus far are:

  • The most significant risk we have is still the integration of our subsystems. We have met and planned a structure involving web sockets as well; however, the programming still needs to be completed.

 

Changes to our schedule

  • There were no new changes recorded to our schedule. Takshsheel's schedule change from last week is still in effect.

Takshsheel’s Status Report for March 26, 2022.

This week, in addition to the common team goals, I worked on the following:

  • I worked on getting a people counter working using contours along with background subtraction in OpenCV (a rough sketch of the general approach follows this list).
  • I also discussed with my teammates the ethical considerations that came to mind after our discussion on Monday, particularly how our product could be used, how much of that should be our responsibility, and whether it is on us to address or up to the consumer of our product.
  • In terms of an issue or risk I found, it has taken me longer than expected to fix a bug in my implementation of contouring, and even though I've spent a fair amount of time on it, I still don't understand the bug. I've therefore decided to seek outside help from CMU faculty for advice on how I might either do this differently or fix the issue I've found.
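
To make the approach concrete, here is a rough sketch of one way contours on a background-subtracted mask can feed a foot-traffic counter (centroids crossing a horizontal line); this is illustrative only, and the clip name, line position, and area threshold are placeholders rather than the actual implementation.

    import cv2

    cap = cv2.VideoCapture("hallway_test.mp4")          # placeholder test clip
    subtractor = cv2.createBackgroundSubtractorMOG2()
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    LINE_Y = 240          # placeholder counting line, in pixels from the top of the frame
    MIN_AREA = 1500       # placeholder minimum contour area to count as a person
    prev_centroids, count_in, count_out = [], 0, 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = subtractor.apply(frame)
        _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)     # drop shadow pixels
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            if cv2.contourArea(c) < MIN_AREA:
                continue
            x, y, w, h = cv2.boundingRect(c)
            centroids.append((x + w // 2, y + h // 2))
        # Naive matching: compare each centroid to the nearest centroid from the previous
        # frame and count a crossing when the pair straddles the counting line.
        for cx, cy in centroids:
            if prev_centroids:
                px, py = min(prev_centroids, key=lambda p: abs(p[0] - cx) + abs(p[1] - cy))
                if py < LINE_Y <= cy:
                    count_in += 1
                elif py >= LINE_Y > cy:
                    count_out += 1
        prev_centroids = centroids

    print("in:", count_in, "out:", count_out)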

The bug I've mentioned here definitely puts me behind schedule. I was hoping to have the counter completed by this weekend, but I'm still stuck on it. Luckily, for our MVP idea we only need motion detection, which is working. However, the counter is something I was really looking forward to having in our project, so this setback is rather detrimental.

By next week, I hope to have completed the counter, as well as help with the integration process of our wireless communication system.

Takshsheel’s Status Report for February 26, 2022.

This week, I accomplished the following tasks:

  • Worked for a bit on the design review presentation delivery; I generally struggle with presentations, so it took longer than I expected, but it was necessary in my opinion.
  • Made a working implementation of a background subtraction algorithm, along with a framework where we can test various background subtraction algorithms on a pre-recorded video (a rough sketch of the framework follows this list). I tested two algorithms, MOG2 and KNN, whose documentation I found online; the images below show people moving across the camera.
    (Figure: test of KNN with multiple people in frame)
    (Figure: test of MOG2 with one person)
  • Besides the algorithms above, I also tried to extract data from the grayscale foreground images to see if I could record how many people were in the frame, but I haven't managed that yet. (Deliverable for next week.)
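
A rough sketch of the comparison framework described above, assuming OpenCV's built-in MOG2 and KNN subtractors applied to a pre-recorded clip; the file name is a placeholder.

    import cv2

    # The two background subtraction algorithms being compared.
    subtractors = {
        "MOG2": cv2.createBackgroundSubtractorMOG2(),
        "KNN": cv2.createBackgroundSubtractorKNN(),
    }

    cap = cv2.VideoCapture("walkthrough.mp4")   # placeholder pre-recorded test video
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for name, sub in subtractors.items():
            fg_mask = sub.apply(frame)          # grayscale foreground mask for this frame
            cv2.imshow(name, fg_mask)
        if cv2.waitKey(30) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()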

In total I worked on capstone for about 11-12 hours this week, but to avoid overstating my time I'll claim 11 hours.

In terms of schedule, I feel like I'm finally on track, because I had said I'd have a background subtraction program working, and it is done as of now. For next week I plan to have an implementation that works on a live video feed instead of a pre-recorded one, as well as to extract information from the data I get from OpenCV (e.g., the number of people in frame).

Team Status Report for February 26, 2022

 

An overview of what we did:

  • We presented our design review
  • We received feedback from the review and went over changes as a group
  • Progress was made in the background subtraction for motion / people detection
  • Progress was made for the object detection as well as a potential solution to our SIFT / SURF issue
  • Went through a comprehensive Flask tutorial, researched and planned how to upload/download from an AWS S3 bucket, researched how to use an RPI with the ARDUCAM, and identified extra parts that we need to order (camera module and ribbon for RPI to ARDUCAM)
  • We realized we need to order some additional inventory, so we discussed the items and plan to submit an order this week

In terms of what we will do this upcoming week, we hope to do the following:

  • Setting up the backend and frontend of our Flask app (a rough sketch of the backend's data endpoint follows this list)
  • Sending dummy data from the RPI to a local machine (since we still need to order a camera module, we cannot take pictures yet; sending dummy data lets us get used to using the RPI so that once the camera module comes in, we'll be good to go)
  • Continued progress on detection for both motion/people and objects
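
As a rough sketch only: one way the Flask backend's data endpoint could look, with the route names and payload handling assumed for illustration rather than taken from our final design.

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    latest_reading = {}   # most recent payload received from the RPI (or the dummy-data script)

    @app.route("/upload", methods=["POST"])
    def upload():
        # The RPI POSTs a JSON payload here; for now a dummy-data script plays that role.
        global latest_reading
        latest_reading = request.get_json(force=True)
        return jsonify(status="ok")

    @app.route("/latest", methods=["GET"])
    def latest():
        # The frontend polls this endpoint to display the most recent reading.
        return jsonify(latest_reading)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)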

 

The most significant risks that we know of thus far are:

  • The most significant risk thus far is still the possibility that we will have to work with 2 different camera systems, considering the scarcity of Raspberry Pis right now
    • We might have to use a laptop, webcam, and do a different control design
  • We are not sure how well the ARDUCAM lenses will work with images (i.e. we believe that it is a fisheye lens and we may need to order another type of lens if that doesn’t work out with object detection)
  • Another potential risk/consideration is integration: we have made progress in detection, but we need to think through how our system will actually use the detection in an autonomous, integrated manner
  • A risk in the background subtraction part of our OpenCV work is that we have not yet figured out how to keep track of how many people are on screen. The mitigation plan would involve getting help from our instructors and professors at CMU, and researching further how to extract information from a grayscale foreground image.

There have been no changes to our schedule, but in terms of the design, we noticed that we will most likely need to upload/download our processed images/data to the cloud (i.e., an AWS S3 bucket). Originally, uploading information to the cloud was going to be a reach goal, but in order to connect the processed data from the CV program to the Flask app, we will probably need an intermediate cloud storage space. Since we have used AWS before, we will most likely move forward with that.
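
A rough sketch of what that intermediate step might look like, assuming boto3 with AWS credentials already configured; the bucket name, object keys, and file paths are placeholders.

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "capstone-processed-data"    # placeholder bucket name

    # The CV side uploads a processed image/result to the bucket...
    s3.upload_file("processed/frame_0001.png", BUCKET, "frames/frame_0001.png")

    # ...and the Flask app later downloads it to display or serve.
    s3.download_file(BUCKET, "frames/frame_0001.png", "static/frame_0001.png")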

 

We were able to get our background subtraction algorithm to detect the presence of motion in frame.

(Figure: foreground data for the MOG2 algorithm)
(Figure: foreground data for the KNN algorithm)