Shubhi’s Status Report for 4/20

Achievements

I worked on increasing the accuracy of cashier throughput: I worked through different approaches, implemented them, and tested them to find the most accurate one for our use case. After that, I built a module that determines when a person is done with the line, which I also tested briefly. We also went to Salem’s, where I created structures to mount the cameras so they could be positioned around the checkout counters for our system. While at Salem’s, I also gathered more data to individually test the fullness acquisition module for accuracy.

Progress

Each individual component is working and our system runs end to end, but we are running into issues with testing at Salem’s because of concerns some employees have about the cameras we set up. We are in the process of working something out with Salem’s, but it is limiting the amount of testing we can do right now. I think the next big step is demoing and testing in a self-constructed grocery checkout lane setup.

New Experiences and Learning Strategies

During this project I have gained a lot of new skills. One of the biggest is training a model for object detection, which I learned from examples and articles online. Another is using a SQL database. I never took a class on it, so I relied on documentation and sites like Stack Overflow to achieve the goals I had for the database in this project. For many of these new skills, I researched whether anyone had done something similar and adapted their approach to accomplish what I wanted.

Simon’s Status Report for 4/20

Accomplishments

In the past two weeks, Brian and I collaborated on most of the work. To start, we got uploading from our Raspberry Pis to S3 working, and then downloading as well. After that, we came up with a very simple synchronization method: uploading a .txt file to indicate an upload was complete, then downloading that .txt file frequently to check whether an upload had occurred (since the file is very small, the cost of polling is negligible).
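The polling logic can be sketched generically; in this sketch a local file stands in for the S3 .txt marker object (the real check would hit S3, e.g. via boto3), and the function names are illustrative rather than our exact code:

```python
import os
import time

def wait_for_marker(marker_exists, interval_s=1.0, timeout_s=30.0):
    """Poll until marker_exists() returns True, or give up after timeout_s."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if marker_exists():
            return True
        time.sleep(interval_s)
    return False

# Local stand-in for the S3 marker object: a .txt file on disk.
MARKER = "upload_complete.txt"

def local_marker_exists():
    return os.path.exists(MARKER)

if __name__ == "__main__":
    open(MARKER, "w").close()  # pretend the uploader just finished
    print(wait_for_marker(local_marker_exists, interval_s=0.1, timeout_s=1.0))
    os.remove(MARKER)
```

The small, fixed-size marker is what keeps the polling cheap: the downloader only pulls the large video files once the marker shows up.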

Brian and I then worked on line detection, using YOLOv8’s pretrained model to detect people, tracking their locations with the bounding boxes, and determining whether people should be considered in line based on the distance and angle between them. To make it more robust, we tried counting people as in line only if they had stood still for a few seconds, which should eliminate people just passing by; however, we weren’t able to get this to work reliably. I am considering implementing pose detection instead, so we could filter out passersby by checking whether they are facing forward (in line) or to the side (passing by), but we will only get to this next week if we have the chance.
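The distance-and-angle heuristic can be sketched roughly as below; the thresholds and the greedy chaining are illustrative, not our exact implementation:

```python
import math

def centroid(box):
    """Center of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def in_line(boxes, max_dist=150.0, max_angle_deg=30.0):
    """Greedily chain people into a line: each next person must be close to
    the previous one and roughly along the same direction.

    Assumes a roughly horizontal line in the frame, so we walk front to back
    by sorting on x.
    """
    if not boxes:
        return []
    pts = [centroid(b) for b in boxes]
    order = sorted(range(len(pts)), key=lambda i: pts[i][0])
    line = [order[0]]
    prev_angle = None
    for i in order[1:]:
        (px, py), (qx, qy) = pts[line[-1]], pts[i]
        dist = math.hypot(qx - px, qy - py)
        angle = math.degrees(math.atan2(qy - py, qx - px))
        if dist > max_dist:
            break  # too far from the previous person to be queued behind them
        if prev_angle is not None and abs(angle - prev_angle) > max_angle_deg:
            break  # line bends too sharply; probably a passerby
        line.append(i)
        prev_angle = angle
    return [boxes[i] for i in line]
```

The stand-still filter we couldn’t stabilize would sit on top of this, tracking each centroid across frames before it is allowed into the chain.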

Lastly, we spent some time testing at Salem’s this week. We spent Monday setting up and resolving Wi-Fi issues between our RPis and Salem’s network. We went back on Thursday and got some good footage using an actual checkout lane to test our individual components. We also went earlier today, but unfortunately were not allowed to test this time.

Progress

The main issue is that we didn’t get as much testing done as I would have liked (and not enough to build much of a presentation). I’m planning to go back to Salem’s tomorrow and try again at a better time for them. Less importantly, I still want to implement pose detection to improve our line detection.

New Knowledge

Since I hadn’t done anything with computer vision before, I had to learn OpenCV, which I did by completing a short boot camp from their website. Afterwards, I mostly used YOLOv8’s documentation, along with a few articles I found online, to learn how to train a model and run inference with it. I then used more online articles to figure out how to set up a headless RPi. For accessing the camera module, I used the picamera2 documentation and some of the examples in it. Lastly, I relied on forums quite frequently when I ran into setup issues (things like downloading packages and changing the Wi-Fi settings).

Team Status Report for 4/20

Risk Mitigation 

A risk in gathering test data is making sure we go at times when there is a decent amount of traffic at Salem’s, so that we can collect a sufficient amount of testing data. We have already gone to Salem’s a few times when it wasn’t busy enough for three checkout lanes to be open. We are mitigating this by testing near peak hours instead of early on a weekday morning, when only one or two checkout lanes are available. Furthermore, if we do go at a lower-traffic time, we want at least two checkout lanes running, so there is enough traffic for our system to compute the speeds of two different lanes and compare them properly.

Design Changes 

We haven’t made any changes to our design. 

Schedule Changes 

We haven’t made changes to our schedule as of this week, since we already have a day-to-day schedule. 

Brian’s Status Report for 4/20

Accomplishments

This week, I went to Salem’s multiple times to try to get testing data. On Monday, Simon and I went twice (the second time with Shubhi as well) to get WiFi on the RPis up and running and to debug an error with S3 bucket downloading/uploading. We ran into an issue in the morning with the public WiFi having an “OK” prompt, which effectively blocks connections on a Raspberry Pi. However, Salem’s was kind enough to give us access to their private network, so we were able to set up WiFi on all of the Raspberry Pi 4s. In the downtime between trips, I worked with Simon to make detecting the number of people in a line more robust and at least somewhat functional. All three of us then went on Thursday to set up our full system and let it run, but we ran into issues debugging (since we needed to copy code to all of the RPis and then change minute aspects so each would upload different data) and fixing the camera angles, since the RPi camera modules have a very short connector to the RPi itself. Today we went again, but weren’t able to collect data from the full system running because a few of the employees there were uncomfortable with being recorded. Instead, we took pictures of people’s shopping carts and used an empty checkout lane to record videos for testing throughput calculations. I also started working on the final presentation slides, since that deadline is coming up.

Progress

In terms of progress, I am now a few days behind the day-to-day schedule I created, mostly because setting up the integrated system at Salem’s had so many setbacks that we could not run the full system for data collection. To catch up this upcoming week, we need to test our integrated system completely. I’ll help finalize individual component testing, and after that we can try to go on a day when the employees at Salem’s won’t be uncomfortable with their hands being recorded for throughput.

New Tools

In terms of new tools and technologies I learned over the course of this semester, I was unfamiliar with most of the technologies used in our project: I didn’t know how to use YOLO, I had never used AWS S3 buckets, and I had done very little programming with Raspberry Pis. My method of learning was mainly to browse the available documentation for most of my needs, and checking various forum posts while debugging was vital to implementing various parts of this project. Sometimes the forum posts led to pitfalls (e.g., with the Raspberry Pis, one of the features for adding additional WiFi connections was only supported on a legacy OS, so I used up a lot of time implementing the wrong solution).

Simon’s Status Report for 4/6

Accomplishments

On Wednesday, I flashed Raspberry Pi OS onto a microSD card and set it up as a headless RPi. On Thursday, I attached the RPi camera module and made sure that I could record and save a video using the picamera2 module (it took me unnecessarily long to realize that I needed picamera2 and not picamera). On Friday, I made sure that I could record and save a video from the USB camera as well. On Saturday, I got the camera module and the USB camera to capture and save videos simultaneously, so Brian and I can put our work together tomorrow and see if we can record and then upload/download from S3 successfully. I also spent some time trying to stream from my RPi to a website, but I couldn’t figure out why I wasn’t able to connect, so I think I’ll scrap that approach (plus, it’s probably a bad idea, since anyone who can view the website can view all the footage).

Here’s the code with the camera module and USB camera recording simultaneously:
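The snippet itself didn’t make it into this write-up, so here is a minimal sketch of the approach, assuming picamera2 for the camera module and OpenCV for the USB camera; `record_both` and its parameters are illustrative names, not the exact code:

```python
def expected_frames(fps, seconds):
    """Frames a recording of the given length should contain."""
    return int(fps * seconds)

def record_both(seconds=10, usb_index=0, fps=30):
    """Record the RPi camera module (picamera2, hardware H.264) and a USB
    camera (OpenCV) at the same time. Only runs on a Pi with both cameras,
    which is why the hardware imports are deferred into the function."""
    import cv2
    from picamera2 import Picamera2
    from picamera2.encoders import H264Encoder

    picam = Picamera2()
    picam.configure(picam.create_video_configuration())
    picam.start_recording(H264Encoder(), "module.h264")

    cap = cv2.VideoCapture(usb_index)
    out = cv2.VideoWriter("usb.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (640, 480))
    for _ in range(expected_frames(fps, seconds)):
        ok, frame = cap.read()
        if not ok:
            break
        out.write(cv2.resize(frame, (640, 480)))

    picam.stop_recording()
    cap.release()
    out.release()
```

The key point is that picamera2 records in the background once `start_recording` is called, so the USB read loop can run concurrently without threads.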

Progress

I’m keeping up with the new day-by-day schedule that Brian created, but I was hoping to be further ahead by now, because the RPi setup should not have taken as long as it did. For next week, it should be easy enough to put Brian’s and my work together tomorrow, and from there I will continue following the schedule outlined in Brian’s status report.

Testing

For what I’ve worked on and plan to finish (shopping cart detection and line detection), I could test by going to Salem’s, setting up a camera in each checkout line for 10 minutes (since there are six checkout lines, this will total about an hour), and marking down what percentage of shopping carts are correctly detected (correctly meaning detected by the time they enter the bounding box). For line detection, I will count how many people are in the line from the camera footage, check whether the correct number of people are detected as being in line, and keep track of how many people we are off by.

Team Status Report for 4/6

Risk Mitigation 

A risk we are trying to mitigate at the moment is whether our intended platform for storing data will be sufficient in terms of upload/download speed. Since we plan to upload camera footage from Salem’s and immediately use and discard it in real time, the platform we choose must be fast enough for this data transfer from RPi to laptop to work properly. Therefore, we will test uploading data from our RPis to the cloud with AWS S3 buckets, and if that does not meet our requirements, we can pivot to Dropbox or other distributed storage alternatives.

 

Design Changes

Since we are using RPis to collect real-time data from Salem’s, we are using AWS as a middleman in the content delivery process from the store to our remote laptops, which can then use the data to test our system.

Schedule Changes 

We have created a rough draft schedule for the last few weeks of the semester to stay on track: 

 

April 3rd, 2024 (Wed)
  Brian: Fix environment for cv2.imshow; look into options for RPi data transfer
  Shubhi: Purchase requests for cameras/displays/RPis
  Simon: RPi setup
April 4th (Thu)
  Brian: Fix environment for cv2.imshow; RPi setup
  Shubhi: Talk to Salem’s about using their cameras, WiFi, and installing our own cameras; reimplement fullness acquisition
  Simon: RPi setup
April 5th (Fri)
  Brian: RPi setup; set up server/method for data uploading from RPi
  Shubhi: Reimplement fullness acquisition
  Simon: Access camera module and USB camera simultaneously on RPi
April 6th (Sat)
  Brian: Set up server/method for data uploading from RPi
  Shubhi: Reimplement fullness acquisition
  Simon: Access camera module and USB camera simultaneously on RPi
April 7th (Sun)
  Brian: Set up server/method for data uploading from RPi; test uploading multiple cameras’ data per RPi; implement fetching
  Shubhi: Improve accuracy of fullness acquisition
  Simon: Access camera module and USB camera simultaneously on RPi; test uploading multiple cameras’ data per RPi; implement fetching
April 8th (Mon)
  Brian: Go to Salem’s; add pictures to dataset to train for throughput; implement line detection
  Shubhi: Go to Salem’s; improve accuracy of fullness acquisition
  Simon: Go to Salem’s; implement line detection; add pictures to dataset to train for throughput acquisition
April 9th (Tue)
  Brian: Implement line detection; retrain throughput model
  Shubhi: Improve accuracy of fullness acquisition
  Simon: Implement line detection
April 10th (Wed)
  Brian: Implement line detection; retrain throughput model
  Shubhi: Improve accuracy of fullness acquisition
  Simon: Implement line detection
April 11th (Thu)
  All: Increase accuracy of throughput acquisition; go to Salem’s and install cameras
April 12th (Fri)
  All: Test throughput, line, and fullness acquisition with footage from Salem’s
April 13th (Sat)
  All: Test throughput, line, and fullness acquisition with footage from Salem’s
April 14th (Sun)
  All: Test throughput and line acquisition with footage from Salem’s; integrate line detection
April 15th (Mon)
  Brian: Integrate line detection; purchase toy carts and prop items; test integrated system
  Shubhi and Simon: Integrate line detection; test integrated system
April 16th (Tue)
  All: Test integrated system
April 17th (Wed)
  All: Test integrated system
April 18th (Thu)
  All: Figure out test configuration for final demos; dry runs with test configuration
April 19th (Fri)
  All: Slack
April 20th (Sat)
  All: Slack
April 21st (Sun)
  All: Slack
April 22nd (Mon): FINAL PRESENTATION
April 23rd (Tue)
April 24th: FINAL PRESENTATION

 

Testing 

Using data from Salem’s Market, we plan to validate our system by setting it up there, letting it run, and observing how frequently an individual who uses the system before the next one actually gets out of the store, or to the checkout line, faster. We will then tally the number of users who get to the front of a checkout line or out of the store faster and divide it by the total number of users to measure accuracy. For testing the speed of the system, we can programmatically check how long the system takes from the start of computation to displaying the result, and check whether it is under our design requirements.

Brian’s Status Report for 4/6

Accomplishments 

This week, prior to Wednesday, I spent a lot of time integrating the software modules we had written to prepare for our interim demo. I was able to make the main system run for a single counter using test videos from a trip to Salem’s, but I ran into an issue where OpenCV would not display any of the model predictions (with images) due to an environment issue. I was not able to fix this before the interim demo, but I looked into it afterwards, and it seems that cv2.imshow, which we use to display images in the backend, is not thread-safe, so using two separate models for cart counting and item counting meant we could not see visually what was going on. In the meantime, I worked on setting up a Raspberry Pi, which I was able to do, and on our method of choice for uploading and fetching image/video data from the cameras. We are using an S3 bucket to store real-time footage captured from our RPis, then fetching that data and using it to compute results on our laptops. I set that up and have code that works for uploading generic text and Python files, but I haven’t been able to upload videos or images recorded from the RPi camera module, because my program fails when I import the Raspberry Pi camera module package.
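One common workaround for the cv2.imshow threading issue (a sketch of the pattern, not our final fix) is to keep every display call on one thread and have the model threads hand their annotated frames over a queue. The stand-in workers below just push integers; in the real system they would push frames:

```python
import queue
import threading

def worker(name, frames, q):
    """Stand-in for a model thread: it never displays anything itself,
    it only hands results to the display thread."""
    for f in frames:
        q.put((name, f))
    q.put((name, None))  # sentinel: this worker is done

def display_loop(q, n_workers):
    """The ONLY place that would call cv2.imshow in the real system."""
    shown, done = [], 0
    while done < n_workers:
        name, frame = q.get()
        if frame is None:
            done += 1
        else:
            shown.append((name, frame))  # cv2.imshow(name, frame) goes here
    return shown

def run_demo():
    q = queue.Queue()
    t1 = threading.Thread(target=worker, args=("carts", [1, 2], q))
    t2 = threading.Thread(target=worker, args=("items", [3], q))
    t1.start()
    t2.start()
    shown = display_loop(q, n_workers=2)
    t1.join()
    t2.join()
    return shown
```

Because only one thread touches the GUI, the cart-counting and item-counting models can run concurrently without fighting over the display.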

Here is my code and the S3 bucket after running the code a few times on different files: 
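The screenshot isn’t reproduced here; a minimal sketch of the upload side, assuming boto3 and using illustrative names (`make_key`, the per-device prefix) rather than my exact code:

```python
def make_key(device_id, filename):
    """Namespace uploads per RPi so files from different lanes don't collide."""
    return f"{device_id}/{filename}"

def upload(path, bucket, device_id):
    """Upload one local file to S3. Needs AWS credentials configured on the
    Pi, so the boto3 import is deferred into the function."""
    import os
    import boto3
    s3 = boto3.client("s3")
    key = make_key(device_id, os.path.basename(path))
    s3.upload_file(path, bucket, key)  # boto3's managed multipart upload
    return key
```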

 

I also wrote some code for downloading the files off the S3 bucket into a laptop, which I haven’t tested yet: 
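Along the same lines, the download side can be sketched as follows; boto3 is again assumed, and the helper names are illustrative rather than the untested code itself:

```python
def pick_new(keys, seen):
    """Keys we haven't fetched yet, preserving listing order."""
    return [k for k in keys if k not in seen]

def download_new(bucket, prefix, dest_dir, seen):
    """Fetch any objects under prefix that aren't in `seen` yet, adding the
    fetched keys to `seen` so repeated polls skip them."""
    import os
    import boto3
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    keys = [obj["Key"] for obj in resp.get("Contents", [])]
    for key in pick_new(keys, seen):
        s3.download_file(bucket, key,
                         os.path.join(dest_dir, os.path.basename(key)))
        seen.add(key)
```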

Progress 

We have a new day-by-day schedule with many tasks so we can finish our project in time. I am roughly keeping up with it for now and hope to soon finish the code for uploading real-time footage to the S3 bucket (and fetching it). Today and tomorrow I will need to expand the current code so that it runs continuously and collects real-time footage instead of only recording from the camera module for 3 seconds, and so that it uses USB camera footage as well.

Testing 

In terms of testing, for throughput calculations I plan to use video footage, record the time between the cashier beginning and finishing checking out a batch of items, manually calculate the throughput by dividing the number of items in that batch by the time elapsed from start to finish, and then look at the difference between that manual calculation and the throughput our system computes.
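The manual ground-truth calculation is just items divided by elapsed time; a tiny sketch with illustrative names:

```python
def throughput(num_items, start_s, end_s):
    """Items per second for one checkout batch, from manual timestamps."""
    elapsed = end_s - start_s
    if elapsed <= 0:
        raise ValueError("batch must take positive time")
    return num_items / elapsed

def throughput_error(measured, ground_truth):
    """Absolute difference between the system's estimate and the manual one."""
    return abs(measured - ground_truth)
```

For example, a batch of 12 items scanned between t=10s and t=70s gives a ground-truth throughput of 0.2 items/second to compare the system against.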

Shubhi’s Status Report for 4/6

Achievements

We went to Scotty’s Market and gathered more footage data for development and testing purposes at Salem’s Market. We also finished integrating all the existing modules, so the system can produce an output from the footage we currently have. I also finished redoing the fullness acquisition module to use edge detection and improve accuracy for cart fullness: I used edge detection and contours to figure out the area occupied by the items within the cart, and then, after playing with the numbers, worked on increasing its accuracy. I have also been talking to Salem’s about getting permission to install our system in the market; the owner brought up concerns about it impacting his business, along with some safety concerns, which he wants to discuss in person. I also got permission from him to borrow a few carts from Scotty’s Market for our in-person demo, so we will be able to set up a realistic system for that as well.
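A rough sketch of the edge-detection idea, assuming OpenCV; the Canny thresholds and helper names are placeholders, not the tuned values:

```python
def fullness_ratio(item_area, cart_area):
    """Fraction of the cart's interior covered by detected item contours."""
    if cart_area <= 0:
        raise ValueError("cart area must be positive")
    return min(item_area / cart_area, 1.0)

def estimate_fullness(cart_crop, cart_area):
    """Rough cart fullness from a cropped cart image: blur to suppress noise,
    find edges with Canny, then sum the contour areas of the items.
    OpenCV is imported lazily since this only runs where cv2 is installed."""
    import cv2
    gray = cv2.cvtColor(cart_crop, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    item_area = sum(cv2.contourArea(c) for c in contours)
    return fullness_ratio(item_area, cart_area)
```

“Playing with the numbers” in practice means tuning the blur kernel and the two Canny thresholds until the contours track real items rather than cart mesh or shadows.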

Progress

After talking to Salem’s Market, I need to schedule an in-person meeting with the owner, who requested that our advisor come as well, due to his concerns about how this system will impact his business. His biggest concerns are how cameras make people of minority groups feel and, if he were to give us access to his pre-installed cameras, whether their security would be compromised. I hope to resolve this in the following week so that we can move on with more testing; in the meantime, I am working on making the modules more accurate, and I think that, based on our new daily plan, we are on track.

Team Status Report for 3/30

Risk Mitigation 

A notable risk for our project is having enough testing data. We are combating this risk by using RPis to capture camera footage and store chunks of it for testing. Since we have gotten approval to test our project at Salem’s Market, we will fortunately be able to accomplish this.

 

Design Changes

We are now adding an RPi for each checkout line to capture the camera footage and store it for use in testing.

 

Schedule Changes 

Nothing has changed from last week in terms of our schedule; our Gantt chart remains unchanged.



Brian’s Status Report for 3/30

Accomplishments 

My model finished training, and last Sunday Simon and I went to Giant Eagle to film some videos for testing purposes. However, running my throughput code on the video made me realize that the model performs very poorly on live video, not recognizing even a single item. I then trained on another dataset for 100 epochs; the results are below:

  

Even after training on the second dataset, the accuracy on live video is quite poor. Giant Eagle also has advertisements and a design on their conveyor belts, which could have caused the accuracy to drop. Below is an image showing only one thing being recognized as an object, which is clearly far too inaccurate for what we want to do.

Therefore, I started working on edge detection to see if there was a less inaccurate way to calculate throughput and count items. Below are my results from trying to remove noise using Gaussian blur and Canny. Words on the belt are still being detected, which suggests it will be very difficult to continue with this approach as well.

In terms of interim demo progress, I will pick up an RPi and cameras as they become available and work on getting the RPi running with two cameras.

Progress

I am still a bit behind schedule. Tomorrow I will try to finish the integration on the software side so that we have something to demo. As for throughput, if I can’t make significant progress by training for more epochs or with edge detection, I will need to pivot to adding pictures of items, specifically on conveyor belts from Salem’s Market, to the training dataset.