Brian’s Status Report for 4/6

Accomplishments 

For this week, prior to Wednesday, I spent a lot of time integrating our software modules to prepare for the interim demo. I got the main system running for a single counter using test videos from a trip to Salem’s, but I ran into an issue where OpenCV would not display any of the model predictions (with images) due to an environment issue. I was not able to fix this before the interim demo, but after looking into it afterward, it appears that cv2.imshow, which the backend uses to display images, is not thread safe, so running two separate models for cart counting and item counting meant we could not visually see what was going on. In the meantime, I set up a Raspberry Pi and worked on our chosen method for uploading and fetching image/video data from the cameras. We are using an S3 bucket to store real-time footage captured from our RPis, then fetching that data to compute results on our laptops. I set that up and have code that works for uploading generic text and Python files, but I haven’t been able to upload videos or images recorded from the RPi camera module, because my program fails at import time when I import the Raspberry Pi camera module package.

Here is my code and the S3 bucket after running the code a few times on different files: 

 

I also wrote some code for downloading the files off the S3 bucket into a laptop, which I haven’t tested yet: 
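For reference, the upload/download calls might look roughly like the following. This is a sketch, not the code in the screenshots; the bucket name, key scheme, and paths are placeholders, and boto3 is imported inside the functions so the key helper can be exercised without AWS credentials:

```python
import time

BUCKET = "checkout-footage"  # placeholder bucket name, not our real one

def make_key(camera_id, filename):
    """Build a timestamped object key so chunks from the same camera don't collide."""
    return f"{camera_id}/{int(time.time())}_{filename}"

def upload(local_path, camera_id):
    """Upload one recorded file to the bucket; credentials come from the usual
    AWS config/environment variables."""
    import boto3
    s3 = boto3.client("s3")
    key = make_key(camera_id, local_path.rsplit("/", 1)[-1])
    s3.upload_file(local_path, BUCKET, key)
    return key

def download(key, local_path):
    """Fetch an object from the bucket onto the laptop for processing."""
    import boto3
    s3 = boto3.client("s3")
    s3.download_file(BUCKET, key, local_path)
```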

Progress 

We have a new day-by-day schedule with many tasks so that we can finish our project in time. I am mostly keeping up with it for now and hope to soon finish the code for uploading real-time footage to the S3 bucket (and fetching it). Over today and tomorrow I will need to expand the current code so that it runs continuously and collects real-time footage, instead of recording from the camera module for only 3 seconds, and so that it can use USB camera footage as well.

Testing 

In terms of testing the throughput calculations, I plan on using video footage: I will time the interval between the cashier beginning and finishing a batch of items, manually compute the throughput by dividing the number of items in the batch by the elapsed time, and then compare that value against the throughput our system reports.
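The ground-truth calculation itself is simple; as a sketch (units here are assumed to be items per minute, which is my choice for illustration):

```python
def throughput_per_minute(item_count, start_s, end_s):
    """Manually measured throughput: items in a batch divided by elapsed time,
    scaled to items per minute."""
    elapsed = end_s - start_s
    if elapsed <= 0:
        raise ValueError("checkout must end after it starts")
    return item_count / elapsed * 60.0

def percent_error(measured, ground_truth):
    """Relative difference between the system's output and the manual count."""
    return abs(measured - ground_truth) / ground_truth * 100.0
```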

Shubhi’s Status Report for 4/6

Achievements

We went to Scotty’s Market and gathered more footage for development and testing purposes, in addition to the footage from Salem’s Market. We also finished integrating all the existing modules so that the system can produce an output based on footage we currently have. I also finished redoing the fullness acquisition module to use edge detection and improve cart-fullness accuracy: I used edge detection and contours to estimate the area occupied by the items within the cart, and then tuned the parameters to increase accuracy. I have also been talking to Salem’s about permission to install our system in the market; the owner raised concerns about the system impacting his business, as well as some safety concerns, which he wants to discuss in person. I also got permission to borrow a few carts from Scotty’s Market for our in-person demo, so we will be able to set up a realistic system for that demo as well.

Progress

After talking to Salem’s Market, I need to schedule an in-person meeting with the owner, and he requested that our advisor come as well because of his concerns about how this system will impact his business. His biggest concerns are how cameras make people of minority groups feel and, if he were to give us access to his pre-installed cameras, whether their security would be compromised. I hope to resolve this in the following week so that we can move on with more testing; in the meantime, I am working on making the modules more accurate, and I think that based on our new daily plan we are on track.

Team Status Report for 3/30

Risk Mitigation 

A notable risk for our project is securing enough testing data. We are combating this risk by using RPis to capture camera footage and store chunks of it for testing. Since we have gotten approval to test our project at Salem’s Market, we will fortunately be able to accomplish this.

 

Design Changes

We are now adding an RPi to each checkout line to capture camera footage and store it for use in testing.

 

Schedule Changes 

Nothing has changed from last week in terms of our schedule; our Gantt chart remains unchanged.



Brian’s Status Report for 3/30

Accomplishments 

My model finished training, and this past Sunday Simon and I went to Giant Eagle to film some videos for testing purposes. However, running my throughput code on the video made me realize that the model performs very poorly on live video, not recognizing even a single item. I then trained on another dataset for 100 epochs; the results are below:

  

Even after training on the second dataset, the accuracy on live video is quite poor. Giant Eagle also has advertisements and a design printed on their conveyor belts, which could have caused the accuracy to drop. Below is an image in which only one thing is recognized as an object, which is far too inaccurate for what we want to do.

Therefore, I started working on edge detection to see if there was a less inaccurate way to calculate throughput and count items. Below are my results from trying to remove noise using a Gaussian blur followed by Canny edge detection. Words on the belt are still being detected, which suggests it will be very difficult to continue with this approach as well.

In terms of interim demo progress, I will pick up an RPi and cameras as they become available and work on getting the RPi running with two cameras.

Progress

I am still a bit behind schedule. I will try tomorrow to get the integration on the software side done so that we can have something to demo. As for throughput, if I can’t make significant progress training for more epochs or with edge detection, I will need to pivot to adding pictures of items specifically on conveyor belts from Salem’s Market to the dataset for training.



Simon’s Status Report for 03/30/24

Accomplishments

At the start of the week, Brian and I took videos at Giant Eagle to test our models, and my model for shopping cart detection works reasonably well on the footage we have. However, I realized that the accuracy would probably improve if the camera were mounted higher up than originally planned.

To improve the accuracy, I also tried to split the larger video into two 640×640 videos, because the model is trained on 640×640 images, but I couldn’t get the model to predict on two videos without some kind of error, despite running the processes in separate threads. I don’t think this will be necessary for the interim demo, and there may be better ways to improve accuracy (such as a higher camera angle and simply collecting more data from Salem’s Market, where we plan to deploy our system), so I will put this aside for now and run inference on the original large video instead.

Lastly, I changed the cart-counting method away from a single static bounding box that counts carts as they enter/leave. Instead, I use a static bounding box to find the first cart, then look for the next cart in a small region behind the first cart, and so on until the last cart, which should minimize errors from carts that pass by the line without entering it.
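The chaining idea above can be sketched in a few lines. This is an illustration, not the real module: detections are simplified to (x, y, w, h) boxes, the queue is assumed to grow in the +x direction, and the search-window gap is a made-up number.

```python
def count_line(carts, anchor_box, gap=150):
    """Count queued carts by chaining: start from a static anchor box, then
    repeatedly search a small window just behind the last matched cart.
    A detection matches a window if its center falls inside it."""
    def center(box):
        x, y, w, h = box
        return (x + w / 2, y + h / 2)

    def inside(pt, box):
        x, y, w, h = box
        return x <= pt[0] <= x + w and y <= pt[1] <= y + h

    count = 0
    window = anchor_box
    remaining = list(carts)
    while True:
        match = next((c for c in remaining if inside(center(c), window)), None)
        if match is None:
            return count  # chain broken: carts elsewhere are just passing by
        count += 1
        remaining.remove(match)
        x, y, w, h = match
        window = (x + w, y, gap, h)  # next window sits right behind this cart
```

A cart that is near the line but not part of the chain never falls inside any search window, so it is not counted.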

Progress

I plan to collect some real data from Salem’s Market this week and retrain my model for higher accuracy. If any component is still missing for a complete end-to-end system (for example, if our throughput calculations aren’t quite done), I will work on that as well.

Shubhi’s Status Report for 3/23

Achievements

We got confirmation from Salem’s Market that we can test the system there – they have 6 checkout lanes, but we will only be testing on 3. We are currently in the process of getting CMU approval to test at Scotty’s, but there is no answer yet due to legality concerns. I implemented relative fullness detection, but still need to test it.

Progress

Since we have Salem’s Market’s approval but not Scotty’s, we will be testing at Salem’s in the Strip District; it is a little far, so it is not the most optimal option and will take more time. We still need to integrate all components of the system, but that should only take a few days, leaving the rest of the week to test the system in time for the demo.

Simon’s Status Report for 3/23/2024

Accomplishments

I realized I had to retrain my model because I accidentally set the image size to 64×64 in the original, which made the model fail to produce accurate predictions on higher-resolution images. Scaling images/video down to 64×64 worked, but with lower confidence on predictions than I would have liked. With a new model trained at an image size of 640×640, the model works much better on higher-resolution images and predicts correctly with higher confidence than the previous model did on scaled-down input.

With the new model, I used YOLOv8’s object counting to count the number of shopping carts in a certain region of an image. I’ve tested it with stock footage to verify that it works on an extremely simple case, and Brian and I will go to grocery stores tomorrow to take video footage from our desired camera angle so that we can see if it works correctly.
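Counting carts in a region boils down to checking which detection boxes have their centers inside that region. Here is a sketch of that step; the model path, video filename, and region coordinates in the commented usage are placeholders, and the usage assumes the ultralytics package's YOLO interface:

```python
def count_in_region(xyxy_boxes, region):
    """Count detections whose center lies inside region = (x1, y1, x2, y2).
    Each box is (x1, y1, x2, y2) in pixel coordinates."""
    rx1, ry1, rx2, ry2 = region
    count = 0
    for x1, y1, x2, y2 in xyxy_boxes:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        if rx1 <= cx <= rx2 and ry1 <= cy <= ry2:
            count += 1
    return count

# Hypothetical usage with a trained YOLOv8 model (names are illustrative):
# from ultralytics import YOLO
# model = YOLO("cart_detector.pt")
# for result in model("checkout_line.mp4", stream=True):
#     carts = count_in_region(result.boxes.xyxy.tolist(), region=(0, 200, 640, 480))
```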

Progress

I’m hoping I’m close to being done with shopping cart detection (need to verify that it works and make changes accordingly), but if all goes well then I should be somewhat caught up. For next week, I will try to make sure the shopping cart detection is working and once I’m confident that it is, I can help Shubhi on relative fullness detection.

 

Team Status Report for 3/23

Risk Mitigation

One of the biggest risks for our project is testing our software modules and making sure they individually work prior to integration. To combat this, we will be going to grocery stores and taking video footage in order to test our modules. OpenCV makes it very easy to read video files and so this allows us to test our individual modules without having to set things up live. For throughput calculations, we plan on taking video footage of store items moving on the checkout line conveyor belts and processing the footage via our code to check if the calculated throughput makes sense. Similarly, we will take videos of carts moving in the grocery store in order to test our other modules when we finish implementing them. 

 

Design Changes

Nothing has changed about our design as of this week; however, we are considering changing what the throughput-calculation module writes to the database: instead of storing just a calculated throughput, we would store the item count and elapsed time, to more easily maintain a running average.

 

Schedule Changes 

Nothing has changed about our schedule; our Gantt chart remains the same as before.



Brian’s Status Report for 3/23

Accomplishments 

This week, I managed to finish training my model after some delay (a 5000-image dataset quickly got me booted off Google Colab for hitting GPU usage limits). I trained for 100 epochs total, but had to split it into two 50-epoch sessions because of those limits: I trained for 50 epochs and then trained the resulting model for another 50. The resulting model has serviceable precision, reaching about 75-80%, and the validation batches seem to detect grocery items relatively well.
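The split-session workflow can be scripted along these lines. This is a hedged sketch, not the notebook I actually ran: the dataset YAML, run names, and the yolov8n starting checkpoint are placeholders, and it assumes the ultralytics package's YOLO class and its default runs/detect output layout.

```python
def train_in_sessions(data_yaml, sessions=2, epochs_per_session=50, imgsz=640):
    """Train a YOLOv8 model in several shorter sessions, resuming each time
    from the previous session's best weights (useful when Colab GPU limits
    cut a single long run short). Returns the final weights path."""
    from ultralytics import YOLO  # imported lazily; requires ultralytics installed
    weights = "yolov8n.pt"  # pretrained starting checkpoint (placeholder choice)
    for i in range(sessions):
        model = YOLO(weights)
        model.train(data=data_yaml, epochs=epochs_per_session, imgsz=imgsz,
                    name=f"session_{i + 1}")
        # Next session resumes from this session's best checkpoint.
        weights = f"runs/detect/session_{i + 1}/weights/best.pt"
    return weights
```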

With the updated model, the implementation of the throughput calculation module is now complete. Simon and I will go to a grocery store tomorrow (Giant Eagle or Aldi), take a video of items moving on a conveyor belt, and test the throughput module.

Progress

I am still behind schedule, but I am currently working with Simon to fully flesh out the line detection module. This coming week, we will implement and test it for the interim demo, and start helping with the relative fullness module, since that is very important.



Team Status Report for 3/16

Risk Mitigation

Now that we are training several models for our CV algorithms, we realize that, due to time constraints, it is imperative to finish this as soon as possible. To do so, we are using YOLOv8, which is very user friendly, and we are training models on publicly available datasets whenever possible. For example, we are using publicly available datasets of shopping carts and of grocery store items to save time.

 

Design Changes

Nothing has changed about our design. We are still implementing throughput calculations for cashiers, relative fullness calculations for carts, and detecting how many people are lined up in the same manner as was determined before spring break in our design report. 

 

Schedule Changes

Our Gantt chart remains unchanged from before.