Team Status Report for 4/27

Risk Mitigation 

Our biggest risk this week is not being able to test enough to fine-tune our system for accuracy. We want to collect a variety of data to make sure our system is robust and to have the relevant information for our final poster and report. We plan to test at Salem’s almost every day this week, at different times of day (sometimes in the morning, when fewer people are around, and sometimes at night from around 5-8 PM, when traffic is greater). This way we will have a variety of tests to draw on when it is time to write about testing in the final report.

Design Changes 

We are exploring another way to measure throughput. Instead of our original overhead camera angle, which some cashiers were uncomfortable with, we are looking into a different angle on or near the conveyor belt. However, this change is not final yet.

Schedule Changes 

We have no schedule changes as of this week. 

Tests 

Below are some results from testing upload and download times when recording videos of different lengths. Interestingly, there is a spike in upload time from 2-second to 3-second videos, but the internet connection this was tested on (Brian’s) is spotty, which may explain the abrupt increase. Upload and download times also plateau between 5-second and 6-second videos. Based on these results, we decided it would be best to use 2-second videos, since their upload and download times best fit our latency constraint (< 5 seconds to compute a result after initially processing the video feed). Videos up to 4 seconds would also be possible, but the spike from 2-second to 3-second videos is a deterrent.

Duration      Upload Time (averaged over 10 trials)   Download Time (averaged over 10 trials)
1 second      0.61 s                                  0.32 s
2 seconds     0.92 s                                  0.47 s
3 seconds     1.35 s                                  0.53 s
4 seconds     1.52 s                                  0.55 s
5 seconds     1.61 s                                  0.57 s
6 seconds     1.65 s                                  0.59 s
7 seconds     1.80 s                                  0.68 s
8 seconds     1.92 s                                  0.72 s
9 seconds     2.21 s                                  0.76 s
10 seconds    2.30 s                                  0.88 s
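The per-duration averages in the table can be collected with a small timing harness like the sketch below. `transfer_fn` stands in for whatever upload or download call is being measured (e.g. a boto3 `upload_file` call); it is an illustration, not our exact test code.

```python
import time
from statistics import mean

def average_transfer_time(transfer_fn, trials=10):
    """Average the wall-clock time of a transfer callable over
    several trials, mirroring the 10-trial averages in the table."""
    elapsed = []
    for _ in range(trials):
        start = time.perf_counter()
        transfer_fn()  # e.g. an S3 upload or download of one video clip
        elapsed.append(time.perf_counter() - start)
    return mean(elapsed)
```

Running this once per video duration (1 s through 10 s clips) produces one row of the table.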

In terms of testing relative fullness, we have tested with a variety of pictures and footage from Salem’s to check the accuracy of the module. From these results, we determined that the fullness calculations need some tweaking to improve accuracy. The module performs especially poorly on a very full cart, as shown below. We have tried decreasing the Gaussian blur, which improved accuracy for these very full carts to a degree, but the calculations are still somewhat unreliable, so we will need to adjust accordingly.



Shubhi’s Status Report for 4/20

Achievements

I worked on increasing the accuracy of cashier throughput: I worked through different approaches, implemented them, and tested them to find the most accurate one for our use case. After that, I worked on a module that determines when a person is done with the line, which I also tested a little. We also went to Salem’s, where I built structures to attach the cameras to so they could be positioned around the checkout counters. At Salem’s I also gathered more data to individually test the fullness acquisition module for accuracy.

Progress

Each individual component is working, and our system also runs end to end, but we are running into issues with testing at Salem’s due to concerns employees have had about the cameras we set up. We are in the process of working something out with Salem’s, but it is limiting the amount of testing we can do right now. I think the next big thing is to work on demoing and testing in a self-constructed grocery checkout lane setup.

New Experiences and Learning Strategies

During this project I have gained a lot of new skills. One of the biggest is learning to train a model for object detection, which I picked up from examples and articles online. Another thing I learned was how to use a SQL database. I never took a class on it, so I relied on documentation and sites like Stack Overflow to accomplish my goals for the database in this project. For many of these new skills, my strategy was to research what I wanted to achieve, see if anyone had done something similar, and adapt that to accomplish what I wanted.

Team Status Report for 4/20

Risk Mitigation 

A risk for us in gathering test data is making sure we go at times when there is a decent amount of traffic at Salem’s so we can get a sufficient amount of testing data. We have already gone to Salem’s a few times when it wasn’t busy enough to set up three checkout lanes. We are mitigating this by testing near peak hours instead of early on a weekday morning, when only one or two checkout lanes are open. Furthermore, if we do go at a lower-traffic time, we want at least two checkout lanes running, so that our system has enough traffic to compute the speeds of two different lanes and compare them properly.

Design Changes 

We haven’t made any changes to our design. 

Schedule Changes 

We haven’t made changes to our schedule as of this week, since we already have a day-to-day schedule. 

Brian’s Status Report for 4/20

Accomplishments

This week, I went to Salem’s multiple times to try to get testing data. On Monday, Simon and I went twice (the second time with Shubhi as well) to get WiFi on the RPis up and running and to debug an error with S3 bucket downloading/uploading. In the morning we ran into an issue where the public WiFi has an “OK” prompt, which effectively blocks connections on a Raspberry Pi. However, Salem’s was kind enough to give us access to their private network, so we were able to set up WiFi on all of the Raspberry Pi 4s. In the downtime between our trips to Salem’s, I worked with Simon to make detecting the number of people in a line more robust and at least somewhat functional. All three of us then went on Thursday to set up our full system and let it run, but we ran into issues debugging (we needed to copy code to all of the RPis and then change minute aspects so each uploaded different data) and fixing the camera angles, since the RPi camera modules have a very short connector to the RPi itself. Today we went again, but we weren’t able to collect data from the full system running because a few of the employees there were uncomfortable with being recorded. Instead, we took pictures of people’s shopping carts and used an empty checkout lane to record videos for testing throughput calculations. I also started working on the final presentation slides, since that’s coming up.

Progress

In terms of progress, I am now a few days behind the day-to-day schedule I created, mostly because setting up the integrated system at Salem’s had so many setbacks that prevented us from running the full system for data collection. To catch up this upcoming week, we need to test our integrated system completely. I’ll help finalize individual component testing, and after that we can try to go on a day when the employees at Salem’s won’t be uncomfortable with their hands being recorded for throughput.

New Tools

In terms of new tools and technologies I learned over the course of this semester, I was unfamiliar with most of the technologies used in our project: I didn’t know how to use YOLO, had never used AWS S3 buckets, and had done very little programming with Raspberry Pis. My main method for learning these tools was to browse the available documentation, and checking various forum posts while debugging was also vital to learning how to implement things for this project. Sometimes the forum posts led to pitfalls (e.g., with the Raspberry Pis, a feature for adding additional WiFi connections was only supported on a legacy OS, so I spent a lot of time implementing the wrong solution).

Team Status Report for 4/6

Risk Mitigation 

A risk in our project that we are trying to mitigate at the moment is whether our intended platform for storing data will be sufficient in terms of upload/download speed. Since we plan to upload camera footage from Salem’s and immediately use and discard it in real time, the platform we choose must be fast enough for this RPi-to-laptop data transfer to work properly. Therefore, we will test uploading data from our RPis to the cloud with AWS S3 buckets, and if those do not meet our requirements, we can pivot to Dropbox or other distributed storage alternatives.

 

Design Changes

Since we are using RPis to capture real-time data from Salem’s, we are now using AWS as a middleman in the content delivery process from the store to our remote laptops, which can then use the data for testing our system.

Schedule Changes 

We have created a rough draft schedule for the last few weeks of the semester to stay on track: 

 

April 3rd, 2024 (Wed)
- Brian: Fix environment for cv2.imshow, look into options for RPi data transfer
- Shubhi: Purchase requests for cameras/displays/RPis
- Simon: RPi setup

April 4th, 2024 (Thu)
- Brian: Fix environment for cv2.imshow, RPi setup
- Shubhi: Talk to Salem’s about using their cameras and WiFi and installing our own cameras; reimplement fullness acquisition
- Simon: RPi setup

April 5th, 2024 (Fri)
- Brian: RPi setup; set up server/method for data uploading from RPi
- Shubhi: Reimplement fullness acquisition
- Simon: RPi access of camera module and USB camera simultaneously

April 6th, 2024 (Sat)
- Brian: Set up server/method for data uploading from RPi
- Shubhi: Reimplement fullness acquisition
- Simon: RPi access of camera module and USB camera simultaneously

April 7th, 2024 (Sun)
- Brian: Set up server/method for data uploading from RPi; test uploading multiple cameras’ data per Raspberry Pi; implement fetching
- Shubhi: Improve accuracy of fullness acquisition
- Simon: RPi access of camera module and USB camera simultaneously; test uploading multiple cameras’ data per Raspberry Pi; implement fetching

April 8th, 2024 (Mon)
- Brian: Go to Salem’s; add pictures to dataset to train for throughput; implement line detection
- Shubhi: Go to Salem’s; improve accuracy of fullness acquisition
- Simon: Go to Salem’s; implement line detection; add pictures to dataset to train for throughput acquisition

April 9th, 2024 (Tue)
- Brian: Implement line detection, retrain throughput model
- Shubhi: Improve accuracy of fullness acquisition
- Simon: Implement line detection

April 10th, 2024 (Wed)
- Brian: Implement line detection, retrain throughput model
- Shubhi: Improve accuracy of fullness acquisition
- Simon: Implement line detection

April 11th, 2024 (Thu)
- All: Increase accuracy of throughput acquisition; go to Salem’s and install cameras

April 12th, 2024 (Fri)
- All: Test throughput, line, and fullness acquisition with footage from Salem’s

April 13th, 2024 (Sat)
- All: Test throughput, line, and fullness acquisition with footage from Salem’s

April 14th, 2024 (Sun)
- All: Test throughput and line acquisition with footage from Salem’s; integrate line detection

April 15th, 2024 (Mon)
- Brian: Integrate line detection; purchase toy carts and prop items; test integrated system
- Shubhi: Integrate line detection; test integrated system
- Simon: Integrate line detection; test integrated system

April 16th, 2024 (Tue)
- All: Test integrated system

April 17th, 2024 (Wed)
- All: Test integrated system

April 18th, 2024 (Thu)
- All: Figure out test configuration for final demos; dry runs with test configuration

April 19th-21st, 2024 (Fri-Sun)
- All: Slack

April 22nd, 2024 (Mon): FINAL PRESENTATION
April 23rd, 2024 (Tue)
April 24th, 2024: FINAL PRESENTATION

 

Testing 

Using data from Salem’s Market, we plan to validate our system by setting it up there, letting it run, and observing how often a shopper who uses the system gets out of the store, or to the front of a checkout line, faster than the next shopper. We will tally the number of users who get to the front of a checkout line or out of the store faster and divide by the total number of users to measure accuracy. Furthermore, for testing the speed of the system, we can programmatically measure how long the system takes from the start of computation to displaying the result, and check that it is under our design requirement.
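Both checks above reduce to very small computations; this sketch shows the shape of the bookkeeping (the 5-second budget comes from our latency constraint, and `compute_fn` is a placeholder for one end-to-end system run).

```python
import time

LATENCY_BUDGET_S = 5.0  # design requirement: < 5 s to compute a result

def accuracy(faster_users, total_users):
    """Fraction of users who reached the front of a lane (or exited
    the store) faster than the comparison shopper."""
    return faster_users / total_users

def meets_latency(compute_fn):
    """Time one end-to-end computation (start of computation to
    displayed result) and check it fits the latency budget."""
    start = time.perf_counter()
    compute_fn()
    return (time.perf_counter() - start) < LATENCY_BUDGET_S
```

For example, if 8 of 10 observed users came out ahead, accuracy is 0.8.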

Brian’s Status Report for 4/6

Accomplishments 

For this week, prior to Wednesday, I spent a lot of time integrating the software modules in preparation for our interim demo. I was able to make the main system run for a single counter using test videos from a trip to Salem’s, but I ran into an issue where OpenCV would not display any of the model predictions (with images) due to an environment issue. I was not able to fix this before the interim demo, but looking into it afterwards, it seems that cv2.imshow, which the backend uses to display images, is not thread-safe, so running two separate models for cart counting and item counting meant we could not see visually what was going on. In the meantime, I set up a Raspberry Pi and worked on our method of choice for uploading and fetching image/video data from the cameras. We are using an S3 bucket to store real-time footage captured from our RPis, then fetching that data and using it to compute results on our laptops. I set that up and have code that works for uploading generic text and Python files, but I haven’t been able to upload videos or images recorded from the RPi camera module, because my program fails when I import the Raspberry Pi camera module package.
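A common workaround for the cv2.imshow thread-safety issue is to have every model worker push its annotated frames onto a single queue and do all display on the main thread. This is only a sketch of that pattern (function and window names are hypothetical), not the fix we have shipped.

```python
import queue
import threading

# cv2.imshow is not thread-safe, so annotated frames from every model
# worker are funneled through one queue and drawn only on the main thread.
frame_queue = queue.Queue(maxsize=8)

def run_worker(window_name, annotate_fn, frame):
    """Inference worker: annotate a frame and hand it to the display queue."""
    frame_queue.put((window_name, annotate_fn(frame)))

def display_loop():
    """Main-thread display loop: the only place cv2.imshow is called."""
    import cv2
    while True:
        window_name, frame = frame_queue.get()
        cv2.imshow(window_name, frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
```

With this layout, the cart-counting and item-counting models can each run in their own thread without touching the GUI.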

Here is my code and the S3 bucket after running the code a few times on different files:

[Screenshot: upload code and the resulting S3 bucket contents]

I also wrote some code for downloading the files from the S3 bucket onto a laptop, which I haven’t tested yet:
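Since that download code is still untested, here is only a sketch of what such a routine might look like using boto3’s `list_objects_v2` and `download_file`; the function name, bucket, and prefix are placeholders rather than our actual code.

```python
import os

def download_all(s3_client, bucket, prefix, dest_dir):
    """Download every object under `prefix` in `bucket` into dest_dir
    and return the local paths. `s3_client` is a boto3 S3 client,
    i.e. boto3.client("s3")."""
    resp = s3_client.list_objects_v2(Bucket=bucket, Prefix=prefix)
    paths = []
    for obj in resp.get("Contents", []):
        dest = os.path.join(dest_dir, os.path.basename(obj["Key"]))
        s3_client.download_file(bucket, obj["Key"], dest)
        paths.append(dest)
    return paths
```

Passing the client in as a parameter also makes the routine easy to test against a stub before pointing it at the real bucket.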

Progress 

We have a new day-by-day schedule with many tasks so we can finish our project in time. I am somewhat keeping up with it for now, and I hope to finish the code for uploading real-time footage to the S3 bucket (and fetching it) soon. Today and tomorrow I will need to expand the current code so that it runs continuously and collects real-time footage instead of recording from the camera module for only 3 seconds, and also handles USB camera footage.

Testing 

In terms of testing the throughput calculations, I plan to use video footage, time the interval between the cashier beginning and finishing checkout of a batch of items, manually compute the throughput by dividing the number of items in the batch by the elapsed time, and then compare that figure with the throughput our module reports.
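The manual check described above amounts to two small formulas, sketched here for concreteness:

```python
def throughput(item_count, elapsed_s):
    """Items checked out per second for one batch."""
    return item_count / elapsed_s

def relative_error(module_value, manual_value):
    """How far the module's throughput is from the hand-computed one."""
    return abs(module_value - manual_value) / manual_value
```

For example, 12 items checked out over 30 seconds gives a manual throughput of 0.4 items/s, and a module reading of 0.5 against that would be a 25% relative error.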

Team Status Report for 3/30

Risk Mitigation 

A notable risk for our project is having enough testing data. We are combating this risk by using RPis to capture camera footage and store chunks of it for use in testing. Since we have gotten approval to test our project at Salem’s Market, we will fortunately be able to accomplish this.

 

Design Changes

We are now adding an RPi for each checkout line to capture the camera footage and store it for use in testing.

 

Schedule Changes 

Nothing has changed from last week in terms of our schedule; our Gantt chart remains unchanged.



Brian’s Status Report for 3/30

Accomplishments 

My model finished training, and this past Sunday Simon and I went to Giant Eagle to film some videos for testing purposes. However, running my throughput code on the video made me realize that the model performs very poorly on live video, not recognizing even a single item. I then trained on another dataset for 100 epochs, and the results are below:

[Training results after 100 epochs on the second dataset]

Even after training on the second dataset, accuracy on live video is quite poor. Giant Eagle also has advertisements and a design printed on their conveyor belts, which could have caused the accuracy to drop. The image below shows only one thing being recognized as an object, which is far too inaccurate for what we want to do.

Therefore, I started working on edge detection to see if there was a less inaccurate way to calculate throughput and count items. Below are my results from trying to remove noise using a Gaussian blur and Canny edge detection. Words on the belt are still being detected, which suggests it will be very difficult to continue with this approach as well.

In terms of interim demo progress, I will pick up a RPi and cameras as they are available and try to work on getting the RPi working with two cameras.

Progress

I am still a bit behind schedule. Tomorrow I will try to finish the integration on the software side so that we have something to demo. As for throughput, if I can’t make significant progress by training for more epochs or with edge detection, I will need to pivot to adding pictures of items on conveyor belts from Salem’s Market to the training dataset.



Team Status Report for 3/23

Risk Mitigation

One of the biggest risks for our project is testing our software modules and making sure they individually work prior to integration. To combat this, we will go to grocery stores and take video footage to test our modules against. OpenCV makes it very easy to read video files, so we can test individual modules without setting things up live. For throughput calculations, we plan to film store items moving on checkout conveyor belts and process the footage with our code to check whether the calculated throughput makes sense. Similarly, we will film carts moving through the grocery store to test our other modules once we finish implementing them.

 

Design Changes

Nothing has changed about our design as of this week; however, we are considering changing what the throughput calculating module writes to the database: instead of storing only a calculated throughput, we would store the item count and time elapsed, which makes it easier to maintain a correct running average.
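The motivation for storing (item count, elapsed time) pairs rather than precomputed throughputs can be shown in a few lines; this is a sketch of the arithmetic, not the database code itself:

```python
def running_throughput(rows):
    """True overall throughput from stored (item_count, elapsed_s) rows.
    Summing counts and times weights each batch correctly, unlike
    averaging the per-batch throughputs themselves."""
    total_items = sum(count for count, _ in rows)
    total_seconds = sum(secs for _, secs in rows)
    return total_items / total_seconds
```

For example, batches of 10 items in 5 s and 10 items in 20 s give 20 items over 25 s, i.e. 0.8 items/s, whereas naively averaging the per-batch rates (2.0 and 0.5) would wrongly give 1.25.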

 

Schedule Changes 

Nothing has changed about our schedule; our Gantt chart remains the same as before.



Brian’s Status Report for 3/23

Accomplishments 

This week, I managed to finish training my model after some delay (a 5,000-image dataset quickly got me booted off Google Colab for hitting GPU usage limits). I trained for 100 epochs total, but because of the usage limits I had to split this into two 50-epoch sessions: I trained for 50 epochs and then trained the resulting model for another 50. The resulting model has serviceable precision, reaching about 75-80%, and the validation batches detect grocery items relatively well.

The implementation of the throughput calculation module is now complete with the updated model. Simon and I will go to a grocery store tomorrow (Giant Eagle/Aldis), take a video of items moving on a conveyor belt, and test the throughput module.

Progress

I am still behind schedule, but I am currently working with Simon to fully flesh out the line detection module. This coming week, we will try to implement and test it fully for the interim demo, and start helping out with the relative fullness module, since that is very important.