Allen’s Status Report for April 2, 2022

  1. Met with our group, TA, and professor to discuss the status of our project and talked about what to expect for the upcoming interim demo
  2. Added a global data set to the web application as well as a view for new users to pick items they want analyzed by the object detection algorithm
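
A simplified sketch of what that item-selection view (item 2 above) could look like in Flask; the global data set, route name, and in-memory selection store here are placeholders for illustration, not our actual implementation:

    from flask import Flask, request

    app = Flask(__name__)

    # Hypothetical stand-ins: a prescreened global data set and a per-user store.
    GLOBAL_ITEMS = ["cereal", "soup", "pasta", "rice"]
    user_selections = {}  # username -> list of chosen items

    @app.route("/select-items/<username>", methods=["GET", "POST"])
    def select_items(username):
        if request.method == "POST":
            chosen = request.form.getlist("items")  # all checked boxes
            user_selections[username] = [i for i in chosen if i in GLOBAL_ITEMS]
        boxes = "".join(
            f'<label><input type="checkbox" name="items" value="{item}" '
            f'{"checked" if item in user_selections.get(username, []) else ""}>'
            f"{item}</label><br>"
            for item in GLOBAL_ITEMS
        )
        return f'<form method="post">{boxes}<button>Save</button></form>'

    if __name__ == "__main__":
        app.run(debug=True)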

I believe that our progress is behind. With roughly three weeks left, I think we have our work cut out for us. I plan to put in a lot of work during Carnival, and to be safe I want our team to begin full integration and testing of our systems with two weeks to go.

During this upcoming week, I plan to do the following:

  1. Continue researching WebSockets, since they are a potential bridge (besides hosting the web application in the cloud) for connecting our subsystems during integration; a sketch of the idea is at the end of this list
  2. Continue making progress on the web application to be ready for integration next week (hopefully)
  3. Discuss what/how to present our interim demo (Sunday night)
  4. Figure out how to set up our system and shelf/items
    1. i.e., we will probably need to buy/find/make a shelf
  5. Assist Lucky and Takshsheel with anything that they need
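
For item 1, a minimal sketch of the WebSocket bridge idea, using the third-party websockets package. The JSON message format and port are assumptions for illustration, not a finalized protocol:

    import asyncio
    import json

    import websockets  # third-party package: pip install websockets

    async def handle_subsystem(websocket, path=None):
        # A subsystem (e.g., the detection script) connects and streams results.
        # (The path argument is only passed by older versions of the library.)
        async for message in websocket:
            update = json.loads(message)  # e.g. {"item": "cereal", "count": 3}
            print("received update:", update)

    async def main():
        # The web-application side listens; the other subsystems would connect as
        # clients via websockets.connect("ws://<server-ip>:8765") and send JSON.
        async with websockets.serve(handle_subsystem, "0.0.0.0", 8765):
            await asyncio.Future()  # run forever

    if __name__ == "__main__":
        asyncio.run(main())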

Lucky’s Status Report for April 2, 2022

This week I did the following tasks:

  • Completed a testable phase 2 object detection subsystem 
  • Ran tests with SIFT
    • Lowe's ratio test (all else held constant)
      • I tweaked the Lowe's ratio in increments of 5% from 0% to 100%
    • Match threshold test (all else held constant)
      • I tweaked the threshold value in increments of 10 from 0 to 200
    • Combination test (tweaked both Lowe's ratio and the match threshold to look for the best-accuracy parameters); these sweeps are sketched below
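
To make the sweeps concrete, here is a rough sketch of the structure of these tests. The good_match_count/detected helpers and the handling of the labeled test images are simplified placeholders, not the exact test harness:

    import cv2
    import numpy as np

    sift = cv2.SIFT_create()
    bf = cv2.BFMatcher(cv2.NORM_L2)

    def good_match_count(query_img, scene_img, lowe_ratio):
        """Count SIFT matches that survive Lowe's ratio test."""
        _, des_q = sift.detectAndCompute(query_img, None)
        _, des_s = sift.detectAndCompute(scene_img, None)
        if des_q is None or des_s is None:
            return 0
        pairs = bf.knnMatch(des_q, des_s, k=2)
        return sum(1 for p in pairs
                   if len(p) == 2 and p[0].distance < lowe_ratio * p[1].distance)

    def detected(query_img, scene_img, lowe_ratio, match_threshold):
        """Call the item 'present' once enough good matches are found."""
        return good_match_count(query_img, scene_img, lowe_ratio) >= match_threshold

    # Lowe's ratio sweep: 0% to 100% in 5% increments (match threshold fixed).
    for ratio in np.arange(0.0, 1.0001, 0.05):
        ...  # run detected(...) over the labeled test images and record accuracy

    # Match-threshold sweep: 0 to 200 in increments of 10 (ratio fixed).
    for threshold in range(0, 201, 10):
        ...  # run detected(...) over the labeled test images and record accuracy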

 

In terms of schedule, I had to miss the in-person meeting on Wednesday of this week, but I worked as soon as I could that day to make up for the missed time. In addition, I was able to make more progress on object detection and ran some tests.

 

What's Next

  • Prepare to present for the initial interim demo
  • Detector selection process
    • Compare the detectors with one another and select the one with the best combination scores that also falls within the user latency requirements
  • Phase 3 object detection
    • Integrate my subcomponent with the others' and the camera module

Team Status Report for March 26, 2022

An overview of what we did:

  • We each attended the ethics seminar in person, and that led us to some design-choice thoughts and considerations we had not gotten to on our own
    • The main one was how users customize which items our system is able to recognize
      • Different approaches raised different security and user-requirement concerns, e.g., users updating the global data set
  • We had a couple of meetings and conversations to go over our design changes and considerations
    • We decided to have a global data set of images that we have prescreened for security/ethical issues and tested to meet user requirements
      • Then users can select from our global data set
      • In a real-world environment, we would institute a system for users to request additional items for us to screen and add to the global data set

In terms of what we will do this upcoming week, we hope to do the following:

  • Meet to iron out the specific implementation of our integration components
    • Get the Raspberry Pis running
    • Finish subsystems so we can move to testing

The most significant risks that we know of thus far are:

  • Interconnection between our subsystems
    • We have a high-level idea of how the whole system works, but now we have to begin implementing those connections programmatically in our code

Changes to our schedule:

  • There are no changes to our integration schedule of late; we are proceeding as planned
  • Takshsheel is adjusting his schedule: he initially wanted the counter completed by this week, but that is now moving to next week.

Lucky’s Status Report for March 26, 2022

This week I did the following tasks:

  • I continued to work on the detection script
    • I had to reconsider and adjust the code I have been working on, given the concerns explained in the following section, “Things I ran into worth considering”
  • Raised my concerns and spoke to my team about the design choices

 

Things I ran into worth considering:

  • User item customization
    • Cons:
      • Can reduce accuracy of system
      • Can introduce ethical issues
    • Pros:
      • Enables users to customize to their store needs
      • A faster system that does not have to search through a large set of products that aren't even available in the store
      • A system that has all of the store’s available products
    • Possible solutions:
      • Allow store owners to capture new images from their live feed i.e. add new items to dataset from live feed
        • Cons:
          • Can introduce ethical issues
            • Misuse: unsolicited images taken of customers, i.e., invasive/private images
            • If images are added globally
              • Need additional infrastructure to protect against things such as predatory images that intentionally mismatch, inappropriate images, etc.
          • Requires a separate local data storage abstraction / subsystem if images are not uploaded globally
          • Will most likely add complexity that could be avoided with a different design choice
          • Can reduce accuracy of system
      • Have a universal set of items that we have tested and pre-uploaded, with the data kept on our side and controlled by us, then allow local systems to select from those pre-validated items
        • Cons:
          • We may not have all items that a store needs
          • We will have a large data store that may contain more items than users even use, e.g., what if our customer base is mostly produce, but we have numerous items in home improvement => inefficient data storage
          • We have to add the necessary back-end infrastructure and integration with the detection system to handle that => may be something to consider after MVP
        • Pros:
          • No security concerns of uploaded items / images from users
          • No accuracy concerns of uploaded items / images from users
          • No ethical concerns of uploaded items / images from users

 

In terms of schedule, depending on how testing goes once this second phase of object detection is complete, I think I should be on track. If testing goes unfavorably, I will have to pick up my pacing somewhat.

 

What’s Next:

  • Now that we have solidified an approach for the item customization, I am going to move forward with those design choices in mind and finish the object detection version that I am working on

Allen’s Status Report for March 26, 2022

  1. Discussed ethical concerns/precautions that we should consider for our project amongst our team and fellow classmates during our ethics lecture
  2. Researched various ways to set up our wireless system – I have concretely decided on what to do and have the resources ready to start up the system, besides one part that was ordered tonight (the last piece of the system that needed to be purchased)
  3. Discussed in depth the details of integration, along with making our system more accessible to more stores in general. Among other things, one major decision was to add a global item data store for all stores and aisles and allow users to choose the aisles/items to view on their UI.
  4. In regard to the previous point, I mapped out how that would look on the web application in terms of code and design

I believe that our progress is a bit behind – the interim demo is less than 10 days away and our final demo is in about a month. That being said, we want to keep those deadlines in mind and finish as early and as robustly as possible.

During this upcoming week, I plan to do the following:

  1. Researching WebSockets, since they are a potential bridge (besides hosting the web application in the cloud) for connecting our subsystems during integration
  2. Adding point #3 above to the web application
  3. Finishing final parts of the web application (i.e. some UI and routing for the page that will show the data to the user)
  4. When the last piece of the wireless system is here by mid-week, I’ll shift my focus to fleshing out the wireless system (the first half of the week will be web application dominant)
  5. Figuring out how to set up our system and shelf/items for our interim demo
  6. Assisting Lucky and Takshsheel with anything that they need

Lucky’s Status Report for March 19, 2022

This week I did the following tasks:

  • I finished producing a functional SIFT matching script that detected matches and produced results between some test images
  • I ported all of the object detection code I’ve been working on to a version control system to prep for collaboration and integration
  • I began a new Python file/script to run continuously on a system in order to prepare something that can function in production
    • I cleaned up the code, and modularized sections for readability and testing of subcomponents
      • This included modularizing the SIFT detection portion so we can more easily swap the detection algorithm out for other algorithms when benchmarking which works best for our use case
      • I organized global variables so the thresholds and other customizable portions of the code are easily accessible for benchmarking and optimization; a sketch of this structure is below
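
A minimal sketch of what this modular structure looks like; the exact names and default values here are illustrative rather than the final code:

    import cv2

    # --- Tunable globals, grouped so benchmarking tweaks live in one place ---
    DETECTOR_NAME   = "SIFT"   # swap to "BRISK" or "ORB" when benchmarking
    LOWE_RATIO      = 0.75     # Lowe's ratio-test threshold
    MATCH_THRESHOLD = 40       # minimum good matches to declare a detection

    def make_detector(name=DETECTOR_NAME):
        """One place to swap the feature detector in or out."""
        factories = {
            "SIFT": cv2.SIFT_create,
            "BRISK": cv2.BRISK_create,
            "ORB": cv2.ORB_create,
        }
        return factories[name]()

    def make_matcher(name=DETECTOR_NAME):
        """SIFT uses float (L2) descriptors; BRISK/ORB are binary (Hamming)."""
        norm = cv2.NORM_L2 if name == "SIFT" else cv2.NORM_HAMMING
        return cv2.BFMatcher(norm)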

 

In terms of schedule, it felt like I finally began catching up on the slack I had built up in prior weeks. I now have a clearer trajectory of what I want to get done soon.

 

What's Next

  • I want to finalize the transition to an almost production-ready subsystem of the object detection algorithm for our MVP
    • This entails finishing a functional main script that runs the detection and matching continuously with prompts from the terminal
      • The terminal prompts will serve as a trigger mechanism to simulate the trigger from the motion detection subsystem
      • There is also a capture simulation to simulate capturing new images from the shelf camera for detection/matching; a sketch of this loop is below
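
A rough sketch of the shape of that main script; the test image name and the run_detection placeholder are illustrative only:

    import cv2

    def simulate_capture(path="test_shelf.jpg"):
        """Stand-in for grabbing a new frame from the shelf camera."""
        return cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    def run_detection(frame):
        """Placeholder for the SIFT detection/matching pipeline."""
        print("frame shape:", None if frame is None else frame.shape)

    def main():
        while True:
            command = input("Enter = simulate motion trigger, q = quit: ")
            if command.strip().lower() == "q":
                break
            run_detection(simulate_capture())

    if __name__ == "__main__":
        main()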

Team Status Report for March 19, 2022

An overview of what we did:

  • Wrote our Ethics assignments and discussed what we wrote amongst ourselves
  • Received feedback on the design report review and have noted those changes to be implemented for our final version of the report
  • Progress was made in the background subtraction for motion / people detection
  • Progress was made for the object detection as well as a potential solution to our SIFT / SURF issue
  • Ordered and received the remaining parts that we need for one complete wireless system (we were missing an SD card and a camera tripod)
  • Completed essentially 80% of the web application before spring break

In terms of what we will do this upcoming week, we hope to do the following:

  • Determine how our set-up will look for the interim demo (i.e. how are we going to set up the shelf?)
  • Set up the wireless system and attempt to send dummy data to a local machine (i.e. one of our laptops)
  • Wrap up everything on the web application that doesn't require the input data from the detection algorithms (i.e., designing the UI for the view that shows the data to the user, linking some more URL routes, etc.)
  • Continue progress on detection for both motion/people and objects

The most significant risks that we know of thus far are:

  • The most significant risk thus far is still the potential that we will have to work with two different camera systems, considering the scarcity of Raspberry Pis right now
    • We might have to use a laptop and webcam and switch to a different control design
  • We are not sure how well the ARDUCAM lenses will work with images (i.e. we believe that it is a fisheye lens and we may need to order another type of lens if that doesn’t work out with object detection)
    • We will determine how well the current fisheye lens works with object detection this week.
  • Another potential risk/consideration is integration: we have made progress in detection, but we need to really think through how our system will actually use the detection in an autonomous, integrated manner
  • A risk in the OpenCV background-subtraction portion is that we have not yet resolved how to keep track of how many people are on the screen. Our mitigation plan would probably involve getting help from our instructors and professors at CMU, plus research beyond that, to figure out how to extract this information from the grayscale foreground mask; one possible approach is sketched below.
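
One possible approach we may try: threshold the MOG2 foreground mask and count large contours as people. The area threshold and video source are assumptions, and this is only a starting point, not a validated solution:

    import cv2

    cap = cv2.VideoCapture(0)  # webcam as a stand-in for the aisle camera
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    MIN_PERSON_AREA = 5000     # needs tuning for camera distance/resolution

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # grayscale foreground mask
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels (127)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        people = sum(1 for c in contours if cv2.contourArea(c) > MIN_PERSON_AREA)
        cv2.imshow("foreground", mask)
        print("rough people count:", people)
        if cv2.waitKey(30) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()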

There have been no changes to our schedule, but in terms of the design, we realized through research while setting up our wireless system that we will need an SD card to run the RPI and a tripod to prop up the camera. As a result, we ordered both of those parts this week and they have recently arrived. We will be setting up the system this week.

Allen’s Status Report for March 19, 2022

  1. Worked on RPI design/research/ordering more parts and the Ethics assignment on Monday and Wednesday
    1. ~5 hours
  2. During the week before spring break, I spent roughly 18-20 hours setting up the web application (the login/register pages are complete with user authentication, the homepage is set up, and the UI is nearly finished) and writing up my portions of the design review document.

I believe that our progress is slightly behind – we got a lot of work done before spring break, but we've had some slack time since break. The necessary parts for the wireless transfer system arrived at the tail end of the week (as I research more, I keep realizing that we need little things along the way, such as an SD card, a tripod for the camera, etc.), so I'll be focusing on that this week.

During this upcoming week, I plan to do the following:

  1. Setting up the wireless transfer system now that all of our parts for one system are here (our other RPI will hopefully arrive during the first week of April – I say hopefully because it might get backordered, but that is the expected date of arrival)
  2. Finishing final parts of the web application (i.e. some UI and routing for the page that will show the data to the user)
  3. Figuring out how to set up our system and shelf/items for our interim demo
  4. Assisting Lucky and Takshsheel with anything that they need

Lucky’s Status Report for February 26, 2022

This week I did the following tasks:

  • I finished a BRISK algorithm version
    • Last week I ran into errors using SURF and SIFT, where OpenCV kept giving runtime errors stating that they were patent-protected/copyrighted, so I proceeded with BRISK
    • Finished the BRISK implementation that extracted features from images of cereal and compared them against images containing cereal
      • I tried to tweak the filtering algorithm I used – Lowe's ratio test – and some other parameters to see if I could distinguish one cereal box from a group of cereal boxes, but the process did not seem to be productive
        • For now, I plan to instead have the system detect whether a cereal is present rather than, for example, Honey Nut Cheerios vs. regular Cheerios, then branch into more detailed filtering in the future if possible, since it is not part of our MVP
  • Attended the presentation, asked questions and took notes
    • One important takeaway was that another group actually was able to use SIFT
      • We had previously pivoted from SIFT to BRISK because, when I tried to use SIFT, OpenCV kept giving me an error stating that SIFT is patented/copyrighted software so I could not use it, but the other team said they were able to use it
    • I asked for their requirements file because I suspect that the patent issue can be avoided by using older versions of OpenCV and/or Python
      • I began attempting to develop a detection algorithm with SIFT similar to what I did with BRISK; a small sketch of the fallback idea is below
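
A small sketch of the fallback idea while we sort out the OpenCV version question; this is just a probe of what the installed build supports, not our detection code:

    import cv2

    def pick_detector():
        """Try SIFT first; fall back to BRISK if this OpenCV build refuses."""
        try:
            return "SIFT", cv2.SIFT_create()
        except (AttributeError, cv2.error):
            # Some builds either lack cv2.SIFT_create or raise the
            # patented/"nonfree" error we ran into.
            return "BRISK", cv2.BRISK_create()

    name, detector = pick_detector()
    print(f"OpenCV {cv2.__version__}: using {name}")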

 

In terms of schedule, it felt like I did more work than in prior weeks, but that could be because I began shifting from research to implementation. As always, I have room to improve my time management, but I do not feel any more behind than last week.

 

The main ways I intend to catch up and do better this upcoming week are the following:

  • Finish a SIFT detection comparable to what I did with BRISK
  • Begin designing and implementing a process that actually runs the detection over a pool of images, to simulate the algorithm looking for multiple items rather than fixed test items/images

Allen’s Status Report for February 26, 2022

During this week, I worked on the following (in chronological order):

  1. Worked on Design presentation with my team during Sunday night
    1. ~2-3 hours
  2. Reviewed/conducted peer reviews for the other teams during class time on Monday and Wednesday
    1. ~3 hours
  3. Went through a comprehensive Flask tutorial, researched and planned out how to upload to/download from an AWS S3 bucket (sketched after this list), researched how to use an RPI with the ARDUCAM, and researched extra parts that we need to order (camera module and ribbon for RPI to ARDUCAM)
    1. ~5 hours
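
A minimal sketch of the planned S3 upload/download using boto3; the bucket name and keys are placeholders, and credentials are assumed to come from the standard AWS configuration rather than being hard-coded:

    import boto3  # pip install boto3

    s3 = boto3.client("s3")
    BUCKET = "shelf-monitor-images"  # hypothetical bucket name

    def upload_image(local_path, key):
        """Push a captured shelf image up to S3."""
        s3.upload_file(local_path, BUCKET, key)

    def download_image(key, local_path):
        """Pull an image back down, e.g., for the detection script."""
        s3.download_file(BUCKET, key, local_path)

    if __name__ == "__main__":
        upload_image("capture.jpg", "aisle1/capture.jpg")
        download_image("aisle1/capture.jpg", "capture_copy.jpg")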

In the past week I had two exams, so my output was not as high as I would have liked it to be. This upcoming week should be much freer, allowing me to put in 12+ hours for sure.

I believe that our progress is behind, and I shoulder my fair share of the responsibility. The key hindering factor for me (besides having midterms the past two weeks) is that I tend to work mostly at the tail ends of the week. This week, I will work on at least three of the five weekdays for 2+ hours outside of class time. My goal output this upcoming week is 15+ hours.

During the next week, I plan to do the following:

  1. Setting up the backend and frontend of our Flask app
  2. Sending dummy data from the RPI to a local machine (i.e., since we need to order a camera module, we cannot take pictures yet; thus, I'll send over dummy data to get used to using the RPI so that once our camera module comes in, we'll be good to go) – a sketch of this is at the end of this list
  3. Assisting Lucky and/or Takshsheel on object/human detection
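
A rough sketch of what the dummy-data step could look like: the RPI posts a fake detection result to a laptop over the local network. The laptop address and the /update endpoint are placeholders for whatever we end up exposing on the Flask side:

    import time

    import requests

    LAPTOP_URL = "http://192.168.1.50:5000/update"  # hypothetical laptop IP + Flask route

    while True:
        dummy = {"aisle": 1, "item": "cereal", "count": 3, "timestamp": time.time()}
        try:
            resp = requests.post(LAPTOP_URL, json=dummy, timeout=5)
            print("sent dummy update, status:", resp.status_code)
        except requests.RequestException as exc:
            print("send failed:", exc)
        time.sleep(10)  # one fake reading every 10 seconds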