Lucky’s Status Report for February 19, 2022

This week I did the following tasks:

  • Attended the mandatory meeting, asked questions, and took notes
    • One important takeaway was understanding the nature of the course: we are building a recipe, testing which ingredients work best where, rather than making all the raw ingredients from scratch
    • Another takeaway was to consider the scarcity of Raspberry Pis and to begin thinking of alternatives
  • Began some actual code for the detection algorithms
    • Created a Jupyter notebook on Google Colab to test SIFT, SURF, BRISK, and color detection
    • Wrote some code and read through various docs
    • Learned about matching and feature extraction processes
      • Was introduced to FLANN and brute-force matching
    • Learned more about ways to speed up detection
      • In addition to grayscale, learned about converting to HSV
        • This isolates the color information in one channel (H, hue); I’m thinking this could speed up color detection
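The hue-channel idea can be sketched with Python’s standard library (`colorsys` stands in here for OpenCV’s `cv2.cvtColor(img, cv2.COLOR_BGR2HSV)`, which is what we would actually use; the colors are made-up examples):

```python
import colorsys

# Two shades of the same red: brightness differs, hue is identical.
bright_red = (1.0, 0.2, 0.2)   # RGB in [0, 1]
dark_red = (0.5, 0.1, 0.1)

h1, s1, v1 = colorsys.rgb_to_hsv(*bright_red)
h2, s2, v2 = colorsys.rgb_to_hsv(*dark_red)

# Hue matches even though brightness (V) does not, so comparing the
# single H channel is cheaper and more lighting-robust than comparing
# all three RGB channels.
print(abs(h1 - h2) < 0.01)  # True
print(abs(v1 - v2) > 0.1)   # True
```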


In terms of schedule, I made some more progress, but not as much as I anticipated, due to a couple of hiccups such as finding that SIFT / SURF are patented algorithms. To be honest, I also need to improve my time management.


The main ways I intend to catch up and do better this upcoming week are the following:

  • Continue testing the open-source options such as BRISK
  • See if we can find a workaround, e.g. using older versions of OpenCV / Python to implement SIFT
  • Break my schedule down to a more micro scale for the week, with smaller milestones, to manage my time better and avoid going down rabbit trails of non-essential research / information


This next week, I hope to complete a testable version of BRISK object detection for various grocery items. I also hope to begin shifting to interconnectivity requirements (i.e. Raspberry Pi, camera, computer, cloud) and the location of processing as we become more familiar with the algorithms we have tested.

Allen’s Status Report for February 19, 2022

During this week, I mainly researched web frameworks, databases, and object-detection algorithms; met with my team during class time; discussed over Messenger when we were home; and created an overall (and potentially near-final) design diagram of our product.

After doing research, I have decided that we will use Flask rather than Django for our web framework. Why? Even though I am familiar with Django and haven’t used Flask before, Django is very comprehensive, and for the purposes of our web application we don’t need all of the features it provides. As a result, a more lightweight framework (Flask) will be faster, and since it can do everything I need it to do, it made more sense to move forward with Flask. Also, Flask is Python-based, and Python is my most comfortable language.
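To illustrate how lightweight Flask is, here is a minimal sketch (the endpoint name and payload are hypothetical, not our final API):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical endpoint: the web app could expose detection results
# for the front end to poll.
@app.route("/api/items")
def list_items():
    return jsonify(items=["milk", "eggs", "bread"])
```

An entire JSON endpoint in a few lines; the equivalent Django setup would involve a project, an app, a urls module, and a view.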

As for the database, I decided to move forward with MySQL with the help of SQLAlchemy. This is because I have experience with MySQL, and SQLAlchemy provides a mature ORM layer on top of it. Essentially, SQLAlchemy is a library that lets users write Python code that translates to SQL. Using SQLAlchemy will improve my implementation speed since I am much more comfortable with Python than with raw SQL.
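A rough sketch of the SQLAlchemy workflow, assuming the library is installed (the table and column names are hypothetical since our schema is still undecided; SQLite in-memory keeps the sketch self-contained, whereas production would point the engine at MySQL, e.g. a `mysql+pymysql://...` URL):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

# Hypothetical table; the real schema has not been designed yet.
class GroceryItem(Base):
    __tablename__ = "grocery_items"
    id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False)

# In-memory SQLite for this sketch; swap the URL for MySQL in production.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(GroceryItem(name="milk"))
    session.commit()
    names = [row.name for row in session.query(GroceryItem)]
```

Everything here is plain Python: no hand-written SQL, which is the appeal for our team.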

In terms of object-detection algorithms, we have come across many. We decided that instead of trying to pick one to stick with for the semester, we are going to implement a few of the ones we have researched. Then we will evaluate the performance of each and choose the best one.

Lastly, I spent the tail end of the weekend (Friday and Saturday) creating an overall design diagram, trying to be as specific as possible. This diagram encompasses our entire project and includes submodules for modules that cannot be explained with just one block.

Overall, I spent roughly 10 hours this week on Capstone.

I believe my progress is behind schedule. I wanted to create the base web application and test out the RPI and ARDUCAM, but I did neither this week. I intend to work much harder next week to compensate so that I can get back on track and/or ahead of schedule. If I am not on track (based on the schedule we defined in the previous presentation) by this time next week, I will inform my TA/professor that I am behind and ask for advice on how to catch up.

In the coming week, I plan to achieve the following: watch a tutorial video on Flask/SQLAlchemy, create the base web application, start designing the front-end UI, test out the RPI/ARDUCAM (not just taking pictures, since that was supposed to be this week, but also testing the transfer of data to a local machine), and work on the design presentation.

Team Status Report for February 12, 2022

An overview of what we did:

  • Finished and presented our project proposal
  • Went over feedback, such as reconsidering which algorithm / process to use to track people, and took notes on it
  • Viewed others’ project proposals and gave feedback where appropriate; we also noted some tools and technologies we could consider for our own project
  • Continued to research tools for our project independently
  • We will meet to go over our research items and discuss next steps in the coming week


In terms of what we will do this upcoming week, we hope to do the following:

  • Decide which web framework we will use
  • Decide which database we will use
  • Decide whether processing will be done in the cloud or a local machine
    • Decide which cloud platform we will use
  • Decide which algorithm we will move forward with for object detection
  • Decide how we will detect when there has been motion in the aisle and which algorithm to use for people detection
  • Assign specific tasks and deadlines for preliminary implementation moving forward


The most significant risks that we know of thus far are:

  • Using algorithms that demand too much computing power or take too long on the systems we run them on
    • Mitigation: comparing processing in the cloud vs. on a local machine
  • Building an accurate trigger mechanism for when to run object detection
    • Mitigation: researching fast motion-detection processes, e.g. background subtraction
  • Poor understanding => poor implementation => non-functioning product
    • Mitigation: a thorough research stage, and asking questions on Slack to probe the experienced minds of the professors and TAs
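As a placeholder for the background-subtraction research, the trigger idea can be sketched with naive frame differencing in NumPy (the thresholds below are arbitrary illustrations, not tuned values):

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, pixel_thresh=25, area_frac=0.01):
    """Flag motion when enough pixels change between two grayscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed_fraction = (diff > pixel_thresh).mean()
    return bool(changed_fraction > area_frac)

# Synthetic frames: a bright block (a "person") appears in the second frame.
prev_frame = np.zeros((120, 160), dtype=np.uint8)
curr_frame = prev_frame.copy()
curr_frame[40:80, 60:100] = 200

print(motion_detected(prev_frame, curr_frame))   # True
print(motion_detected(prev_frame, prev_frame))   # False
```

A real implementation would likely use a proper background model (e.g. OpenCV’s MOG2 subtractor) to tolerate lighting changes, but a cheap check like this is the kind of fast gate we want before running full object detection.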


There were no changes made to the existing design or schedule.

Allen’s Status Report for February 12, 2022

For this week, I worked mainly on the tail ends (last Saturday/Sunday) and today (the current Saturday). On the front end of the week, I researched SIFT (i.e. how it works and its nominal speed/accuracy rates), how an RPI camera (i.e. the ARDUCAM) would be controlled by an RPI, how an RPI would transfer data to a web server, and an ideal web framework to use for the web app (the section that I am heading up). This took roughly 5-6 hours of work; including class time with our presentations and peer reviews, that’s an additional ~4 hours. Today, I looked over how an RPI would interact with the Django framework (the web framework I am most comfortable with) and refreshed myself on how Django works. This took roughly 3 hours of my day.

I think our progress is right on track. My team and I should get into the habit of spacing out work throughout the week, which ensures that we clock in hours consistently and keeps progress moving along. We have met up roughly 2-3 times each week since school started, for around 2-3 hours each, so our project abstract/proposal was a bit more fleshed out than it probably needed to be, and we were off to a good start. However, this week definitely could have been more productive, which is why I say that we are just on track.

In the upcoming week (by Wednesday), I want to firmly decide on what web framework I am going to use for the web application. I am already leaning heavily towards Django, but I want to explore other options as well as different databases. Additionally, by the end of next week, I want to have set up the backend and frontend (i.e. create the project and begin the initial UI). Lastly, the parts for our system were picked up this week, and we want to start testing that the RPI can control the ARDUCAM to take pictures and try to send that data to the backend.