Oliver’s Status Report – 26 Mar 2022

Indeed, as I mentioned in the team status report, we are now entering the phase of the project where the fun truly starts and the product begins to take shape. This week, I kicked off the integration process between the front-end and back-end, ironing out kinks such as CORS header mismatches and API design issues, and paving the way for full integration ahead. The front-end is now able to display real expiration dates fetched from the API.

The back-end API had quite a bit of work done to it as well. Logic has been added for when items are returned to the fridge, ensuring that the item count and expiry dates remain correct even when the same item is removed and placed back. This is done by storing the state of the fridge: new items are only added to the tally when it is in “add” mode, i.e. when the user is loading new groceries from the store. All other additions, in “track” mode, are considered replacements of previous removals.
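As a rough sketch of this add/track logic (names like `Fridge` and `set_mode`, and the millisecond-based shelf life, are illustrative assumptions, not our actual API):

```python
# Illustrative sketch of the back-end's "add" vs "track" mode logic.
# Class and method names are hypothetical; the real API may differ.
class Fridge:
    def __init__(self, shelf_life_days=7):
        self.mode = "track"      # "add" while loading new groceries
        self.shelf_life_days = shelf_life_days
        self.items = []          # expiry timestamps (ms) of items inside
        self.removed = []        # expiries of items currently taken out

    def set_mode(self, mode):
        self.mode = mode

    def item_placed(self, now_ms):
        if self.mode == "add":
            # New grocery: tally a fresh expiry date.
            self.items.append(now_ms + self.shelf_life_days * 86_400_000)
        elif self.removed:
            # "track" mode: treat as a replacement of a prior removal,
            # restoring the original expiry instead of creating a new one.
            self.items.append(self.removed.pop())

    def item_removed(self):
        if self.items:
            self.removed.append(self.items.pop())
```

Replacing an item this way keeps both the count and the original expiry date intact, which is the behaviour described above.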

We’re still healthy in terms of schedule, but with just over a month left, time could get tight if we are not careful. Going forward, we will be integrating the back-end with the Jetson and CV system, and I will continue working with Alex to bring other aspects of the front-end to fruition, such as editing previous transactions.

Samuel’s Status Report – 26 Mar 22

During last week’s status report, I mentioned how we needed to find a dataset which exposed the network to a wider variety of fruit.

This week was a relatively productive one: I managed to train the network on the “silvertray” dataset (https://www.kaggle.com/datasets/chrisfilo/fruit-recognition), which produced relatively robust results on test data the network had never seen before (a green label indicates accurate detection; here we had 100% accuracy on test data).

Of course, the test data also involved the same silver trays that the algorithm trained on, so a high accuracy is expected.

I then moved on to performing detection on real-world data, in our C++ application with our webcam, and these are the results!

As visible in the images above, the NN is able to detect the fruits accurately in a real-world situation (including a noisy, non-white background WITHOUT segmentation applied). That being said, there are some inaccuracies/misdetections, such as with the orange above, despite the frames being very similar. I describe the needed improvements below.

With this, we are currently on track towards a working prototype, although we could probably train the network to handle more classes with self-collected or web-procured data.

Next week, we will begin integration of the various components together, and I will work on several improvements to the current CV algorithm/setup:

  1. Include a “change detection” algorithm that will detect when significant changes appear; this will allow us to tell when a fruit needs to be scanned.
  2. Normalize the image before processing; this will help reduce issues with random lighting changes, but might require that the network be retrained
  3. Build the actual rig with a white background and test the algorithm on that
  4. If necessary, change to using a silver tray or silver-colored background similar to the network’s training set, and/or collect our own datasets.
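A minimal sketch of the change-detection idea in item 1 (the threshold value and the use of a mean absolute frame difference are assumptions; the real implementation may differ):

```python
import numpy as np

# Hypothetical change detector: flag a "significant change" when the
# mean absolute difference between consecutive grayscale frames
# exceeds a threshold. The threshold of 10.0 is an illustrative guess.
def significant_change(prev, curr, threshold=10.0):
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    return float(diff.mean()) > threshold
```

When this fires, the pipeline would know a fruit has likely entered or left the frame and trigger a scan.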

Team Status Report – 26 Mar 22

This week was relatively productive; we got our individual parts working, and are quickly moving towards integrating our various components into a working prototype:

    • Samuel: Got a model working that allows our C++ program to detect and classify fruits relatively accurately under real-world conditions (real fruit, webcam, etc.)
       
    • Alex: Worked with Oliver on integrating the front end and back end during the week.
    • Oliver: The real fun has finally begun – we’re now watching our product come to life. I began the integration process between the front-end and the back-end, ironing out the kinks and paving the way for full integration ahead. I also continued work on the back-end API, adding logic for when items are returned to the fridge. This ensures that the item count and expiry dates remain correct, even when the same item is removed and placed back into the fridge.

We are currently on track with our timeline, but need to speed up on the integration work, as we suspect this will be the part that causes the most problems. Next week, besides working on integration, we will continue to iron out issues in the individual components; in particular:

  • Samuel: Will work on background-change detection to see when fruits are coming in; attempt various preprocessing techniques to make network more robust.
  • Alex: Will finish integrating the front end with the back-end API
  • Oliver: Another key integration task coming up is ensuring that the Jetson and CV system are integrated with the API. I will also continue working with Alex to bring other aspects of the front-end to fruition, such as editing previous transactions.

Alex’s Status Report – 26 Mar 22

This week, we did an ethics study on our product and learned about how to make more ethical products.

I also worked on integrating the front end with the back-end API. I’m a little behind schedule on this because the API wasn’t available earlier, so I will need to work through it quickly. Unfortunately, the API is still incomplete, so I will do what I can for now. This week, we got the first endpoint up, which tells us when food is going to expire and adds it to the calendar.

Team Status Report – 19 Mar 2022

We mostly worked individually on our own responsibilities for the project this week:

  • Samuel: Fixed issues with neural network training, completed training on Fruits360 dataset with ResNet50 architecture. Currently working on finding better datasets to train on.
  • Oliver: Brought the API online on a live server connected to the Internet, and continued to implement the API endpoints needed as part of the plan. We now have enough API endpoints to begin the integration process meaningfully.
  • Alex: Worked with Samuel on researching new datasets and potential fixes for the dataset, as well as running the neural network training and fixing server issues.


For next week, we will continue working on our individual parts, but will begin integration through our APIs, especially with the front end.

  • Samuel: Will work on finding (or creating) new datasets and training the network on them. Will also work on background segmentation (as needed for the new datasets or a white background).
  • Oliver: Will work on integrating the back-end with the front-end created by Alex, so that Alex can be unblocked on that front and continue work on the front-end. Will also continue with the remaining back-end API endpoints.
  • Alex: Working with Oliver on integrating the front-end and back-end

Oliver’s Status Report – 19 Mar 2022

I did not make as much progress this week as I had planned due to an illness from Wednesday (Mar 16) till Saturday (Mar 19). Despite that, I managed to get some work done and make the progress we needed this week to keep us from falling behind.

Specifically, I brought the back-end API online on a server connected to the Internet, allowing the integration process with the front-end and the Jetson to begin. For example, calls to the API can now be made as follows:

$ curl -u "<secret key redacted>":"" -d "" https://fresheyes.oliverli.dev/item/2/
{"id":2,"name":"Apple","shelfLife":7,"unit":"piece","itemExpiry":[{"expiry":1648050152638,"quantity":5},{"expiry":1648351218448,"quantity":2}]}

This fetches all details for the item with ID 2, which in our example here is apples. This data, including the expiration dates, is returned in JSON format, which can be easily parsed by a JavaScript front-end and presented to the user visually.
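For instance, the response body shown above can be parsed directly (Python is used here for brevity; the actual front-end does this in JavaScript):

```python
import json

# The exact JSON body returned by the /item/2/ call above.
response_body = '{"id":2,"name":"Apple","shelfLife":7,"unit":"piece","itemExpiry":[{"expiry":1648050152638,"quantity":5},{"expiry":1648351218448,"quantity":2}]}'

item = json.loads(response_body)
total = sum(batch["quantity"] for batch in item["itemExpiry"])
soonest_ms = min(batch["expiry"] for batch in item["itemExpiry"])
print(item["name"], total, soonest_ms)  # Apple 7 1648050152638
```

The front-end can use exactly this kind of aggregation to show the total count and the soonest-expiring batch per item.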

I also implemented enough of the remaining API endpoints to enable us to begin the integration process meaningfully. Specifically, there are now API endpoints analogous to CRUD – create, read, update, and delete, the four essential operations – on the various resources. Although the back-end is still incomplete, this means that my focus from now on is the key application logic instead of lengthy boilerplate, which should progress substantially faster. I will also be working on the front-end integration together with Alex, so as to speedily unblock his work and enable him to continue with the front-end.

Besides the code, I also worked on the ethics assignment this week. Thinking about the ethical implications of the project was a new and interesting take, since I did not initially think this project could raise any ethical concerns at all. However, thinking critically, there were indeed aspects of personal data collection that can become problematic, making the ethics assignment an enlightening endeavor all things considered.

Alex’s Status Report – 19 Mar 22

This week, I paused work on the front end while waiting for the back end to be finished and deployed. In the meantime, I was able to start working a little on recipe recommendation, for which I think I will simply compile a few links to different recipes based on the fruits we end up choosing. I also worked on the ethics assignment.

I also spent time training on the old neural network, until we realized that the dataset we were using was not well-suited to our needs.

Lastly, I spent some time dealing with server instability issues, and believe I have narrowed a long-term problem down to SSD failure. I ordered a new SSD from Amazon, which should arrive next week, so we can get the front end back up and continue training the neural network.

Although I fell behind on a lot of the tasks I needed to do for front-end, I feel like the end goal is still within reach and I am overcoming necessary hurdles to get there.

Samuel’s Status Report – 19 Mar 22

This week, I focused heavily on getting the neural network to work properly. At the beginning of the week, I successfully trained the neural network on the new ResNet18 architecture (as opposed to the old one, which did not work). After I realized that it did not perform as well as expected on real data, I swapped to the more advanced ResNet50 architecture, but that did not seem to help either.

It was then that I began to suspect something else was wrong besides the network itself, because the networks kept reporting 90+% validation accuracy, yet performed poorly whenever I tested them myself, even on training images. This hinted at a problem with my testing code/script. Eventually, I realized that during the training process we were passing in normalized images, so the network had learned on normalized inputs; once I changed my test/evaluation script to feed normalized images into the network, everything worked very well!
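The fix boils down to applying the exact same normalization at evaluation time as at training time (the mean/std values below are placeholders, not our actual dataset statistics):

```python
import numpy as np

# Placeholder statistics; the real values come from the training dataset.
MEAN, STD = 127.5, 64.0

def normalize(img):
    """Apply the same transform the network saw during training."""
    return (img.astype(np.float32) - MEAN) / STD

raw = np.array([[0, 128, 255]], dtype=np.uint8)
train_input = normalize(raw)  # what the network learned on
eval_input = normalize(raw)   # evaluation must apply the same transform
```

Feeding `raw` directly to the network at test time, while it was trained on `normalize(raw)`, is exactly the mismatch that made accuracy collapse.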

However, as I began testing the network on various images, we realized that the network was not very robust on external data:

After scrutinizing the dataset, we realized that it was not good enough, with some major flaws that made the network susceptible to overfitting. Firstly, each category was a 360-degree shot of a single fruit, so even though there were many images per fruit, the network was fed only one example from that fruit category, making it hard for the network to generalize based on colour, shape, etc.

To resolve this problem, I would need to search for more datasets, parse them, and train our network on them. This will be my focus for next week. Currently, I have found several datasets; however, they each have their own issues. The most promising one I have found so far is very similar to our use-case, with images of fruits taken from a top-down view, but has a reflective silver tray background which is very hard to segment away. Some pictures also have groups of fruit:

I will first try training the network on center-cropped and resized images, and if that does not work, I will try algorithms like Otsu thresholding on the saturation channel, or GrabCut, to segment away the background.
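As a sketch of the Otsu option (written from scratch here for clarity; in practice OpenCV's built-in Otsu thresholding would be applied to the HSV saturation channel):

```python
import numpy as np

# From-scratch Otsu: pick the threshold that maximizes the
# between-class variance of the channel's histogram.
def otsu_threshold(channel):
    hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    total = channel.size
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum()          # weight of the "background" class
        w1 = total - w0              # weight of the "foreground" class
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * hist[:t]).sum() / w0        # class means
        m1 = (np.arange(t, 256) * hist[t:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

The idea is that the low-saturation tray background and the highly saturated fruit should fall into two separable histogram modes, letting us mask out the background before training.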

Team Status Report – 5 Mar 2022

This week, we primarily focused on our design review report, which was more work than expected. Thankfully, the deadline was extended, and most of the content had already been thought through or covered in the presentation; we just needed to spend time writing it out. In particular, we decided to make our block diagram a lot more detailed:

Old Block Diagram

Our old block diagram used in the slides was primarily meant as a summary for visual purposes, and therefore lacked the detail needed for the report.

New Block Diagram

Our new block diagram is a lot more detailed, with specifics regarding algorithms, APIs and data transfer; however, this would have been too confusing for a presentation.


Most of our time this week was spent on the design report, and not much was done on the implementation side. However, we are still quite comfortably ahead of schedule since we began implementation early. With regards to the design report, we split up the roles equally, with each team member taking care of the architecture and implementation components related to their specialization:

  • Samuel:
    • Architecture, Implementation, Testing (CV + Attachment System)
    • Introduction, Use-Case
    • Trade studies (CV-related)
    • Related work
  • Alex:
    • Architecture, Implementation (Front-end/UI)
    • Use-Case, Design requirements
    • Trade Studies (misc)
    • Risk-mitigation
  • Oliver:
    • Architecture, Implementation, Testing (Databases/Back-End, APIs)
    • Project management (Schedule, Responsibilities, Materials)
    • Risk-mitigation
    • Summary

Samuel’s Status Report – 5 Mar 2022

This week, we spent most of our time writing up our design review report. Although a lot of the main content had been covered in our design review presentation, the devil was in the details for this report. In particular, we needed to make our block diagrams a lot more detailed since the ones we used in the slides were merely summaries.

We were very thankful that the deadline for the report got extended as that reduced the amount of stress we had, and allowed us to write a more polished report.

On the implementation side, I am slightly “behind schedule” in the sense that I was not able to get as much work done on the new neural network implementation as I had hoped to, because I was focusing on the report instead. However, we are still ahead of schedule since we already have an implementation going.

Next week, I will focus on implementing and training the ResNet18 network, and then testing the accuracy on a self-collected dataset of various fruits.