Team’s Status Report 05/08/2021

This week our team finished setting up the system in the mini-fridge and tested the integrated system. The most significant risk that could jeopardize the success of the project is how to showcase the full functionality in the demo video. Since the live demo is only one part of the video, and our project is meant to fit into the user’s daily routine, we need to find a way to demonstrate the product convincingly in a few minutes. We are managing this risk by communicating closely in our team meetings and planning the specific steps and content we want in the video. We will also add annotations to the live demo portion so that each step is explained to the audience.

Since it is already the last week of class and the implementation of our project is finished, we are not making changes to the existing design, and the schedule stays the same: the remaining work is the report, the demo video, and the poster.

An image of the mini-fridge is attached.

 

 

Elena’s Status Report 05/08/2021

This week I worked on the final presentation and preparation for the demo video. To be specific, I worked on the slides that we will be using to showcase our project in the demo video.

The other thing I worked on is finalizing the recipe dataset for the recommendation part. For the last two weeks I was more focused on testing, so I had not finished setting up the 30 recipes that we planned. This week I added the last few recipes to the dataset, so we now have more variety in recipes and types of cuisine.

Over the weekend I am working with my teammates on filming and editing the video. We have already discussed which sections we want in the video, and we are each working on a section at the moment. We will also work on the posters together.

We are on schedule and pushing through to the end of the semester. Next week I will work on the final report with my teammates.

Team’s Status Report 05/01/2021

The most significant risk that could jeopardize the success of the project is integrating the system into the mini-fridge. For most of the semester we used a clear acrylic board as a stand-in for one shelf of the fridge, but we realized it would be better to set the system up inside the mini-fridge itself for a good demo result. Therefore, our team is currently focused on building the system into the mini-fridge. Since the camera is mounted separately to photograph the grid, we have to figure out how to do this inside the fridge. We are managing this risk by mounting/taping the camera to the wall of the fridge; we may need to adjust the angle and height, but we will try to fix the camera at one point that gives the best image results.

There are no changes made to the existing design of the system. Since we are coming to the end of the semester, we believe that sticking to our current design and implementation is the best idea.

There are no significant changes to the current schedule. We updated the demo and report timeline according to the course schedule.

 

Elena’s Status Report for 05/01/2021

This week I worked on tests for the ingredient recognition in the image recognition part, as well as tests for the recommendation system. For the recommendation system, I built the test queries, ran them and recorded the accuracy, and added error handlers to the functions. For ingredient recognition, we previously had two functions: one uses the “label detection” attribute of the API response, and the other uses the “object localization” attribute. We used the second one to get each ingredient’s location and compute the corresponding grid index. During the image tests, I noticed that the object localization response recognizes all ingredients with the correct count, but sometimes only labels an ingredient as “food”. To get more accurate ingredient names for our project, I am trying to combine the information from both response attributes to raise the accuracy; a rough sketch of this combination is shown below. I plan to finish this improvement by the weekend, in time for the presentation.
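As a rough illustration only (not our final code), the sketch below keeps the object localization results for counts and grid positions and falls back to the most confident label detection result when an object is only named “Food”. It assumes the google-cloud-vision 2.x Python client and a 3x3 shelf grid; the function name and the set of generic labels are made up for this example.

from google.cloud import vision

# Labels too generic to be useful as ingredient names (illustrative list).
GENERIC_LABELS = {'food', 'produce', 'fruit', 'vegetable'}

client = vision.ImageAnnotatorClient()

def detect_ingredients(image_path, grid_cols=3, grid_rows=3):
    with open(image_path, 'rb') as f:
        image = vision.Image(content=f.read())

    objects = client.object_localization(image=image).localized_object_annotations
    labels = client.label_detection(image=image).label_annotations

    # Most confident non-generic label, used when localization only says "Food".
    fallback = next((l.description for l in labels
                     if l.description.lower() not in GENERIC_LABELS), None)

    detections = []
    for obj in objects:
        # Center of the normalized bounding box -> grid cell index.
        xs = [v.x for v in obj.bounding_poly.normalized_vertices]
        ys = [v.y for v in obj.bounding_poly.normalized_vertices]
        col = min(int(sum(xs) / len(xs) * grid_cols), grid_cols - 1)
        row = min(int(sum(ys) / len(ys) * grid_rows), grid_rows - 1)

        name = obj.name
        if name.lower() in GENERIC_LABELS and fallback is not None:
            name = fallback
        detections.append((name, row * grid_cols + col))
    return detections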

Another part I worked on together with my teammates is the presentation slides. As the presenter, I updated our schedule chart and participated in creating the slides, including the parts beyond testing.

I think we are on track. This week we are integrating our system, and once the testing is complete we will essentially be finishing up the project. Next week I will finalize the recipe database and support my teammates with integration testing and general improvements.

Team’s Status Report for 04/24/2021

Currently, we are working on integrating the entire project as a team while refining each part/subsystem separately. The most significant risk that could jeopardize the success of the project is migrating the code from a PC to the Jetson Nano board. Also, since we are working separately and cannot meet in person for the integration work, we need ways to connect everything together.

In the past weeks we ran into problems importing some machine learning libraries on the Jetson Nano, and after talking with other teams and researching online, we decided to switch to the Google Speech API. In addition, we are setting up remote SSH access to the Jetson Nano so that we can work on the same codebase, and we are using GitHub to manage the workflow and keep track of progress. These measures let us manage the risks above.

In general, there are no changes to the design of the system, except for the possible change of speech API mentioned above. The way the speech API is used and the user interface will not be modified by this change; a minimal sketch of calling the new API is included below.
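Since this is not finalized on the board yet, the snippet below is only a minimal sketch of transcribing a short recorded voice command with the Google Cloud Speech-to-Text Python client (v2.x); the file name, sample rate, and encoding are assumptions.

from google.cloud import speech

client = speech.SpeechClient()

# Read a short recorded voice command (assumed 16 kHz, 16-bit mono WAV).
with open('command.wav', 'rb') as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code='en-US',
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)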

Our team is going according to the plan.

Elena’s Status Report for 04/24/2021

This week I continued adding more recipes to our recommendation system. As in previous weeks, I searched for recipes online and processed them by tagging them and cleaning the data into the designated format.

Another part of this week’s work was designing the test data set for the recommendation part. In general, I designed two ways to test whether the recommendation system works correctly. The first is a set of requests in the required format, which will be used to manually check that the recommended recipes are correct (they only use the ingredients detected and satisfy the special requirements). The second is a set of invalid requests, such as requests with missing ingredients or malformed fields, to make sure the system does not crash and handles these issues gracefully. A sketch of both kinds of test cases is given below.
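Roughly, the test data looks like the sketch below; the second valid request and the helper name recommend are placeholders for illustration, and the expected outputs of the valid requests are checked by hand against the recipe dataset.

# Sketch of the two kinds of test requests for the recommendation system.
valid_requests = [
    {'tag': ['non-dairy'], 'ingredient': ['apple', 'orange', 'tuna']},
    {'tag': ['dinner'], 'ingredient': ['beef', 'broccoli', 'onion']},
]

invalid_requests = [
    {'tag': ['dessert']},                     # missing 'ingredient' key
    {'tag': 'vegan', 'ingredient': 'apple'},  # malformed: strings instead of lists
    {},                                       # empty request
]

def run_tests(recommend):
    for req in valid_requests:
        print(req, '->', recommend(req))      # inspected by hand for correctness
    for req in invalid_requests:
        result = recommend(req)               # must not raise an exception
        assert isinstance(result, list)       # error handler still returns a list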

In the past weeks we tested the image recognition API with the dataset we built, but we did not record the accuracy; we only observed that the API works on our test set. Therefore, this week I also started rerunning the API on the dataset to get a numeric accuracy, roughly following the sketch below. In addition, we have a set of photos we took with real ingredients, which will also be used to test the API in different situations.
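As a sketch of the rerun (assuming the google-cloud-vision 2.x client; the file paths and test mapping below are placeholders): each test image is paired with the ingredient we expect the API to find, and the accuracy is the fraction of images where object localization returns that name.

from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Placeholder mapping from test image to the ingredient we expect to detect.
test_set = {
    'images/apple_01.jpg': 'apple',
    'images/broccoli_01.jpg': 'broccoli',
    # ... one entry per image in the dataset
}

correct = 0
for path, expected in test_set.items():
    with open(path, 'rb') as f:
        image = vision.Image(content=f.read())
    objects = client.object_localization(image=image).localized_object_annotations
    if any(expected in obj.name.lower() for obj in objects):
        correct += 1

print(f'Detection accuracy: {correct / len(test_set):.1%}')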

Currently I am on schedule; the recommendation part only needs more recipes added. For next week, we will be working on the final presentation, and as the presenter I will put most of my effort into it.

Elena’s Status Report for 04/10/2021

For this week I first tested the ingredient detection accuracy on our own image datasets. Of the roughly 30 candidate items chosen earlier, about 12 were detected reliably by the API. I did some research on the ingredients that were not detected successfully: some were probably missed because individual pieces overlap, like raw/cooked shrimp; others may require a perfect angle/shape for correct detection, like pineapples and eggplants. In general, we will be able to choose our 9 items out of the 12 reliably detected ones: beef, broccoli, strawberry, banana, Italian sausage, apple, tomato, onion, carrot, octopus, potato, and salmon. It is surprising that the API was able to detect beef and salmon, which are usually sold in small pieces/cuts.

As the next step after deciding on the items, I started working on the 30 recipes based on these selected items. For each recipe, I manually add tags; currently the tags we have include vegan, non-dairy, dessert, dinner, and lunch. Also, as suggested by the instructor previously, I will collect a list of the pantry items used in these recipes, since we assume users will have an ample supply of those. A sketch of what one tagged recipe entry might look like is given below.
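For illustration only: the field names, the tags on this particular recipe, and the pantry list below are assumptions rather than the final schema, using the jam recipe from an earlier report as the example.

# Illustrative recipe entry; field names, tags and pantry list are assumptions.
pantry_items = ['water', 'sugar', 'salt', 'lemon juice']   # assumed to always be on hand

recipe = {
    'name': 'Apple and orange jam',
    'tags': ['vegan', 'non-dairy', 'dessert'],
    'ingredients': ['apple', 'orange'],        # only the items the camera must detect
    'instructions': [
        'Cut 2 apples and 2 oranges into small chunks',
        'Boil 200ml of water in a pot',
        'Add apples, oranges and 50g of sugar into the pot, and stew with medium heat',
        'When the content becomes thick, turn off the heat and add 2 tablespoons of lemon juice',
    ],
}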

My progress is still a bit behind due to my personal health issues. Thanks to my teammates, who took over the image recognition work, I am catching up with the plan on the image side; on the recommendation side, I plan to finish 10 recipes by this weekend so that we can integrate the entire system next week. Therefore, the plan for next week is teamwork on integrating the entire system.

Elena’s Status Report for 04/03/2021

This week I planned to work on the connections between sub-systems and to figure out sending recipes through email. Unfortunately, I had a fever during the week, and due to the sudden illness I did not finish everything in my plan.

First, I implemented and tested the emailing functionality. I am using SMTP in Python to send emails through my Gmail account. My only concern is that both the email address and the password are required for the sender to send an email, which might cause security issues for the sender. Currently, I keep this information in config.ini and do not share it when pushing my code to the GitHub/public repository. A screenshot of a successful email sent through SMTP is attached, and a sketch of the sending code is given below.
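As a minimal sketch of this (the config.ini section and key names, recipient address, and message text here are placeholders, not the exact ones in our repository):

import configparser
import smtplib
from email.mime.text import MIMEText

# Credentials are read from config.ini, which is kept out of the public repo.
config = configparser.ConfigParser()
config.read('config.ini')
sender = config['gmail']['address']      # placeholder section/key names
password = config['gmail']['password']   # Gmail may require an app password

msg = MIMEText('1. Cut 2 apples and 2 oranges into small chunks\n2. ...')
msg['Subject'] = 'Your recipe'
msg['From'] = sender
msg['To'] = 'user@example.com'

with smtplib.SMTP_SSL('smtp.gmail.com', 465) as server:
    server.login(sender, password)
    server.send_message(msg)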

For the second part, regarding the connections between sub-systems, I did some research on projects/previous works that include speech and even conversation functionality. For example, https://developer.nvidia.com/conversational-ai helped me understand the general pipeline of conversational AI, and https://toptechboy.com/ai-on-the-jetson-nano-lesson-60-make-your-nano-talk-with-text-to-speech/ is a video lesson that works well for self-practice.

Because of the health issues, although I planned to run my own code on the Jetson Nano to test a general pipeline, I did not finish it this week, which puts me a bit behind schedule. To catch up, I plan to make good use of the time next week and the week after (spring carnival). I will set a stricter schedule while taking care of my health so that this does not happen again. For next week’s deliverables, I plan to add more functionality to my recommendation part (timestamp) and set up the data processing for the image recognition part.

Team Status Report for 03/27/2021

We received advice from the instructors that we need to test how well the image recognition API performs at recognizing our ingredients. We agree this is important to consider because this part is critical for the entire pipeline to work. We are currently working on the testing, and if the performance is not ideal, we will consider training a pre-trained model further on our dataset or implementing a CNN model focused on the dataset we are using, in order to manage the risks in this part.

Another suggestion we received is that we should also send a text version of the recipe to the user through email so that users can easily check the recipe if they want to. We bought wifi adapters and received them yesterday, and we are working on the email functionality now. We are still trying to figure out how to get the wifi adapter to work on the Jetson Nano.

The next step of our project is to connect every part together. We will research the pipeline and start implementing the connections between sub-systems in the coming week, and also continue exploring our own parts individually. There are no changes to our planned schedule.

Elena’s Status Report for 03/27/2021

This week I focused on my work for the recommendation part as well as supporting the tests for image recognition. I collected a set of images of the candidate items posted in the previous status report, and my teammate will use this image data set to test how well Google Vision performs on these candidate items. Some example images are posted below.

For the recommendation part, I built two functions. The first accepts an input with two keys, tags and ingredients, and outputs a list of recipe names for the user to select from. A sample input and output are as follows:

Input:

test_request = {'tag': ['non-dairy'], 'ingredient': ['apple', 'orange', 'tuna']}

Output:

['Apple and orange jam']

If no recipe can be matched to the request, then it will return the following output:

['No results found']

All samples above are the real inputs and outputs of the system.

Also, after the user selects the recipe they want to use, the next function accepts a recipe index as input and returns a list of instructions. A sample output is the following:

['Cut 2 apples and 2 oranges into small chunks', 'boil 200ml of water in a pot',
'add apples, oranges and 50g of sugar into the pot, and stew with medium heat', 'when the content becomes thick, turn off the heat and add 2 table spoons of lemon juice']
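A minimal sketch of the two functions described above, assuming each recipe is stored as a dict with 'name', 'tags', 'ingredients', and 'instructions' keys (the function names here are illustrative, not the exact ones in our code):

def match_recipes(request, recipes):
    # Keep recipes whose special requirements are all requested and whose
    # ingredients are all among the detected ones.
    wanted_tags = set(request.get('tag', []))
    available = set(request.get('ingredient', []))
    names = [r['name'] for r in recipes
             if wanted_tags.issubset(r['tags'])
             and set(r['ingredients']).issubset(available)]
    return names if names else ['No results found']

def get_instructions(index, recipes):
    # Return the step-by-step instructions for the selected recipe.
    return recipes[index]['instructions']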

The progress is on schedule. Next week I will use the performance results from Google Vision to decide on the 9 items and 30 recipes, and I will generate more test cases for the recipe recommendation. Also, I am currently working on the functionality that emails recipes to the user. There are some issues with authentication, but I will try to figure them out next week.