Nanxi’s Status Report for 04/24/2021

This week I finished building and testing the LED grid, and it is working as expected. I might make some final adjustments to the sizing of the grids before taping them down, but it is mostly done. The power adapter we got for the LEDs is not very stable, so I prefer not to light the strip up for too long at a time, so that it doesn't get burned like last time. This also fits the usage model better, since users will pick up their food from the grids within a minute.

I have been testing the part of the image recognition pipeline that handles the placement of each object. This part of the algorithm decides which LED grid each item belongs to. I found that the accuracy of this step when using the Google Vision API is highly dependent on the placement of the camera and on the lighting, so I tested several mounting positions to find the one that gives the best result; based on those results, I decided to mount the camera 18 inches above the surface. We will use this data as part of the testing plan as well, and more accuracy testing for this step will be added to the plan later.
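To illustrate the idea, here is a minimal sketch of how the placement step could work, assuming the Google Vision API's object localization feature and a hypothetical 3x3 grid (our real grid dimensions may differ):

```python
# Minimal sketch: map Vision API object locations to LED grid cells.
# Grid dimensions and credential setup are assumptions, not our final config.
from google.cloud import vision

GRID_ROWS, GRID_COLS = 3, 3  # hypothetical grid dimensions

def locate_items(image_path):
    """Return {item_name: (row, col)} for each object the API detects."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    objects = client.object_localization(image=image).localized_object_annotations

    placements = {}
    for obj in objects:
        # Bounding-box vertices are normalized to [0, 1].
        xs = [v.x for v in obj.bounding_poly.normalized_vertices]
        ys = [v.y for v in obj.bounding_poly.normalized_vertices]
        cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)  # box center
        row = min(int(cy * GRID_ROWS), GRID_ROWS - 1)
        col = min(int(cx * GRID_COLS), GRID_COLS - 1)
        placements[obj.name] = (row, col)
    return placements
```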

Another observation is that lighting affects the image recognition accuracy. We will put a light source (a floor lamp) right above the objects to imitate the lighting inside the fridge. With consistent lighting, the image recognition accuracy should be stable, which will also make testing easier.

I plan to do the integration test next week, after which we should have our final product complete.

Yang’s Status Report for 04/24/2021

This week, I worked on collecting data for our presentation next week, focusing primarily on how our system works and interacts with users. To do this, we are running surveys to gather audio data and testing how our system handles that input. This will be very important for the testing and validation portion of our presentation. Overall, we are on track and will only need to make some minor changes to our code and validation to have the presentation ready.

Team’s Status Report for 04/24/2021

Currently, we are working on integrating the entire project as a team while refining each part/subsystem separately. The most significant risk that could jeopardize the success of the project is the migration of code from a PC to the Jetson Nano board. Also, since we are working separately and cannot meet in person for the integration work, we need reliable ways to connect everything together.

In the past weeks, we experienced problems importing some machine learning libraries on the Jetson Nano; after talking with other teams and researching online, we decided to switch to the Google Speech API. In addition, we are controlling the Jetson Nano remotely through SSH so that we can work on the same codebase, and we use GitHub to manage the workflow and keep track of progress. These measures allow us to manage the risks above.
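For reference, a transcription call with the Google Speech API might look like the following sketch; the file name, encoding, and sample rate are placeholders rather than our actual settings:

```python
# Sketch of transcribing a short audio clip with Google Speech-to-Text;
# the file name and audio settings are placeholders.
from google.cloud import speech

def transcribe(path="command.wav"):
    client = speech.SpeechClient()
    with open(path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    # Join the best hypothesis from each result segment.
    return " ".join(r.alternatives[0].transcript for r in response.results)
```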

In general, there are no changes to the design of the system, except for the probable change of speech API mentioned above. The way the speech API is used and the user interface will not be modified by this change.

Our team is proceeding according to plan.

Elena’s Status Report for 04/24/2021

This week I continued adding more recipes to our recommendation system. As in previous weeks, I searched for recipes online and processed them by tagging and cleaning the data into the designated format.

Another part of this week’s work was designing the test data set for the recommendation part. In general, I designed two ways to test whether the recommendation system works correctly. The first is a set of requests in the required format, which will be used to manually check that the recommended recipes are correct (they only use the detected ingredients and satisfy the special requirements). The second is a set of invalid requests, such as ones with missing ingredients or malformed fields, used to make sure the system does not crash and handles the issues gracefully.
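Here is a rough sketch of what both test types could look like; recommend() and the request/recipe fields are hypothetical stand-ins for our actual interface, and the first test ignores pantry items for simplicity:

```python
# Sketch of the two test types; recommend() and the data fields are
# hypothetical stand-ins for our actual interface.
def test_valid_request(recommend):
    request = {"ingredients": ["tomato", "onion", "beef"],
               "requirements": ["non-dairy"]}
    for recipe in recommend(request):
        # Recipes must only use detected ingredients (pantry items
        # ignored here) and satisfy every special requirement.
        assert set(recipe["ingredients"]) <= set(request["ingredients"])
        assert all(tag in recipe["tags"] for tag in request["requirements"])

def test_invalid_requests(recommend):
    for bad in [{}, {"ingredients": []}, {"ingredinets": ["apple"]}]:
        try:
            recommend(bad)  # must not crash; any handled result is fine
        except ValueError:
            pass  # raising a clear, documented error is also acceptable
```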

In past weeks, we tested the image recognition API with the dataset we built; however, we did not record the accuracy and only observed that the API works on our test set. Therefore, this week I also started rerunning the API on the dataset to get a numeric accuracy. In addition, we have a set of photos we took with real ingredients, which will also be used to test the API in different situations.
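As a sketch of the accuracy rerun, something like the following could walk a labeled manifest and count top-label hits; the labels.csv format and the use of label detection here are assumptions for illustration:

```python
# Sketch: rerun label detection over a labeled image set and compute
# accuracy. The labels.csv format (image_path,ingredient) is assumed.
import csv
from google.cloud import vision

def dataset_accuracy(manifest="labels.csv"):
    client = vision.ImageAnnotatorClient()
    correct = total = 0
    with open(manifest) as f:
        for path, expected in csv.reader(f):
            with open(path, "rb") as img:
                image = vision.Image(content=img.read())
            labels = client.label_detection(image=image).label_annotations
            detected = {l.description.lower() for l in labels}
            correct += expected.lower() in detected  # hit if label present
            total += 1
    return correct / total if total else 0.0
```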

Currently, I am on schedule; the recommendation part only needs more recipes added. Next week we will be working on the final presentation, and as the presenter I will put most of my effort into it.

Elena’s Status Report for 04/10/2021

This week I first tested the ingredient detection accuracy on our own image datasets. Of the roughly 30 candidate items chosen earlier, about 12 were detected reliably by the API. I did some research on the ingredients that were not detected successfully: some were probably missed due to overlapping individual pieces, like raw/cooked shrimp; others may require a specific angle/shape for correct detection, like pineapples and eggplants. In general, we will choose 9 out of these 12 reliably detected items: beef, broccoli, strawberry, banana, Italian sausage, apple, tomato, onion, carrot, octopus, potato, and salmon. It is surprising that the API was able to detect beef and salmon, which are usually sold in small pieces/cuts.

As the next step after deciding on the items, I started working on the 30 recipes based on these selected items, manually adding tags to each recipe. Currently, the tags include vegan, non-dairy, dessert, dinner, and lunch. Also, as suggested by the instructor previously, I will collect a list of the pantry items used in these recipes, since we assume users will have a plentiful supply of these items.
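For illustration, a tagged recipe entry might look like the sketch below; the field names are examples rather than our final schema:

```python
# Illustrative recipe entry in our tagged format; field names are
# examples, not the final schema.
recipe = {
    "name": "Beef and Broccoli Stir-Fry",
    "ingredients": ["beef", "broccoli", "onion"],   # must be detectable items
    "pantry_items": ["soy sauce", "garlic", "oil"],  # assumed in supply
    "tags": ["dinner", "non-dairy"],
    "url": "https://example.com/recipe",  # placeholder source link
}
```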

My progress is still a bit behind due to my personal health issues. Thanks to my teammates, who helped by taking over the image recognition work, I have been able to catch up with the plan on the image side; on the recommendation side, I plan to finish 10 recipes by this weekend so that we can integrate the entire system next week. Therefore, next week's plan is the team effort of integrating the entire system.

Nanxi’s Status Report for 04/10/2021

This week I finished the program for the LEDs: given the input from the image recognition software, we can now control the LEDs and light up the grids we need. The code has been pushed to GitHub. Unfortunately, I burned our LED strip during the testing process and need to order new ones. The replacements will not arrive before the demo, so I prepared a demo with single LEDs. (For simple on/off control, the LED strip can be controlled in the same way as single LEDs.)
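As a simplified sketch of the single-LED demo setup, each grid cell could be driven by one GPIO pin through the Jetson.GPIO library; the pin assignments below are placeholders:

```python
# Simplified sketch of the demo setup: one GPIO pin per grid cell,
# driven like a single LED. Pin numbers are placeholders.
import Jetson.GPIO as GPIO

GRID_PINS = {(0, 0): 7, (0, 1): 11, (1, 0): 13, (1, 1): 15}

def setup():
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(list(GRID_PINS.values()), GPIO.OUT, initial=GPIO.LOW)

def light_grids(cells):
    """Light only the grid cells reported by the image recognition step."""
    for cell, pin in GRID_PINS.items():
        GPIO.output(pin, GPIO.HIGH if cell in cells else GPIO.LOW)
```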

I also took a look at the image recognition portion. Given a picture containing several food items, the software can recognize what the items are and where they are (the API is very precise). This information lets us locate which grids the items are in and determine which ones to light up.

Team Status Report for 04/10/2021

As a team, we looked into integrating the speech recognition with the image recognition so we can do a basic request and response for the demo. We ran into some issues with the LEDs, since we had to get replacements, but things are currently working out well, and we expect to demo a fully functional pipeline, albeit without the depth of recipes and ingredients we expect for the final demo. This is a good midpoint for us: with the general framework done, we can simply add in additional functionality and data.

The biggest risk for us seems to be porting issues: the microphone on the Jetson is much weaker than the desktop microphone we used before, which makes it harder to trigger commands. However, tuning some parameters fixed that issue. We are currently looking at some other issues with images as well, but we do not think they will be a bottleneck.
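As an illustration of the kind of tuning involved, if the trigger logic were built on the speech_recognition package (an assumption, not necessarily our exact stack), lowering the energy threshold for the quieter microphone might look like this:

```python
# Hypothetical example of tuning trigger sensitivity for a quieter mic,
# assuming the speech_recognition package; values are placeholders.
import speech_recognition as sr

r = sr.Recognizer()
r.dynamic_energy_threshold = False  # keep the threshold we set below

with sr.Microphone() as source:
    r.adjust_for_ambient_noise(source, duration=1)
    r.energy_threshold *= 0.5  # placeholder scaling for the weaker mic
    audio = r.listen(source, timeout=5)
```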

There were generally no changes to our deadlines and we expect to finish on time.

Yang’s Status Report for 04/10/2021

This week I focused on putting the speech recognition onto the new Jetson Nano, as well as looking at integrating the image recognition into the project for the demo. I also took a deeper look into the image recognition code to make sure we can integrate it without issues.

Nanxi’s Status Report for 04/03/2021

I ran into some unexpected problems this week. Our Jetson Nano stopped working, and we ended up re-flashing it. The second problem is that the available libraries compatible with the LED strips are mostly built on the RPi GPIO library, which makes porting them to the Jetson Nano very hard. I ended up deciding to write our own LED library for the Jetson Nano GPIO. Combined with the issue with the Jetson Nano itself, this means we need more time for the LED grid. This week's progress is a robust testing scheme for GPIO output, which will make it easier to debug the LED algorithm later; a sketch of the test appears below the photo. If the LED strip does not work out, we will fall back to building the grid ourselves with individual LEDs.

(Photo: single LED for testing)
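A minimal version of the single-LED output test could look like the sketch below; the pin number and timing are placeholders:

```python
# Minimal sketch of a single-LED blink test on a Jetson Nano GPIO pin;
# the pin number and timing are placeholders.
import time
import Jetson.GPIO as GPIO

TEST_PIN = 7  # placeholder BOARD-numbered pin

GPIO.setmode(GPIO.BOARD)
GPIO.setup(TEST_PIN, GPIO.OUT, initial=GPIO.LOW)
try:
    for _ in range(10):  # visible on/off cycles confirm wiring and output
        GPIO.output(TEST_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(TEST_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```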

Yang’s Status Report for 04/03/2021

This week I wrapped up the speech recognition: triggering from speech and determining words from the audio. This means we can now take in a request from the user and act on it. Next week, I will look into text-to-speech, which should be straightforward, as well as how to send emails to the user to inform them of the recipe.
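For the email piece, Python's standard smtplib would likely suffice; in the sketch below, the SMTP server, credentials, and addresses are all placeholders:

```python
# Sketch of emailing a recipe with Python's standard library; the SMTP
# server, credentials, and addresses are all placeholders.
import smtplib
from email.message import EmailMessage

def send_recipe(recipe_text, to_addr="user@example.com"):
    msg = EmailMessage()
    msg["Subject"] = "Your recipe from the smart fridge"
    msg["From"] = "fridge@example.com"
    msg["To"] = to_addr
    msg.set_content(recipe_text)
    with smtplib.SMTP_SSL("smtp.example.com") as server:
        server.login("fridge@example.com", "app-password")  # placeholder
        server.send_message(msg)
```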