This week I first tested the ingredient detection accuracy on our own image dataset. Of the roughly 30 candidate items chosen earlier, about 12 were detected reliably by the API. I looked into the items that were not detected successfully: some likely failed because the individual pieces overlap, such as raw or cooked shrimp; others may require an ideal angle or shape for correct detection, such as pineapples and eggplants. In the end, we were able to settle on 12 items: beef, broccoli, strawberry, banana, Italian sausage, apple, tomato, onion, carrot, octopus, potato, and salmon. It is surprising that the API was able to detect beef and salmon, which are usually sold in small pieces or cuts.
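For reference, here is a minimal sketch of how this reliability check could be scripted. The names here are my own assumptions: `detect_labels` is a mock standing in for the actual vision API call, `test_images` is an assumed mapping from each candidate item to its test image paths, and the 0.8 cutoff for "reliable" is illustrative.

```python
from collections import defaultdict

RELIABILITY_THRESHOLD = 0.8  # assumed cutoff for "detected reliably"

def detect_labels(image_path):
    # Placeholder for the real API call; mocked so the sketch runs.
    # The actual version would send the image to the vision API and
    # return the set of label strings it detects.
    return {"beef"} if "beef" in image_path else set()

def reliability(test_images):
    # Fraction of test images per item in which the expected label appears.
    hits, totals = defaultdict(int), defaultdict(int)
    for item, paths in test_images.items():
        for path in paths:
            totals[item] += 1
            if item in detect_labels(path):
                hits[item] += 1
    return {item: hits[item] / totals[item] for item in totals}

def reliable_items(test_images):
    # Keep only the items whose detection rate clears the threshold.
    return [item for item, rate in reliability(test_images).items()
            if rate >= RELIABILITY_THRESHOLD]

test_images = {
    "beef": ["img/beef_01.jpg", "img/beef_02.jpg"],
    "pineapple": ["img/pineapple_01.jpg"],
}
print(reliable_items(test_images))  # -> ['beef']
```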
With the items decided, the next step was to start on the 30 recipes based on them. Each recipe needs tags added manually; the tags we currently use are vegan, non-dairy, dessert, dinner, and lunch. Also, as the instructor previously suggested, I will compile a list of the pantry items used in these recipes, since we assume users keep an ample supply of these on hand.
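As a rough illustration of how the tags and the pantry assumption fit together (the field names, pantry contents, and example recipe below are my own assumptions, not the project's actual schema): pantry staples are excluded when deciding which ingredients a user is missing.

```python
from dataclasses import dataclass, field

# Assumed pantry staples that users are expected to have on hand.
PANTRY_ITEMS = {"salt", "pepper", "olive oil", "sugar", "flour"}

@dataclass
class Recipe:
    name: str
    ingredients: set[str]                          # everything the recipe calls for
    tags: set[str] = field(default_factory=set)    # e.g. {"vegan", "dinner"}

    def missing_ingredients(self, detected: set[str]) -> set[str]:
        # Pantry items are assumed to be in stock, so only non-pantry
        # ingredients that were not detected count as missing.
        return self.ingredients - detected - PANTRY_ITEMS

# Hypothetical recipe built from the selected items, tagged for filtering.
stir_fry = Recipe(
    name="Beef and Broccoli Stir-Fry",
    ingredients={"beef", "broccoli", "onion", "olive oil", "salt"},
    tags={"non-dairy", "dinner"},
)
print(stir_fry.missing_ingredients({"beef", "broccoli"}))  # -> {'onion'}
```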
My progress is still a bit behind due to personal health issues. Thanks to my teammates, who took over the image recognition work, I was able to catch up with the plan on the image side. On the recommendation side, I plan to finish 10 recipes by this weekend so that we can integrate the entire system next week. Accordingly, next week's plan is the team effort of integrating the entire system.