Team Status Report for February 10th, 2024

As a team, we had a productive week: we completed our presentation, and everyone stayed on schedule with their planned tasks. Here is an image of a nutritional label algorithm Steven tested out:

It reads macronutrient values from a .png image that we found online. We are aware of some issues with this algorithm: it does not calculate calories, and the underlying model is relatively outdated. We expect challenges in fine-tuning the code and/or writing our own nutritional label reading algorithm.
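
For illustration, a minimal version of this kind of label read might look like the sketch below. It assumes pytesseract and Pillow are installed and Tesseract is on the PATH; the field names, regular expression, and file name are our own placeholders, not the extractor's actual logic.

# Minimal sketch of OCR-based macronutrient extraction from a label image.
# The field names and regex are illustrative placeholders, not the
# off-nutrition-table-extractor's real parsing logic.
import re
import pytesseract
from PIL import Image

def read_macros(path):
    text = pytesseract.image_to_string(Image.open(path))
    macros = {}
    for field in ("protein", "carbohydrate", "fat"):
        # Look for lines like "Protein 5g" in the raw OCR output.
        match = re.search(rf"{field}\D*(\d+(?:\.\d+)?)\s*g", text, re.IGNORECASE)
        if match:
            macros[field] = float(match.group(1))
    return macros

print(read_macros("label.png"))  # placeholder file name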

Furthermore, Surya was able to work with a potential camera that we will integrate into our design. He also worked alongside Steven to finalize the list of parts from inventory and from the purchase list, including Arduinos suited to interfacing with scales over RS-232 for the next stage of the design. While the group does not plan to work on that stage in the next 2-3 weeks, prioritizing energy and resources on a robust recognition system in the meantime, all the parts are interconnected and have to work with one another, so an important goal in a complex project like this is to design functionally independent yet cohesive components.
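
We have not chosen a specific scale yet, so nothing here is final, but a serial read of the kind RS-232 involves might look roughly like the sketch below (assuming pyserial; the port name, baud rate, and line format are placeholders to be replaced by the real scale's protocol):

# Hypothetical sketch of reading weight data from an RS-232 scale via pyserial.
# The port name, baud rate, and line format are placeholders; the actual
# scale's documentation will determine the real parsing logic.
import serial  # pyserial

with serial.Serial("/dev/tty.usbserial-0001", baudrate=9600, timeout=1) as port:
    for _ in range(10):
        line = port.readline().decode("ascii", errors="ignore").strip()
        if line:
            print("raw scale reading:", line)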

 

In addition to that, Grace was able to set up the Django framework for the front end by initializing all the project files. She plans to have a front-end implementation next week to represent the user interface that we plan to connect to the database. The website portion of our project is on pace with our anticipated schedule.
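
As a minimal sketch of what the initialized front end might contain (the app name "frontend", the view name, and the template path are assumptions, not Grace's final layout):

# Minimal sketch of a Django view and route for the front end.
# App and template names are placeholders for the eventual layout.

# frontend/views.py
from django.shortcuts import render

def index(request):
    # Render the landing page; the database connection will be added later.
    return render(request, "frontend/index.html")

# frontend/urls.py
from django.urls import path
from . import views

urlpatterns = [
    path("", views.index, name="index"),
]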

 

Through our work this week, we explored our options in more depth and researched previous projects. We expect risk in having the Arduino handle all the OCR and computational work; our backup plan is to default to our computer for the primary workload. The equipment we have and plan to acquire is compatible with both a computer and an Arduino.

 

We hope to make significant progress on the physical product (a wooden box with LED lights to illuminate an object). Likewise, we seek to make progress on our individual assignments, which involve website design, algorithm development and integration, and interfacing the hardware with the software.

 

Lastly, thank you to all the students, TAs, and professors for providing commentary on our presentation and overall project. We received great feedback to better guide our design. We are now more aware of potential problems and have thought through alternatives in the event of issues. We recognize the value of this course in our personal and professional development as engineers who can both solve problems and communicate their solutions effectively.



Steven Zeng’s Status Report for February 10th, 2024

Regarding my progress, I looked through online libraries for OCR (Tesseract) and nutritional label reading algorithms, focusing on their important functions and how to integrate them into our project. The GitHub repositories I reviewed are: https://github.com/openfoodfacts/off-nutrition-table-extractor/blob/master/nutrition_extractor/symspell.py and https://github.com/tesseract-ocr/tesseract. The purpose of this is to build a better understanding of how these libraries work and where I can fine-tune them for our specific project. I found that the nutrition label extractor seems to only take .png images, and I changed the code to support newer versions of numpy and Tesseract. The algorithm does not perform as well as we expected, so we might need to modify it or scrap it entirely.
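
Since the extractor only accepts .png input, one workaround I am considering is a small conversion shim along these lines (a sketch assuming Pillow; the file name is a placeholder):

# Sketch of converting an arbitrary image (e.g., a .jpg photo) to .png
# before handing it to the nutrition label extractor.
from pathlib import Path
from PIL import Image

def to_png(path):
    src = Path(path)
    dst = src.with_suffix(".png")
    # RGBA and other odd modes can break downstream tools, so normalize to RGB.
    Image.open(src).convert("RGB").save(dst, format="PNG")
    return dst

print(to_png("photo.jpg"))  # placeholder file name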

 

In addition to this, I did research online to create a purchase list. I first found a convenient scale to use; we are looking for one that can connect to a laptop and send weight data to the computer. I will work on this part of the project once we receive the scale. The scale seems to export Excel sheets, so I will need to work out how to convert an Excel sheet into database entries. I also looked for LED strips to illuminate the item. All of this work has been added to a parts list shared among the team.
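
I have not seen the scale's actual export yet, so the column names below are guesses, but the conversion would look roughly like this sketch (assuming pandas with openpyxl, and a sqlite database standing in for our eventual backend):

# Rough sketch of loading the scale's exported Excel sheet into a database.
# Column names ("timestamp", "weight_g") and the sqlite table are assumptions;
# the real schema depends on the scale's export format and our Django models.
import sqlite3
import pandas as pd

df = pd.read_excel("scale_export.xlsx")  # requires openpyxl for .xlsx files

with sqlite3.connect("measurements.db") as conn:
    # Append each exported row as a database entry.
    df.to_sql("weights", conn, if_exists="append", index=False)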

 

Currently, my progress is on schedule; however, looking through the library, I expect some issues integrating its various functionalities, which might exceed the estimated time. I plan to get a better idea next week after I run basic small tests using my MacBook camera.
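
The kind of basic smoke test I have in mind is sketched below, assuming opencv-python and pytesseract are installed: grab one frame from the built-in camera and run Tesseract on it.

# Sketch of a basic smoke test: capture one frame from the built-in
# MacBook camera and print whatever text Tesseract finds in it.
import cv2
import pytesseract

cap = cv2.VideoCapture(0)  # index 0 is usually the built-in camera
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # OCR tends to do better on grayscale
    print(pytesseract.image_to_string(gray))
else:
    print("could not read a frame from the camera")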

 

Overall, this week I focused primarily on presentations and organizing work for the following weeks. Next week, I hope to get physical proof that the OCR and nutritional label algorithms work effectively, with example items and classifications displayed on a console. Next, I will look into image classification to classify fruits and vegetables. Finally, I plan to work with Surya to get access to a makerspace at TechSpark to do woodwork.
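
As a quick baseline for the classification step, before training anything custom, I may try a pretrained ImageNet model. The sketch below assumes torch and torchvision; ImageNet already covers several fruits and vegetables (banana, broccoli, etc.), so this is only a starting point, not our final model.

# Sketch of classifying a produce photo with a pretrained ImageNet model.
# The file name is a placeholder; this is a baseline, not our final classifier.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

img = preprocess(Image.open("produce.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)
top = int(probs.argmax(dim=1))
print(weights.meta["categories"][top], float(probs[0, top]))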



Surya Chandramouleeswaran's Status Report for February 10th, 2024

This week, I helped scope out some of the hardware tools needed to interface with our imaging system, the first component of interest for our product. I also helped develop a plan for our team to have this component finished and fully tested in about six weeks.

The first aspect of this involved researching existing cameras that meet the frame rate and pixel resolution benchmarks our OCR technology needs. Last week, during our weekly meetings, Prof. Savvides was kind enough to let me borrow a camera for the image recognition pipeline, suggesting that a simple desk camera would suffice for our application, given that the product will be held stationary. Prof. Savvides also offered his expertise in making the detection algorithm as efficient as possible; I look forward to working with him and my teammates to build this processing pipeline.

Additionally, the layout of our apparatus, as well as details of the interconnect, were finalized; as the primary person in charge of hardware in this group, I took the initiative on this front. For computing reasons, and after prolonged discussion with the software-inclined members of our team, I decided that running the detection algorithms on an Arduino would not be feasible. While we certainly will order Arduinos (RS-232 is rather cheap given the budget constraints of the class), I envision that offloading the computational processes to an external computer may be the best course of action at the beginning. That said, after speaking with Prof. Savvides and our TA Neha last week, we agreed that it is best to test the performance of such tools before dismissing them.

 

We are on schedule progress-wise. We need to start building the recognition system and integrating it with the camera. I look forward to working with Steven on building helper functions for our algorithm through Tesseract. Right now, we would like to test classification on our MacBook laptop cameras, then integrate with the web camera that Prof. Savvides provided us.

 

One design question that has arisen is whether we should implement an auto-capture feature on the camera (where the photo is captured once the camera deems it in focus). For training purposes, it is best if the image set is fairly standardized. This weekend I will brainstorm approaches to ensuring that the image classification algorithm receives quality images with consistent lighting and legible text.
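
One standard focus measure I plan to prototype is the variance of the Laplacian: sharp images have strong edges and thus a high-variance Laplacian. The sketch below assumes opencv-python; the threshold is arbitrary and would need tuning on real frames from our camera.

# Sketch of an auto-capture check: treat a frame as "in focus" when the
# variance of its Laplacian exceeds a threshold. The threshold (100.0) is
# an arbitrary starting point, not a tuned value.
import cv2

def is_in_focus(frame, threshold=100.0):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness >= threshold

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    print("in focus" if is_in_focus(frame) else "blurry")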

 

In the next week, I look to work with Steven on scaffolding the image recognition system and understanding what is expected from the camera the system will speak to. I plan to begin testing on our built-in computer cameras, and I expect to work on some form of autofocus/auto-capture feature to ensure the training process of our classification neural network goes smoothly.

 

Attached please find a copy of our proposed design sketch:

Design Sketch