William’s Status Report for Feb 23 2025

This week, I made some incremental progress on the mobile application in React Native. I focused primarily on refining the UI and ensuring a smoother user experience, making small adjustments to navigation and layout based on initial feedback. While I didn’t add many new features, I worked on cleaning up the existing code and fixing minor bugs to enhance overall stability.

I also started doing some preliminary research on integrating computer vision for object recognition but haven’t made significant progress yet. I explored a few libraries and APIs to get a better sense of what’s available and suitable for our needs but haven’t begun actual implementation.

For the upcoming week, I plan to continue refining the UI/UX gradually, look deeper into potential computer vision solutions, and make some progress on backend improvements, focusing on efficient data handling and cloud storage options. The project is moving forward at a manageable pace, and I aim to ramp up development gradually.

Steven’s Status Report for Feb 23 2025

For this week, I focused on expanding our existing dataset of annotated fridge images. I identified several datasets online and used them to train and update our existing YOLOv5 model. I also experimented with data augmentation techniques (e.g., rotations and occlusions) to improve the robustness of our model. Furthermore, I spent time researching the different YOLOv5 models and their expected accuracy and latency for our design report, in order to determine which model is best optimized for our use case.
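The augmentation step above can be sketched as follows. This is a minimal NumPy-only illustration of random rotation plus random occlusion (our actual pipeline runs through our PyTorch tooling; the 90-degree rotation choice and patch-size limits here are illustrative assumptions, not the tuned values):

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply a random 90-degree rotation and a random occlusion patch."""
    # Random rotation by a multiple of 90 degrees (keeps bounding-box math simple).
    out = np.rot90(image, k=int(rng.integers(0, 4))).copy()

    # Random rectangular occlusion: zero out a patch up to a quarter of each side.
    h, w = out.shape[:2]
    ph = int(rng.integers(1, max(2, h // 4)))
    pw = int(rng.integers(1, max(2, w // 4)))
    y = int(rng.integers(0, h - ph))
    x = int(rng.integers(0, w - pw))
    out[y:y + ph, x:x + pw] = 0
    return out
```

For a real YOLO training run, the bounding-box labels would also need to be rotated alongside the image; restricting rotations to multiples of 90 degrees keeps that transform exact.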

In terms of progress, I am currently on schedule and have completed the development of the training pipeline in PyTorch, and am working on training the model with our datasets.

For next week, I will explore training the YOLOv5x model with further hyperparameter tuning, with the aim of increasing our detection accuracy beyond 90%. I will also compare inference timings across models and explore model quantization for Raspberry Pi optimization, in order to identify the model that best meets our requirements.
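The inference-timing comparison can be driven by a small benchmarking helper like the sketch below (pure standard library; `fn` stands in for an actual YOLOv5 forward pass, and the warm-up and run counts are assumptions to be tuned per device):

```python
import statistics
import time
from typing import Callable

def benchmark(fn: Callable[[], object], warmup: int = 3, runs: int = 20) -> dict:
    """Time a zero-argument inference callable; return latency stats in ms."""
    for _ in range(warmup):  # warm-up runs populate caches and lazy-init paths
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return {
        "mean_ms": statistics.mean(samples),
        "median_ms": statistics.median(samples),
        "p95_ms": sorted(samples)[int(0.95 * (len(samples) - 1))],
    }
```

Running the same harness on the Raspberry Pi and on a workstation, for both the float and quantized variants of each YOLOv5 model, would give directly comparable numbers for the model-selection decision.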

Team Status Report for February 23 2025

1. Overview

Our project remains on track as we make significant progress across hardware, CV, and mobile application refinement. Our efforts were focused on expanding the dataset, optimizing our model, finalizing the design report, as well as improving the mobile app’s UI and backend integration. Though some tasks, such as the camera data transmission pipeline, are still in progress, the project remains on schedule. Next week, we will focus on fine-tuning our model, optimizing inference, and implementing key hardware and software components to seamlessly integrate Fridge Genie’s features.


2. Key Achievements

Hardware and Embedded Systems
  • Formally documented use cases, requirements and numerical specifications for our camera system.
  • Derived minimum field-of-view calculations to ensure full fridge coverage
Computer Vision
  • Collected and integrated new annotated fridge datasets to improve model performance
  • Applied data augmentation techniques to enhance model robustness
  • Researched and analyzed different YOLOv5 models to determine which best meets our requirements
Mobile App Development
  • Improved navigation and layout for smoother user experience
  • Cleaned up existing codebase and resolved some minor bugs for enhanced stability
  • Explored libraries and APIs for integrating computer vision into the mobile app

3. Next Steps

Hardware and Embedded Systems
  • Complete data transmission pipeline between camera and Raspberry Pi
  • Begin motorized slider construction for improved scanning if hardware arrives
Computer Vision
  • Train and test YOLOv5x model with hyperparameter tuning to reach >90% detection accuracy
  • Explore model quantization and optimizations for Raspberry Pi to reduce inference time
  • Finalize model comparisons and select optimal YOLOv5 model
Mobile App Development
  • Continue backend development, optimizing inventory management, data synchronization, and data storage/retrieval efficiency
  • Begin integrating CV into the app

4. Outlook

Our team is making good progress, with advancements in CV model training, hardware design and mobile app development. Our key challenges will include minimizing inference latency and finalizing hardware integration. For the next week, we will focus on fine-tuning our ML model, optimizing our inference pipeline and improving backend connectivity for data transfer between the mobile app and our model.

Part A: Global Factors (Will)

Our project addresses the global problem of food waste, which is estimated to cost the global economy $1 trillion per year. By implementing automated inventory tracking as well as expiration date alerts, our solution helps households reduce waste, which leads to greater financial savings and food security. This extends beyond developed nations, as the system can be scaled for deployment in less-developed regions where food preservation is critical. Furthermore, the project provides global accessibility through its mobile-first design, which enables users in different countries to easily integrate it into their grocery management habits. Future iterations of our project could support multiple languages and localization to adapt to different markets. Lastly, our project directly supports environmental sustainability by reducing food waste, which accounts for around 10% of global greenhouse gas emissions.

Part B: Cultural Factors (Steven)

When developing our detection model and recipe recommendations, we took into account regional dietary habits and cultural food preferences. Different cultures have different staple foods, packaging, and consumption patterns, so the model must recognize diverse food types. For instance, a refrigerator in an East Asian household might contain more fermented foods such as kimchi or tofu, while a Western household might have more dairy products and processed foods.

While our initial product will focus on American groceries and dietary habits, in future iterations we aim to support culturally relevant recipes. Users will be able to receive cooking suggestions that align with their dietary traditions and preferences. The user interface will also be designed to accommodate individuals who are less technologically literate, enabling accessibility across different demographics.

Part C: Environmental Considerations (Jun Wei)

Our project directly supports environmental sustainability by reducing food waste, which accounts for around 10% of global greenhouse gas emissions. By providing users with real-time grocery tracking and expiration notifications, we help reduce unnecessary grocery purchases and food disposal.

Furthermore, in terms of our hardware, we selected low-power devices such as the Raspberry Pi Zero, which minimizes the system’s carbon footprint. Unlike traditional high-energy smart fridges, we offer an energy-efficient, cost-effective alternative that extends the life of existing fridges instead of requiring consumers to purchase expensive IoT appliances.

For the long term, we could consider modifying our design so that it can be retrofitted to most fridges consumers currently own. This would make our solution more accessible and help reduce waste at scale, in addition to sparing consumers from having to replace their existing fridges (which would, in turn, add to greenhouse gas emissions). Working with industry stakeholders would also help expand the reach of our solution, benefiting not only individual consumers but also grocery stores, food banks, and restaurants.

Jun Wei’s Status Report for Feb 23 2025

1. Personal accomplishments for the week

1.1 Design report

For this week, most of my efforts were concentrated on producing the design report due on Feb 28. The use case and its associated requirements were formally presented in the form of an IEEE-style article. Writing the report also allowed me to formally derive numerical specifications for the camera system, specifically the minimum FOV required for our use case.
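The minimum-FOV specification follows from right-triangle geometry: a camera at distance d from a plane of width w must cover a half-width of w/2 on each side, giving FOV = 2·arctan(w / 2d). A quick sketch (the shelf width and camera distance below are placeholder values for illustration, not our actual measurements):

```python
import math

def min_fov_deg(coverage_width_cm: float, camera_distance_cm: float) -> float:
    """Minimum horizontal FOV (degrees) to fully cover a plane of the given width."""
    return math.degrees(2.0 * math.atan(coverage_width_cm / (2.0 * camera_distance_cm)))

# Example with placeholder dimensions: a 50 cm-wide shelf viewed from 30 cm away.
print(round(min_fov_deg(50.0, 30.0), 1))  # → 79.6
```

The same formula applies per axis, so the vertical FOV requirement follows by substituting the shelf depth or height for the width.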

2. Progress status

I am currently on schedule and have completed the tasks set out for the week, apart from developing the data transmission pipeline from the camera to the RPi.

3. Goals for the upcoming week

  • Developing camera data transmission pipeline and lighting control
  • Testing image stitching (if the new camera arrives in time) / Begin construction of motorized camera slider (if the slider arrives in time)
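One way to structure the camera-to-RPi transmission pipeline is a producer/consumer queue that decouples frame capture from network transfer, as in the sketch below (pure standard library; `capture` and `upload` are hypothetical stand-ins for the real camera read and network send, which are not yet implemented):

```python
import queue
import threading
from typing import Callable, Optional

def run_pipeline(capture: Callable[[], Optional[bytes]],
                 upload: Callable[[bytes], None],
                 max_pending: int = 8) -> None:
    """Move frames from capture() to upload() until capture() returns None."""
    frames: "queue.Queue[Optional[bytes]]" = queue.Queue(maxsize=max_pending)

    def producer() -> None:
        while True:
            frame = capture()
            frames.put(frame)  # None acts as the shutdown sentinel
            if frame is None:
                return

    worker = threading.Thread(target=producer, daemon=True)
    worker.start()
    while True:
        frame = frames.get()
        if frame is None:  # sentinel: capture source exhausted
            break
        upload(frame)
    worker.join()
```

The bounded queue gives natural back-pressure: if uploads stall, capture blocks rather than exhausting the Pi's limited memory.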

Team Status Report for February 16, 2025

1. Overview

Our project continues to remain on track as we make progress in hardware development, CV modeling, and system integration. This week, we made key advancements in model prototyping, hardware selection, and data acquisition. Our prototype object detection model was trained and tested on real-world fridge images, and we continued finalizing hardware components, including camera configurations and Raspberry Pi integration. For our next steps, we will focus on expanding datasets, optimizing inference performance, and integrating the detection pipeline into the full system.


2. Key Achievements

Hardware and Embedded Systems
  • Completed a top-level design framework for the project, ensuring seamless integration between hardware, computer vision and mobile components.
  • Ordered two additional Raspberry Pi boards to allow parallel processing and improved scalability
  • Tested the IMX219-160 camera with Raspberry Pi 4 Model B, confirming successful image capture
  • Identified issues with image stitching due to excessive FOV distortion and ordered IMX219-77 cameras for improved results
  • Finalized list of components required for prototype, including camera sliders for motorized scanning
Computer Vision
  • Trained initial YOLOv5 model on preliminary dataset of grocery items
  • Conducted real-world testing by capturing images from fridge and running inference on them
  • Successfully detected a grocery item within the fridge
Mobile App Development
  • Continued refining the React Native mobile application, improving UI elements and core inventory tracking features.
  • Researched potential libraries and APIs for computer vision integration, laying the groundwork for object recognition features.
  • Improved app navigation and interaction between the mobile app and backend services.

3. Next Steps

Hardware and Embedded Systems
  • Order camera slider mechanism to enable motorized scanning
  • Procure IMX219-77 cameras to replace previous model
  • Set up real-time data transfer between camera and Raspberry Pi
  • Evaluate multi-camera configurations for full fridge scanning
Computer Vision
  • Collect additional datasets of grocery images and annotate training samples
  • Continue model training to improve performance
  • Adjust model parameters and experiment with other models to improve detection accuracy
Mobile App Development
  • Further refine the UI and improve responsiveness for a better user experience.
  • Optimize data retrieval and improve real-time synchronization between the app and backend services.
  • Begin initial work on integrating computer vision features into the app.
  • Conduct additional performance testing and address any remaining compatibility issues.

4. Outlook

Our team is making good progress across all aspects. Although we’ve had to acquire new hardware due to unforeseen challenges, we remain on schedule. In the coming weeks, we will shift more towards system integration, improving model accuracy, and optimizing hardware performance to ensure a fully functional prototype by the next milestone.

William Chen’s Status Report for Feb 16 2025

This week, I continued developing our mobile application in React Native, expanding on the initial prototype by integrating additional functionalities and refining the UI for a more polished user experience. I worked on enhancing the app’s core inventory tracking features, improving navigation flow, and ensuring seamless interaction between the front-end interface and backend services.

Additionally, I began exploring computer vision integration by researching suitable libraries and APIs for object recognition, preparing for its incorporation into our system.

For the upcoming week, I plan to further optimize the mobile application’s efficiency, refine UI/UX based on user feedback, and implement the initial phase of computer vision integration. I will also work on improving backend communication to enhance real-time data updates and explore cloud storage options for image processing. Work seems to be going on track, but I think most of the difficult tasks are yet to come.

Jun Wei’s Status Report for Feb 16 2025

1. Personal accomplishments for the week

1.1 System design finalization

The system design was finalized in preparation for the design presentation. As I am in charge of the embedded components of the project, it made sense for me to develop the top-level design for the project.

Top-Level Design as of Feb 16

1.2 Finalizing parts to order

I also spent time requisitioning and ordering additional parts, namely Raspberry Pis (RPis). Specifically, I ordered two additional RPi Zeros after realizing that each RPi has only one Camera Serial Interface port. Thus, in addition to enabling parallel, decentralized processing, having one RPi per camera allows for some scalability with regard to camera and motor control. The RPi Zero was also chosen for its smaller form factor, allowing it to be placed within the confines of the refrigerator.

1.3 Testing the IMX219-160 (FOV 160°) camera

I also finally received the IMX219-160 camera ordered slightly over a week ago. I have hooked it up to the Raspberry Pi 4 Model B requisitioned from the ECE inventory and have been able to capture some images. I have not yet tried stitching outputs from the camera; however, I believe its extreme FOV will not be ideal for image stitching. As such, I will proceed with ordering more cameras with a lower FOV (~80 degrees).

2. Progress status

I am currently on schedule and have completed the tasks set out for the week, apart from developing the data transmission pipeline from the camera to the RPi.

3. Goals for upcoming week

  • Order the camera slider
  • Order the IMX219-77 (FOV 79.3°) camera
  • Developing camera data transmission pipeline and lighting control
  • Testing image stitching (if the new camera arrives in time)

Steven’s Status Report for February 16, 2025

For this week, I focused on creating a prototype of the model we will use for detection and classification of grocery items. To accomplish this, I trained an initial YOLOv5 model on a small preliminary dataset of grocery items. I made use of a diverse dataset, in order to ensure basic object detection and classification functionality.

After training, I ran some initial tests using an image of a cake inside the fridge we are using. I ran the model on the image to evaluate the performance of the prototype; the results are below.

As can be seen, the preliminary model was already quite successful in identifying the bounding box of the item. Though the classification was not completely accurate, this is understandable since it was a niche item (a cake from Giant Eagle).

I am currently making good progress with respect to our Gantt chart and have started training the model slightly ahead of schedule. In the following weeks, I aim to continue data collection by finding more datasets of fridge images, as well as exploring training with annotated images of groceries taken in our own fridge. I also aim to fine-tune the model by adjusting hyperparameters and increasing the training set through data augmentation to improve accuracy. I will experiment with different YOLOv5 variants to see whether a larger model yields better accuracy without major latency trade-offs, and I will measure inference latency locally and in the cloud to determine which option better suits our cost and latency requirements.
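The evaluation step above amounts to filtering the model's raw detections by confidence and mapping class indices to labels. A minimal sketch of that post-processing (the detection tuples and class names below are illustrative placeholders, not actual model output):

```python
from typing import List, Tuple

# Each raw detection: (x1, y1, x2, y2, confidence, class_index)
Detection = Tuple[float, float, float, float, float, int]

def filter_detections(raw: List[Detection], names: List[str],
                      conf_thresh: float = 0.5) -> List[Tuple[str, float]]:
    """Keep detections at or above the confidence threshold as (label, conf) pairs."""
    return [(names[int(cls)], conf)
            for *_box, conf, cls in raw
            if conf >= conf_thresh]
```

Sweeping `conf_thresh` against an annotated validation set is also how the precision/recall trade-off behind the >90% accuracy target would be measured.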

 

Team Status Report for February 9, 2025

1. Overview

Our project remains on track, with progress across hardware, computer vision (CV), and mobile development efforts. This week, significant advancements were made in image processing, model prototyping, and mobile app development. While some hardware acquisition was delayed, progress in algorithm development and system integration ensures that we are still aligned with our overall timeline.


2. Key Achievements

Hardware and Embedded Systems
  • Conducted initial experimentation with image stitching to assess the feasibility of a single-camera solution.
  • Explored OpenCV’s Stitcher_create() function, identifying key constraints such as the need for image overlap and the drawbacks of high-FOV cameras in feature mapping.
  • Gained insights into optimal camera placement, frequency of image capture, and required FOV for effective image processing.
  • Prepared for upcoming camera and data transmission pipeline testing once equipment is acquired.
Computer Vision
  • Set up the PyTorch environment and implemented an initial YOLOv5 prototype.
  • Developed a preprocessing pipeline using OpenCV and conducted preliminary training on small annotated grocery item datasets.
  • Started integrating external datasets from Kaggle and Roboflow to improve model accuracy and robustness.
  • Began benchmarking inference speed on local (Raspberry Pi) and cloud-based setups to measure latency and performance trade-offs.
Mobile App Development
  • Built the foundation of the React Native mobile application, implementing a core feature prototype.
  • Configured project dependencies and set up the UI framework for key functionalities.
  • Explored integration with backend services to support inventory tracking and recommendation features.
  • Prepared for the integration of computer vision capabilities in future iterations.

3. Next Steps

Hardware and Embedded Systems
  • Acquire necessary equipment, including the Raspberry Pi, camera, and related components.
  • Test camera performance within a fridge environment and optimize settings for optimal image capture.
  • Develop and test data transmission pipelines between the camera, Raspberry Pi, and cloud storage (ownCloud).
Computer Vision
  • Complete dataset collection and refine annotation processes for improved training quality.
  • Conduct further testing on local vs. cloud inference to optimize system efficiency.
  • Integrate YOLOv5 model with the mobile application and test its effectiveness in real-world conditions.
Mobile App Development
  • Expand the prototype by implementing additional features and refining UI/UX.
  • Improve app performance and address any identified compatibility issues.
  • Begin integrating computer vision functionalities and explore API options for seamless connectivity.

4. Outlook

The team is making steady progress across all development fronts. While hardware testing was slightly delayed due to equipment requisition timing, the early progress in software-based experimentation ensures we remain aligned with our overall goals. As we move forward, the focus will shift toward system integration, testing, and optimization to create a fully functional prototype in the coming weeks.

Part A: Public Health, Safety, and Welfare (Will)

Our fridge camera system enhances public health by promoting better nutrition and reducing food waste. By providing real-time tracking of fridge contents, users can make more informed decisions about meal planning, ensuring they consume fresh and balanced meals. The system also helps prevent the consumption of expired or spoiled food, reducing the risk of foodborne illnesses. Additionally, by suggesting recipes based on available ingredients, the product encourages healthier eating habits by making meal preparation more convenient and accessible.

In terms of safety, the automated rail system is designed with smooth and secure movements to prevent accidents such as knocking over or damaging fridge items. The device is built with user-friendly controls and safety mechanisms to ensure it does not pose a hazard to household members. The system also enhances welfare by addressing basic food security concerns—by reducing food waste and maximizing ingredient use, households can make their groceries last longer, ultimately benefiting those on tight budgets or with limited access to fresh food.

Part B: Social Considerations (Steven)

Socially, this product addresses the needs of diverse households, including busy professionals, families, and individuals who struggle with meal planning. For families, the ability to track fridge contents remotely ensures better coordination in grocery shopping, preventing unnecessary purchases and reducing food waste. The recipe recommendation feature also helps bring people together by facilitating home-cooked meals, fostering stronger family and social bonds.

For individuals with dietary restrictions or cultural food preferences, the app can be tailored to suggest recipes that align with their specific needs, promoting inclusivity and personalization. Additionally, the product encourages sustainability by helping consumers become more mindful of their consumption habits, aligning with broader social movements that advocate for reducing food waste and promoting environmentally responsible living.

Part C: Economic Considerations (Jun Wei)

From an economic perspective, the fridge camera system helps households save money by reducing food waste and unnecessary grocery purchases. By keeping an up-to-date inventory of fridge contents, users can avoid buying duplicate items and make more efficient use of what they already have. The recipe recommendation feature further enhances economic efficiency by helping users maximize ingredients, minimizing waste, and stretching grocery budgets further.

On a larger scale, this product could contribute to economic benefits in the food industry by supporting more efficient consumption patterns, potentially reducing demand volatility in grocery supply chains. Additionally, the integration with an iPhone app presents opportunities for monetization through premium features, such as AI-driven meal planning, partnerships with grocery delivery services, or integrations with smart home ecosystems. As adoption grows, this product has the potential to create job opportunities in software development, hardware manufacturing, and customer support, contributing to economic activity in the tech and consumer goods industries.

William’s Status Report for February 9, 2025

This week, I focused on developing the foundation of our mobile application using React Native. I created a basic prototype that implements a subset of core features, allowing us to evaluate performance, usability, and integration with our backend services.

During the development process, I set up the project structure, configured necessary dependencies, and implemented initial UI components for core functionalities. Additionally, I explored integration with backend services to ensure smooth data retrieval and interaction with our recommendation system.

For the upcoming week, I will expand the prototype by integrating additional functionalities and refining the UI for a more polished user experience. I will also focus on improving performance, conducting more rigorous testing, and addressing any compatibility issues that arise. Additionally, I will begin incorporating computer vision capabilities into the app, exploring libraries and APIs to support this feature effectively.