Steven’s Status Report for Mar 16 2025

For this week, I continued developing my core image processing pipeline. I ran object detection tests with the YOLOv5x model on sample fridge images, and I worked with my team to finalize the list of target food items for model training. I also began integrating the CV pipeline with the Raspberry Pi to validate real-time image capture.
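For reference, the detection test is essentially the following minimal sketch (a sketch only: it assumes PyTorch is installed, and the image path and confidence threshold are placeholders, not our final settings):

```python
# Minimal sketch: run YOLOv5x on one sample fridge image via torch.hub.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5x")  # downloads pretrained weights
model.conf = 0.4  # hypothetical confidence threshold

results = model("sample_fridge.jpg")   # hypothetical test image
results.print()                        # summary of detected items
detections = results.pandas().xyxy[0]  # boxes, confidences, class names
print(detections[["name", "confidence"]])
```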

For my progress, I am currently on track with my project timeline. Preliminary object detection is functioning, and I am continuing to work on improving the model's inference frequency. I am also beginning to integrate my model with the edge processor.

For my goals for next week, I aim to continue improving the performance of my YOLOv5 model and to integrate it with our Raspberry Pi.

William’s Status Report for Mar 9 2025

This week, I made minor refinements to the mobile application’s UI, fixing a few small visual inconsistencies and improving navigation flow slightly. While there weren’t any major changes, I spent some time reviewing previous work to ensure a smoother user experience.

In terms of computer vision integration, I didn’t make much headway beyond continuing to explore different libraries and APIs. I briefly looked into potential solutions but haven’t moved forward with implementation yet.

On the backend side, I reviewed some initial ideas for efficient data handling and cloud storage but haven’t made concrete progress.

Plans for Next Week

  • Continue refining UI/UX with small usability improvements.
  • Look deeper into computer vision tools and determine the most feasible approach.
  • Begin making incremental improvements to backend data handling and storage.

Overall, progress has been steady but slow, and I plan to pick up the pace in the coming week.

Jun Wei’s Status Report for Mar 9 2025

1. Personal accomplishments for the week

1.1 Data transmission pipeline

This week, I attempted to implement a rudimentary data transmission pipeline on the Raspberry Pi that (1) captures images and (2) uploads them to an Amazon Simple Storage Service (S3) bucket. I did so using the boto3 Python package, an AWS SDK that provides an API and low-level access to AWS services. I have encountered some issues in transmitting the images to S3 wirelessly.
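For reference, the intended pipeline is roughly the sketch below. It assumes picamera2 for capture and that AWS credentials are already configured on the Pi; the bucket name and file paths are placeholders rather than our actual configuration:

```python
# Rudimentary capture-and-upload pipeline (sketch).
import time

import boto3
from picamera2 import Picamera2

BUCKET = "fridge-genie-images"  # hypothetical bucket name


def capture_and_upload() -> str:
    # Capture a still image to a temporary file.
    cam = Picamera2()
    cam.configure(cam.create_still_configuration())
    cam.start()
    time.sleep(2)  # let auto-exposure settle
    path = "/tmp/fridge.jpg"
    cam.capture_file(path)
    cam.stop()

    # Upload the image to S3 under a timestamped key.
    s3 = boto3.client("s3")
    key = f"captures/{int(time.time())}.jpg"
    s3.upload_file(path, BUCKET, key)
    return key
```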

2. Progress status

I am slightly behind schedule, as the data transmission pipeline is not fully operational. Ideally, I would have started constructing the motorized camera system, but the camera sliders have not arrived yet. I also need to procure a Raspberry Pi 5, as the current Pi 4 has only one Camera Serial Interface (CSI) port.

3. Goals for the upcoming week

  • Debug the camera data transmission pipeline and lighting control
  • Test image stitching (if the new camera arrives in time) / begin construction of the motorized camera slider (if the slider arrives in time)

Steven’s Status Report for Mar 9 2025

For this week, I continued with data collection, expanding our datasets with annotated images I found online. I also performed further fine-tuning of our training hyperparameters, testing different augmentations and hyperparameter settings on the expanded dataset to improve detection accuracy. I then ran a preliminary test of our YOLOv5x model to measure its detection accuracy. As expected, the YOLOv5x model is more accurate, though it incurs a longer inference time when run locally.
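For context, a fine-tuning run like this typically launches YOLOv5's training script along these lines (a sketch: the dataset and hyperparameter file names are placeholders, and the flag values are illustrative rather than our final settings):

```python
# Kick off a YOLOv5x fine-tuning run (assumes a local clone of
# ultralytics/yolov5 in ./yolov5).
import subprocess

subprocess.run(
    [
        "python", "train.py",
        "--img", "640",             # training image size
        "--batch", "16",
        "--epochs", "100",
        "--data", "fridge.yaml",    # hypothetical dataset config
        "--weights", "yolov5x.pt",  # start from pretrained YOLOv5x
        "--hyp", "hyp.fridge.yaml", # tuned augmentation hyperparameters
    ],
    check=True,
    cwd="yolov5",
)
```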

In terms of progress, I am on track: model accuracy is improving while real-time performance is maintained. The next step in our progress chart is integration, and I will begin integrating our model with the Raspberry Pi and the mobile app.

For next week, I aim to run inference in the cloud and measure inference timing in order to quantitatively assess the latency of our model. I will also continue to train and enhance our YOLOv5x model, and conduct more real-world tests to measure the performance of our optimized model.
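Whether run locally or in the cloud, the timing measurement itself can be as simple as this sketch (it assumes PyTorch and a local test image; the file name and run count are placeholders):

```python
# Measure mean YOLOv5x inference latency over repeated runs.
import time

import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5x")
img = "fridge_test.jpg"  # hypothetical test image

model(img)  # warm-up so one-time setup is not counted

times = []
for _ in range(20):
    start = time.perf_counter()
    model(img)
    times.append(time.perf_counter() - start)

print(f"mean latency: {1000 * sum(times) / len(times):.1f} ms")
```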

William’s Status Report for Feb 23 2025

This week, I made some incremental progress on the mobile application in React Native. I focused primarily on refining the UI and ensuring a smoother user experience, making small adjustments to navigation and layout based on initial feedback. While I didn’t add many new features, I worked on cleaning up the existing code and fixing minor bugs to enhance overall stability.

I also started doing some preliminary research on integrating computer vision for object recognition but haven’t made significant progress yet. I explored a few libraries and APIs to get a better sense of what’s available and suitable for our needs but haven’t begun actual implementation.

For the upcoming week, I plan to continue refining the UI/UX gradually, look deeper into potential computer vision solutions, and make some progress on backend improvements, focusing on efficient data handling and cloud storage options. The project is moving forward at a manageable pace, and I aim to ramp up development gradually.

Steven’s Status Report for Feb 23 2025

For this week, I focused on expanding our existing dataset of annotated fridge images. I identified several datasets online and used them to train and update our existing YOLOv5 model. I also experimented with data augmentation techniques (e.g., rotations and occlusions) to improve the robustness of our model. In addition, I researched the different YOLOv5 models and their expected accuracy and latency for our design report, in order to determine which model will be best optimized for our use.
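To illustrate the kind of augmentation involved, here is a minimal sketch using the albumentations library (my choice of library here is for illustration only; YOLOv5's built-in hyperparameter-driven augmentation is an alternative route, and all parameter values are placeholders):

```python
# Rotation plus occlusion-style augmentation for YOLO-format boxes.
import albumentations as A
import cv2

transform = A.Compose(
    [
        A.Rotate(limit=15, p=0.5),  # small random rotations
        A.CoarseDropout(max_holes=8, max_height=32, max_width=32,
                        p=0.5),     # cut out patches to simulate occlusion
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("fridge_sample.jpg")  # hypothetical training image
augmented = transform(image=image,
                      bboxes=[[0.5, 0.5, 0.2, 0.3]],  # normalized xywh
                      class_labels=["milk"])
```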

In terms of progress, I am currently on schedule: I have completed the development of the training pipeline in PyTorch and am now training the model with our datasets.

For next week, I will explore training the YOLOv5x model with further hyperparameter tuning, with the aim of increasing our detection accuracy to beyond 90%. I will also compare inference timings and explore model quantization for Raspberry Pi optimization, in order to identify the model which best meets our requirements.
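One quantization route worth evaluating is exporting the trained model to ONNX and applying dynamic quantization with ONNX Runtime, as in the sketch below (file names are placeholders; this is an option under consideration, not a settled design):

```python
# Export trained weights to ONNX, then quantize weights to 8-bit to
# shrink the model and speed up CPU inference on the Raspberry Pi.
import subprocess

from onnxruntime.quantization import QuantType, quantize_dynamic

# 1) Export with YOLOv5's export script (assumes a clone of ultralytics/yolov5).
subprocess.run(
    ["python", "export.py", "--weights", "best.pt", "--include", "onnx"],
    check=True,
    cwd="yolov5",
)

# 2) Dynamic weight-only quantization with ONNX Runtime.
quantize_dynamic("yolov5/best.onnx", "best-int8.onnx",
                 weight_type=QuantType.QUInt8)
```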

Team Status Report for February 23, 2025

1. Overview

Our project remains on track as we make significant progress across hardware, CV, and the mobile application. Our efforts were focused on expanding the dataset, optimizing our model, finalizing the design report, and improving the mobile app's UI and backend integration. Though some tasks, such as the camera data transmission pipeline, are still in progress, the project remains on schedule. Next week, we will focus on fine-tuning our model, optimizing inference, and implementing key hardware and software components to seamlessly integrate Fridge Genie's features.


2. Key Achievements

Hardware and Embedded Systems
  • Formally documented use cases, requirements and numerical specifications for our camera system.
  • Derived minimum field-of-view calculations to ensure full fridge coverage
Computer Vision
  • Collected and integrated new annotated fridge datasets to improve model performance
  • Applied data augmentation techniques to enhance the robustness of the model
  • Researched and analyzed different YOLOv5 models to determine which best meets our requirements
Mobile App Development
  • Improved navigation and layout for smoother user experience
  • Cleaned up existing codebase and resolved some minor bugs for enhanced stability
  • Explored libraries and APIs for integrating computer vision into the mobile app

3. Next Steps

Hardware and Embedded Systems
  • Complete data transmission pipeline between camera and Raspberry Pi
  • Begin motorized slider construction for improved scanning if hardware arrives
Computer Vision
  • Train and test YOLOv5x model with hyperparameter tuning to reach >90% detection accuracy
  • Explore model quantization and optimizations for Raspberry Pi to reduce inference time
  • Finalize model comparisons and select optimal YOLOv5 model
Mobile App Development
  • Continue backend optimizations for inventory management and data synchronization
  • Begin CV integration into the app
  • Develop the backend to optimize data storage and retrieval efficiency

4. Outlook

Our team is making good progress, with advancements in CV model training, hardware design and mobile app development. Our key challenges will include minimizing inference latency and finalizing hardware integration. For the next week, we will focus on fine-tuning our ML model, optimizing our inference pipeline and improving backend connectivity for data transfer between the mobile app and our model.

Part A: Global Factors (Will)

Our project addresses the global problem of food waste, which is estimated to cost the global economy $1 trillion per year. By implementing automated inventory tracking as well as expiration date alerts, our solution helps households reduce waste, leading to greater financial savings and food security. This extends beyond developed nations, as the system can be scaled for deployment in less-developed regions where food preservation is critical. Furthermore, the project provides global accessibility through its mobile-first design, which enables users in different countries to easily integrate it into their grocery management habits. Future iterations of our project could support multiple languages and localization to adapt to different markets. Lastly, our project directly supports environmental sustainability by reducing food waste, which accounts for around 10% of global greenhouse gas emissions.

Part B: Cultural Factors (Steven)

When developing our detection model and recipe recommendations, we took into account regional dietary habits and cultural food preferences. Different cultures have different staple foods, packaging, and consumption patterns, so the model must recognize diverse food types. For instance, a refrigerator in an East Asian household might contain more fermented foods such as kimchi or tofu, while a Western household might have more dairy products and processed foods.

While our initial product will focus on American groceries and dietary habits, in future iterations we aim to support culturally relevant recipes. Users will be able to receive cooking suggestions that align with their dietary traditions and preferences. The user interface will also be designed to accommodate individuals who are less technologically literate, enabling accessibility across different demographics.

Part C: Environmental Considerations (Jun Wei)

Our project directly supports environmental sustainability by reducing food waste, which accounts for around 10% of global greenhouse gas emissions. By providing users with real-time grocery tracking and expiration notifications, we help reduce unnecessary grocery purchases and food disposal.

Furthermore, in terms of our hardware, we selected low-power devices such as the Raspberry Pi Zero, which minimizes the system’s carbon footprint. Unlike traditional high-energy smart fridges, we offer an energy-efficient, cost-effective alternative that extends the life of existing fridges instead of requiring consumers to purchase expensive IoT appliances.

For the long term, we could consider modifying our design so that it can be retrofitted to most fridges that consumers currently have. This would make our solution more accessible and help reduce waste at scale, in addition to sparing consumers from replacing their existing fridges (which would, in turn, add to greenhouse emissions). Working with industry stakeholders would also help expand the reach of our solution, benefiting not only individual consumers but also grocery stores, food banks, and restaurants.

Jun Wei’s Status Report for Feb 23 2025

1. Personal accomplishments for the week

1.1 Design report

For this week, most of my efforts were concentrated on producing the design report that was due on Feb 28. The use case and its associated requirements were formally presented in the form of an IEEE-style article. Writing the report also allowed me to formally derive numerical specifications relating to the camera system, specifically the minimum FOV required for our use case.
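The core of that derivation is simple geometry, sketched below with placeholder numbers (the shelf width and camera distance are illustrative, not our actual measurements):

```python
# Minimum horizontal FOV needed to see a shelf of width w from distance d:
# tan(FOV / 2) = (w / 2) / d  =>  FOV = 2 * atan(w / (2 * d)).
import math

shelf_width_cm = 50.0  # hypothetical widest shelf to cover
distance_cm = 20.0     # hypothetical camera-to-shelf distance

fov_deg = math.degrees(2 * math.atan(shelf_width_cm / (2 * distance_cm)))
print(f"minimum horizontal FOV: {fov_deg:.1f} degrees")
```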

2. Progress status

I am currently on schedule and have completed the tasks set out for the week, apart from developing the data transmission pipeline from the camera to the RPi.

3. Goals for the upcoming week

  • Develop the camera data transmission pipeline and lighting control
  • Test image stitching (if the new camera arrives in time) / begin construction of the motorized camera slider (if the slider arrives in time)

Team Status Report for February 16, 2025

1. Overview

Our project remains on track, as we make progress in hardware development, CV modeling, and system integration. For this week, we made key advancements in model prototyping, hardware selection, and data acquisition. Our prototype object detection model was trained and tested with real-world fridge images, and we continued finalizing hardware components, including camera configurations and Raspberry Pi integration. For our next steps, we will focus on expanding datasets, optimizing inference performance, and integrating the detection pipeline into the full system.


2. Key Achievements

Hardware and Embedded Systems
  • Completed a top-level design framework for the project, ensuring seamless integration between hardware, computer vision and mobile components.
  • Ordered two additional Raspberry Pi boards to allow parallel processing and improved scalability
  • Tested the IMX219-160 camera with Raspberry Pi 4 Model B, confirming successful image capture
  • Identified issues with image stitching due to excessive FOV distortion; ordered IMX219-77 cameras for improved results (see the stitching sketch after this list)
  • Finalized list of components required for prototype, including camera sliders for motorized scanning
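For reference, the stitching step under test amounts to the minimal OpenCV sketch below (image paths are placeholders; the wide-FOV lens distortion noted above is what has been breaking this step):

```python
# Stitch overlapping shelf captures into one panorama with OpenCV.
import cv2

paths = ["shelf_left.jpg", "shelf_right.jpg"]  # hypothetical captures
images = [cv2.imread(p) for p in paths]

stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("shelf_panorama.jpg", panorama)
else:
    print(f"stitching failed with status {status}")
```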
Computer Vision
  • Trained initial YOLOv5 model on preliminary dataset of grocery items
  • Conducted real-world testing by capturing images from fridge and running inference on them
  • Successfully detected grocery items within the fridge
Mobile App Development
  • Continued refining the React Native mobile application, improving UI elements and core inventory tracking features.
  • Researched potential libraries and APIs for computer vision integration, laying the groundwork for object recognition features.
  • Improved app navigation and interaction between the mobile app and backend services.

3. Next Steps

Hardware and Embedded Systems
  • Order camera slider mechanism to enable motorized scanning
  • Procure IMX219-77 cameras to replace previous model
  • Set up real-time data transfer between camera and Raspberry Pi
  • Evaluate multi-camera configurations for full fridge scanning
Computer Vision
  • Collect additional grocery images and annotate training samples
  • Continue model training to improve performance
  • Adjust model parameters and experiment with other models to improve detection accuracy
Mobile App Development
  • Further refine the UI and improve responsiveness for a better user experience.
  • Optimize data retrieval and improve real-time synchronization between the app and backend services.
  • Begin initial work on integrating computer vision features into the app.
  • Conduct additional performance testing and address any remaining compatibility issues.

4. Outlook

Our team is making good progress across all aspects. Although we’ve had to acquire new hardware due to unforeseen challenges, we remain on schedule. In the coming weeks, we will shift more towards system integration, improving model accuracy, and optimizing hardware performance to ensure a fully functional prototype by the next milestone.

William Chen’s Status Report for Feb 16 2025

This week, I continued developing our mobile application in React Native, expanding on the initial prototype by integrating additional functionalities and refining the UI for a more polished user experience. I worked on enhancing the app’s core inventory tracking features, improving navigation flow, and ensuring seamless interaction between the front-end interface and backend services.

Additionally, I began exploring computer vision integration by researching suitable libraries and APIs for object recognition, preparing for its incorporation into our system.

For the upcoming week, I plan to further optimize the mobile application’s efficiency, refine UI/UX based on user feedback, and implement the initial phase of computer vision integration. I will also work on improving backend communication to enhance real-time data updates and explore cloud storage options for image processing. Work seems to be on track, but I think most of the difficult tasks are yet to come.