Team Status Report for February 23, 2025

1. Overview

Our project remains on track, with significant progress across hardware, CV, and mobile application refinement. Our efforts focused on expanding the dataset, optimizing our model, finalizing the design report, and improving the mobile app’s UI and backend integration. Though some tasks, such as the camera data transmission pipeline, are still in progress, we remain on schedule. Next week, we will focus on fine-tuning our model, optimizing inference, and implementing key hardware and software components to seamlessly integrate Fridge Genie’s features.


2. Key Achievements

Hardware and Embedded Systems
  • Formally documented use cases, requirements and numerical specifications for our camera system.
  • Derived minimum field-of-view calculations to ensure full fridge coverage
Computer Vision
  • Collected and integrated new annotated fridge datasets to improve model performance
  • Applied data augmentation techniques to enhance model robustness
  • Researched and analyzed different YOLOv5 models to determine which best meets our requirements
Mobile App Development
  • Improved navigation and layout for smoother user experience
  • Cleaned up existing codebase and resolved some minor bugs for enhanced stability
  • Explored libraries and APIs for integrating computer vision into the mobile app

3. Next Steps

Hardware and Embedded Systems
  • Complete data transmission pipeline between camera and Raspberry Pi
  • Begin motorized slider construction for improved scanning if hardware arrives
Computer Vision
  • Train and test YOLOv5x model with hyperparameter tuning to reach >90% detection accuracy
  • Explore model quantization and optimizations for Raspberry Pi to reduce inference time
  • Finalize model comparisons and select optimal YOLOv5 model
Mobile App Development
  • Continue backend optimizations for inventory management and data synchronization
  • Begin integrating the CV model into the app
  • Continue backend development to optimize data storage and retrieval efficiency

4. Outlook

Our team is making good progress, with advancements in CV model training, hardware design, and mobile app development. Our key challenges will include minimizing inference latency and finalizing hardware integration. Next week, we will focus on fine-tuning our ML model, optimizing our inference pipeline, and improving backend connectivity for data transfer between the mobile app and our model.

Part A: Global Factors (Will)

Our project addresses the global problem of food waste, which is estimated to cost the global economy $1 trillion per year. By implementing automated inventory tracking and expiration date alerts, our solution helps households reduce waste, leading to financial savings and greater food security. This extends beyond developed nations, as the system can be scaled for deployment in less-developed regions where food preservation is critical. Furthermore, the project provides global accessibility through its mobile-first design, which enables users in different countries to easily integrate it into their grocery management habits. Future iterations of our project could support multiple languages and localization to adapt to different markets. Lastly, our project directly supports environmental sustainability by reducing food waste, which accounts for around 10% of global greenhouse gas emissions.

Part B: Cultural Factors (Steven)

When developing our detection model and recipe recommendations, we took into account regional dietary habits and cultural food preferences. Different cultures have different staple foods, packaging, and consumption patterns, so the model must recognize diverse food types. For instance, a refrigerator in an East Asian household might contain more fermented foods such as kimchi and tofu, while a Western household might have more dairy products and processed foods.

While our initial product will focus on American groceries and dietary habits, future iterations will aim to support culturally relevant recipes. Users will be able to receive cooking suggestions that align with their dietary traditions and preferences. The user interface will also be designed to accommodate individuals who are less technologically literate, enabling accessibility across different demographics.

Part C: Environmental Considerations (Jun Wei)

Our project directly supports environmental sustainability by reducing food waste, which accounts for around 10% of global greenhouse gas emissions. By providing users with real-time grocery tracking and expiration notifications, we help reduce unnecessary grocery purchases and food disposal.

Furthermore, in terms of our hardware, we selected low-power devices such as the Raspberry Pi Zero, which minimizes the system’s carbon footprint. Unlike traditional high-energy smart fridges, we offer an energy-efficient, cost-effective alternative that extends the life of existing fridges instead of requiring consumers to purchase expensive IoT appliances.

For the long term, we could consider modifying our design to be retrofittable to most fridges that consumers currently own. This would make our solution more accessible and help reduce waste at scale, in addition to preventing consumers from having to replace their existing fridges (which would, in turn, add to greenhouse gas emissions). Working with industry stakeholders would also help expand the reach of our solution, benefiting not only individual consumers but also grocery stores, food banks, and restaurants.

Jun Wei’s Status Report for Feb 23 2025

1. Personal accomplishments for the week

1.1 Design report

For this week, most of my efforts were concentrated on producing the design report due on Feb 28. The use case and its associated requirements were formally presented in the form of an IEEE-style article. Writing the report also allowed me to formally derive numerical specifications for the camera system, specifically the minimum FOV required for our use case.
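The minimum-FOV derivation follows basic pinhole geometry: a camera at distance d from a shelf of width w needs a horizontal FOV of at least 2·arctan(w / 2d) to cover the shelf in a single frame. A minimal sketch of that calculation (the shelf width and camera distance below are placeholder values, not our actual fridge measurements):

```python
import math

def min_fov_deg(shelf_width_cm: float, camera_distance_cm: float) -> float:
    """Minimum horizontal FOV (degrees) needed for a camera at the given
    distance to cover the full shelf width in a single frame."""
    return math.degrees(2 * math.atan(shelf_width_cm / (2 * camera_distance_cm)))

# Placeholder numbers: a 50 cm wide shelf viewed from 30 cm away
print(f"{min_fov_deg(50, 30):.1f} deg")  # roughly 80 degrees
```

Note how quickly the required FOV grows as the camera moves closer to the shelf, which is why camera placement and FOV selection have to be decided together.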

2. Progress status

I am currently on schedule and have completed the tasks set out for the week, apart from developing the data transmission pipeline from the camera to the RPi.

3. Goals for the upcoming week

  • Developing camera data transmission pipeline and lighting control
  • Testing image stitching (if the new camera arrives in time) / Begin construction of motorized camera slider (if the slider arrives in time)

Team Status Report for February 16, 2025

1. Overview

Our project remains on track, as we make progress in hardware development, CV modeling, and system integration. This week, we made key advancements in model prototyping, hardware selection, and data acquisition. Our prototype object detection model was trained and tested on real-world fridge images, and we continued finalizing hardware components, including camera configurations and Raspberry Pi integration. For our next steps, we will focus on expanding datasets, optimizing inference performance, and integrating the detection pipeline into the full system.


2. Key Achievements

Hardware and Embedded Systems
  • Completed a top-level design framework for the project, ensuring seamless integration between hardware, computer vision and mobile components.
  • Ordered two additional Raspberry Pi boards to allow parallel processing and improved scalability
  • Tested the IMX219-160 camera with Raspberry Pi 4 Model B, confirming successful image capture
  • Identified issues with image stitching due to excessive FOV distortion, ordered IMX219-77 cameras for improved results
  • Finalized list of components required for prototype, including camera sliders for motorized scanning
Computer Vision
  • Trained initial YOLOv5 model on preliminary dataset of grocery items
  • Conducted real-world testing by capturing images from fridge and running inference on them
  • Successfully detected grocery items within the fridge
Mobile App Development
  • Continued refining the React Native mobile application, improving UI elements and core inventory tracking features.
  • Researched potential libraries and APIs for computer vision integration, laying the groundwork for object recognition features.
  • Improved app navigation and interaction between the mobile app and backend services.

3. Next Steps

Hardware and Embedded Systems
  • Order camera slider mechanism to enable motorized scanning
  • Procure IMX219-77 cameras to replace previous model
  • Set up real-time data transfer between camera and Raspberry Pi
  • Evaluate multi-camera configurations for full fridge scanning
Computer Vision
  • Collect additional grocery images and annotate training samples
  • Continue model training to improve performance
  • Adjust model parameters and experiment with other models to improve detection accuracy
Mobile App Development
  • Further refine the UI and improve responsiveness for a better user experience.
  • Optimize data retrieval and improve real-time synchronization between the app and backend services.
  • Begin initial work on integrating computer vision features into the app.
  • Conduct additional performance testing and address any remaining compatibility issues.

4. Outlook

Our team is making good progress across all aspects. Although we’ve had to acquire new hardware due to unforeseen challenges, we remain on schedule. In the coming weeks, we will shift more towards system integration, improving model accuracy, and optimizing hardware performance to ensure a fully functional prototype by the next milestone.

Jun Wei’s Status Report for Feb 16 2025

1. Personal accomplishments for the week

1.1 System design finalization

The system design was finalized in preparation for the design presentation. As I am in charge of the embedded components of the project, it made sense for me to develop the top-level design for the project.

Top-Level Design as of Feb 16

1.2 Finalizing parts to order

I also spent time requisitioning and ordering additional parts, namely Raspberry Pis (RPis). Specifically, I ordered two additional RPi Zeros after realizing that each RPi has only one Camera Serial Interface port. Thus, in addition to enabling parallel decentralized processing, having one RPi per camera allows for some scalability with regard to camera and motor control. The RPi Zero was also chosen for its smaller form factor, allowing it to be placed within the confines of the refrigerator.

1.3 Testing the IMX219-160 (FOV 160°) camera

I also finally received the IMX219-160 camera ordered slightly over a week ago. I have hooked it up to the Raspberry Pi 4 Model B requisitioned from the ECE inventory and have been able to capture some images. I have not yet tried stitching outputs from the camera; however, I believe its extreme FOV will not be ideal for image stitching. As such, I will proceed with ordering more cameras with a lower FOV (~80°).

2. Progress status

I am currently on schedule and have completed the tasks set out for the week, apart from developing the data transmission pipeline from the camera to the RPi.

3. Goals for upcoming week

  • Order the camera slider
  • Order the IMX219-77 (FOV 79.3°) camera
  • Developing camera data transmission pipeline and lighting control
  • Testing image stitching (if the new camera arrives in time)

Jun Wei’s Status Report for Feb 9 2025

1. Personal accomplishments for the week

1.1 Image stitching experimentation

I experimented with image stitching to explore the feasibility of a one-camera solution. The rationale for using image stitching over merely relying on a high field of view (FOV) camera was to

  • Mitigate obstruction/obscurity from other objects; and
  • Gather information from different POVs (through multiple images).

I made use of the OpenCV library’s Stitcher_create() function in Python. OpenCV’s Stitcher class provides a high-level API with a built-in stitching pipeline that performs feature detection and matching, as well as homography-based estimation for image warping. I captured images with my smartphone camera, using both the regular (FOV 85°) and the ultra-wide (FOV 120°) lenses. However, I found that image stitching failed on images taken with the latter. As such, I only have outputs from the regular FOV lens:

Stitched image outputs:


These were my learning points and takeaways:

  • Image stitching is best suited for cameras with low FOVs as higher FOVs tend to warp features on the extreme end;
  • Images need some overlap for feature mapping (ideally around 1/3);
  • Too much overlap can lead to unwanted warping during stitching/duplicated features; and
  • Drastic changes in POV (either due to sparse image intervals or objects being extremely close to the camera, such as the plastic bottle above) can cause object duplication due to diminished feature mapping effectiveness.

For comparison, I have the following single high-FOV shot taken from my smartphone:

In all, I believe image stitching does confer significant advantages over a single high FOV shot:

  • More information captured (the apples and blue container are obscured by the transparent bottle in the high-FOV shot)
  • Reduced object warp/deformation, which is crucial for accurate object classification

Following this, a natural extension would be to explore effective image stitching pipeline implementations on an embedded platform, or even a real-time implementation.

2. Progress status

While I did not fill out the equipment requisition form as early in the week as I had hoped, I was able to get a head start on the image stitching algorithm, which in turn better informs decisions on 1) camera placement, 2) frequency of image capture, and 3) desired camera FOV. I will defer the camera testing and data transmission pipelines to the coming week, when I will (hopefully) have received the equipment.
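The ~1/3-overlap takeaway translates directly into a capture-interval rule: each frame covers a footprint of 2·d·tan(FOV/2), and the camera may advance by the non-overlapping fraction of that footprint between shots. A sketch with placeholder numbers (the FOV and shelf distance below are illustrative, not measured values):

```python
import math

def capture_step_cm(fov_deg: float, distance_cm: float, overlap: float = 1/3) -> float:
    """Distance the camera can slide between shots while keeping the
    requested fractional overlap between consecutive frames."""
    footprint = 2 * distance_cm * math.tan(math.radians(fov_deg) / 2)  # width seen per frame
    return footprint * (1 - overlap)

# Placeholder: a ~79 deg camera 30 cm from the shelf contents
step = capture_step_cm(79.3, 30)  # about 33 cm between shots
```

A rule like this would let the slider spacing (or capture frequency) be derived from the chosen camera FOV rather than tuned by trial and error.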

3. Goals for upcoming week

For the upcoming week, I would like to

  • Acquire the equipment outlined in my previous update
  • Test the camera within the confines of the fridge
  • Develop data transmission pipeline from the camera to the RPi
  • Develop transmission pipeline from RPi to cloud server (ownCloud), if time permits
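For the ownCloud leg of the pipeline, an upload is a WebDAV PUT against the server's standard `remote.php/dav/files/<user>/` endpoint. A stdlib-only sketch of building such a request (the host, credentials, and remote path below are placeholders):

```python
import base64
import urllib.request

def build_upload_request(base_url: str, user: str, password: str,
                         remote_path: str, payload: bytes) -> urllib.request.Request:
    """Build a WebDAV PUT request for uploading one image to ownCloud.
    The URL layout follows ownCloud's standard WebDAV endpoint; the
    server address and credentials are placeholders."""
    url = f"{base_url}/remote.php/dav/files/{user}/{remote_path}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url, data=payload, method="PUT")
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Content-Type", "application/octet-stream")
    return req
```

Sending the image would then be a call to `urllib.request.urlopen(req)`; retry logic and TLS details are omitted from this sketch.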

Jun Wei’s Status Report for Feb 02 2025

1. Personal accomplishments for the week

1.1 Identification of equipment for procurement

Quantities of items below include spares

  • IMX219-160 (FOV 160°) camera
    Cost: $20 per unit
    Quantity: 3

I did some research on cameras that would be suitable for our envisioned use case. Our use case imposes the following requirements for the camera system:

  • High color accuracy and dynamic range
  • High resolution — at least 5MP
  • High field of view (FOV) — at least 120°
  • Small form factor — less than 2 inches in each dimension
  • Low power
  • Low noise
  • Low cost — less than $30 per camera

I decided that the IMX219-160 (FOV 160°) was a suitable choice of camera. In addition to coming in well under our $30-per-camera budget, it provides the added benefit of easy Raspberry Pi (RPi) integration. The possibility of adding flat ribbon connector extensions minimizes the profile of wires routed to the fridge exterior.

1.2 Camera slider design

I believed that a commercially built motorized slider might present integration issues with our RPi, which led to the decision to build the slider ourselves. I am referencing this DIY tutorial; we will deviate from it only by using an RPi instead of an Arduino.

2. Progress status

I am currently on schedule and have completed the tasks set out for the week.

3. Goals for upcoming week

For the upcoming week, I would like to

  • Acquire the equipment outlined above
  • Test the camera within the confines of the fridge
  • Develop data transmission pipeline from the camera to the RPi
  • Develop transmission pipeline from RPi to cloud server (ownCloud), if time permits