Team Status Report for February 16, 2025

1. Overview

Our project remains on track as we make progress in hardware development, CV modeling, and system integration. This week, we made key advancements in model prototyping, hardware selection, and data acquisition. Our prototype object detection model was trained and tested on real-world fridge images, and we continued finalizing hardware components, including camera configurations and Raspberry Pi integration. Next, we will focus on expanding our datasets, optimizing inference performance, and integrating the detection pipeline into the full system.


2. Key Achievements

Hardware and Embedded Systems
  • Completed a top-level design framework for the project, ensuring seamless integration between hardware, computer vision and mobile components.
  • Ordered two additional Raspberry Pi boards to allow parallel processing and improved scalability.
  • Tested the IMX219-160 camera with the Raspberry Pi 4 Model B, confirming successful image capture.
  • Identified image-stitching issues caused by excessive FOV distortion and ordered IMX219-77 cameras for improved results.
  • Finalized the list of components required for the prototype, including camera sliders for motorized scanning.
Computer Vision
  • Trained an initial YOLOv5 model on a preliminary dataset of grocery items.
  • Conducted real-world testing by capturing images from our fridge and running inference on them.
  • Successfully detected a grocery item within the fridge.
Mobile App Development
  • Continued refining the React Native mobile application, improving UI elements and core inventory tracking features.
  • Researched potential libraries and APIs for computer vision integration, laying the groundwork for object recognition features.
  • Improved app navigation and interaction between the mobile app and backend services.

3. Next Steps

Hardware and Embedded Systems
  • Order camera slider mechanism to enable motorized scanning
  • Procure IMX219-77 cameras to replace previous model
  • Set up real-time data transfer between camera and Raspberry Pi
  • Evaluate multi-camera configurations for full fridge scanning
Computer Vision
  • Collect additional grocery image datasets and annotate training samples
  • Continue model training to improve performance
  • Adjust model parameters and experiment with other models to improve detection accuracy
Mobile App Development
  • Further refine the UI and improve responsiveness for a better user experience.
  • Optimize data retrieval and improve real-time synchronization between the app and backend services.
  • Begin initial work on integrating computer vision features into the app.
  • Conduct additional performance testing and address any remaining compatibility issues.

4. Outlook

Our team is making good progress across all fronts. Although we’ve had to acquire new hardware due to unforeseen challenges, we remain on schedule. In the coming weeks, we will shift more towards system integration, improving model accuracy, and optimizing hardware performance to ensure a fully functional prototype by the next milestone.

William Chen’s Status Report for Feb 16 2025

This week, I continued developing our mobile application in React Native, expanding on the initial prototype by integrating additional functionalities and refining the UI for a more polished user experience. I worked on enhancing the app’s core inventory tracking features, improving navigation flow, and ensuring seamless interaction between the front-end interface and backend services.

Additionally, I began exploring computer vision integration by researching suitable libraries and APIs for object recognition, preparing for its incorporation into our system.

For the upcoming week, I plan to further optimize the mobile application’s efficiency, refine the UI/UX based on user feedback, and implement the initial phase of computer vision integration. I will also work on improving backend communication to enhance real-time data updates and explore cloud storage options for image processing. Work seems to be on track, but I think most of the difficult tasks are yet to come.

Jun Wei’s Status Report for Feb 16 2025

1. Personal accomplishments for the week

1.1 System design finalization

The system design was finalized in preparation for the design presentation. As I am in charge of the embedded components of the project, it made sense for me to develop its top-level design.

Top-Level Design as of Feb 16

1.2 Finalizing parts to order

I also spent time requisitioning and ordering additional parts, namely Raspberry Pis (RPis). Specifically, I ordered two additional RPi Zeros after realizing that each RPi has only one Camera Serial Interface (CSI) port. Thus, in addition to enabling parallel decentralized processing, having one RPi per camera allows for some scalability with regard to camera and motor control. The RPi Zero was also chosen for its smaller form factor, allowing it to be placed within the confines of the refrigerator.

1.3 Testing the IMX219-160 (FOV 160°) camera

I also finally received the IMX219-160 camera ordered slightly over a week ago. I have hooked it up to the Raspberry Pi 4 Model B requisitioned from the ECE inventory and have been able to capture some images. I have not yet tried stitching outputs from the camera; however, I believe its extreme FOV will not be ideal for image stitching. As such, I will proceed with ordering more cameras with a lower FOV (~80°).
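
As a reference for the capture step, here is a minimal still-capture sketch, assuming the Picamera2 library that ships with recent Raspberry Pi OS images; the output filename is illustrative:

```python
from picamera2 import Picamera2

# Minimal still-capture check for the IMX219 module
picam2 = Picamera2()
config = picam2.create_still_configuration(main={"size": (3280, 2464)})  # IMX219 full resolution
picam2.configure(config)
picam2.start()
picam2.capture_file("fridge_test.jpg")  # filename is illustrative
picam2.stop()
```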

2. Progress status

I am currently on schedule and have completed the tasks set out for the week, apart from developing the data transmission pipeline from the camera to the RPi.

3. Goals for upcoming week

  • Order the camera slider
  • Order the IMX219-77 (FOV 79.3°) camera
  • Develop the camera data transmission pipeline and lighting control
  • Test image stitching (if the new camera arrives in time)

Steven’s Status Report for February 16, 2025

For this week, I focused on creating a prototype of the model we will use for detection and classification of grocery items. To accomplish this, I trained an initial YOLOv5 model on a small preliminary dataset of grocery items. I made use of a diverse dataset in order to ensure basic object detection and classification functionality.

After training, I ran some initial tests using an image of a cake inside the fridge that we are using. I ran the model on the image to evaluate the performance of the prototype model; the results are below.

As can be seen, the preliminary model was already quite successful in identifying the bounding box of the item. Though the classification was not completely accurate, this is understandable since it was a niche item (a cake from Giant Eagle).
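
For reference, this kind of inference run can be reproduced through PyTorch Hub roughly as follows; the weight and image paths are illustrative:

```python
import torch

# Load the prototype weights from a training run (paths are illustrative)
model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")

# Run inference on a photo taken inside the fridge
results = model("fridge_cake.jpg")

results.print()                        # summary of detected classes and confidences
detections = results.pandas().xyxy[0]  # one DataFrame row per detected item
print(detections[["name", "confidence"]])
```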

I am currently making good progress with respect to our Gantt chart, and have started training the model slightly ahead of schedule. For the following weeks, I aim to continue data collection by finding more datasets of fridge images, as well as exploring training with annotated images of groceries taken within our fridge. I also aim to fine-tune the model by adjusting hyperparameters and increasing the training set size through data augmentation to improve accuracy. In addition, I will experiment with different variants of YOLOv5 to see if a larger model yields better accuracy without major latency trade-offs. Finally, I will measure latency for both local and cloud-run inference to see which better suits our cost and latency requirements.
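
As one concrete option for the augmentation step, a sketch using the Albumentations library is shown below; the specific transforms, probabilities, and sample labels are illustrative rather than final choices:

```python
import albumentations as A
import cv2

# Illustrative augmentation pipeline; YOLO-format boxes are transformed alongside the image
augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.5),  # fridge lighting varies between shelves
        A.Rotate(limit=10, p=0.3),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

img = cv2.cvtColor(cv2.imread("fridge_cake.jpg"), cv2.COLOR_BGR2RGB)  # filename illustrative
augmented = augment(image=img, bboxes=[[0.5, 0.5, 0.2, 0.3]], class_labels=["cake"])
```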

 

Team Status Report for February 9, 2025

1. Overview

Our project remains on track, with progress across hardware, computer vision (CV), and mobile development efforts. This week, significant advancements were made in image processing, model prototyping, and mobile app development. While some hardware acquisition was delayed, progress in algorithm development and system integration ensures that we are still aligned with our overall timeline.


2. Key Achievements

Hardware and Embedded Systems
  • Conducted initial experimentation with image stitching to assess the feasibility of a single-camera solution.
  • Explored OpenCV’s Stitcher_create() function, identifying key constraints such as the need for image overlap and the drawbacks of high-FOV cameras in feature mapping.
  • Gained insights into optimal camera placement, frequency of image capture, and required FOV for effective image processing.
  • Prepared for upcoming camera and data transmission pipeline testing once equipment is acquired.
Computer Vision
  • Set up the PyTorch environment and implemented an initial YOLOv5 prototype.
  • Developed a preprocessing pipeline using OpenCV and conducted preliminary training on small annotated grocery item datasets.
  • Started integrating external datasets from Kaggle and Roboflow to improve model accuracy and robustness.
  • Began benchmarking inference speed on local (Raspberry Pi) and cloud-based setups to measure latency and performance trade-offs.
Mobile App Development
  • Built the foundation of the React Native mobile application, implementing a core feature prototype.
  • Configured project dependencies and set up the UI framework for key functionalities.
  • Explored integration with backend services to support inventory tracking and recommendation features.
  • Prepared for the integration of computer vision capabilities in future iterations.

3. Next Steps

Hardware and Embedded Systems
  • Acquire necessary equipment, including the Raspberry Pi, camera, and related components.
  • Test camera performance within a fridge environment and optimize settings for optimal image capture.
  • Develop and test data transmission pipelines between the camera, Raspberry Pi, and cloud storage (ownCloud).
Computer Vision
  • Complete dataset collection and refine annotation processes for improved training quality.
  • Conduct further testing on local vs. cloud inference to optimize system efficiency.
  • Integrate YOLOv5 model with the mobile application and test its effectiveness in real-world conditions.
Mobile App Development
  • Expand the prototype by implementing additional features and refining UI/UX.
  • Improve app performance and address any identified compatibility issues.
  • Begin integrating computer vision functionalities and explore API options for seamless connectivity.

4. Outlook

The team is making steady progress across all development fronts. While hardware testing was slightly delayed due to equipment requisition timing, the early progress in software-based experimentation ensures we remain aligned with our overall goals. As we move forward, the focus will shift toward system integration, testing, and optimization to create a fully functional prototype in the coming weeks.

Part A: Public Health, Safety, and Welfare (Will)

Our fridge camera system enhances public health by promoting better nutrition and reducing food waste. By providing real-time tracking of fridge contents, users can make more informed decisions about meal planning, ensuring they consume fresh and balanced meals. The system also helps prevent the consumption of expired or spoiled food, reducing the risk of foodborne illnesses. Additionally, by suggesting recipes based on available ingredients, the product encourages healthier eating habits by making meal preparation more convenient and accessible.

In terms of safety, the automated rail system is designed with smooth and secure movements to prevent accidents such as knocking over or damaging fridge items. The device is built with user-friendly controls and safety mechanisms to ensure it does not pose a hazard to household members. The system also enhances welfare by addressing basic food security concerns—by reducing food waste and maximizing ingredient use, households can make their groceries last longer, ultimately benefiting those on tight budgets or with limited access to fresh food.

Part B: Social Considerations (Steven)

Socially, this product addresses the needs of diverse households, including busy professionals, families, and individuals who struggle with meal planning. For families, the ability to track fridge contents remotely ensures better coordination in grocery shopping, preventing unnecessary purchases and reducing food waste. The recipe recommendation feature also helps bring people together by facilitating home-cooked meals, fostering stronger family and social bonds.

For individuals with dietary restrictions or cultural food preferences, the app can be tailored to suggest recipes that align with their specific needs, promoting inclusivity and personalization. Additionally, the product encourages sustainability by helping consumers become more mindful of their consumption habits, aligning with broader social movements that advocate for reducing food waste and promoting environmentally responsible living.

Part C: Economic Considerations (Jun Wei)

From an economic perspective, the fridge camera system helps households save money by reducing food waste and unnecessary grocery purchases. By keeping an up-to-date inventory of fridge contents, users can avoid buying duplicate items and make more efficient use of what they already have. The recipe recommendation feature further enhances economic efficiency by helping users maximize ingredients, minimizing waste, and stretching grocery budgets further.

On a larger scale, this product could contribute to economic benefits in the food industry by supporting more efficient consumption patterns, potentially reducing demand volatility in grocery supply chains. Additionally, the integration with an iPhone app presents opportunities for monetization through premium features, such as AI-driven meal planning, partnerships with grocery delivery services, or integrations with smart home ecosystems. As adoption grows, this product has the potential to create job opportunities in software development, hardware manufacturing, and customer support, contributing to economic activity in the tech and consumer goods industries.

William’s Status Report for February 9, 2025

This week, I focused on developing the foundation of our mobile application using React Native. I created a basic prototype that implements a subset of core features, allowing us to evaluate performance, usability, and integration with our backend services.

During the development process, I set up the project structure, configured necessary dependencies, and implemented initial UI components for core functionalities. Additionally, I explored integration with backend services to ensure smooth data retrieval and interaction with our recommendation system.

For the upcoming week, I will expand the prototype by integrating additional functionalities and refining the UI for a more polished user experience. I will also focus on improving performance, conducting more rigorous testing, and addressing any compatibility issues that arise. Additionally, I will begin incorporating computer vision capabilities into the app, exploring libraries and APIs to support this feature effectively.

Steven’s Status Report for February 9, 2025

For this week, I focused on data collection and annotations, as well as getting started on the prototype of our model.

I’ve set up our PyTorch environment and completed an initial YOLOv5 prototype. I’ve developed a basic pipeline in which images are pre-processed with OpenCV and then fed to the YOLOv5 model, and conducted some preliminary training and testing using small annotated datasets of grocery items. Furthermore, I have been sourcing and noting down relevant datasets on Kaggle and Roboflow.
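
A minimal version of that preprocessing step might look like the following; the exact operations are placeholders for whatever the pipeline settles on:

```python
import cv2

def preprocess(path, size=640):
    """Basic OpenCV preprocessing before handing an image to YOLOv5 (illustrative)."""
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # PyTorch Hub YOLOv5 models expect RGB
    img = cv2.resize(img, (size, size))         # match the training resolution
    return img
```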

Currently, I am working on integrating the online datasets with the YOLOv5 model and conducting some initial tests on accuracy as well as inference speed. I aim to test the inference speed locally on a Raspberry Pi, as well as in the cloud, to get a measure of the latency of each setup. I will also experiment with image processing methods using images taken from our fridge in order to try to improve detection accuracy.
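
For the latency measurements, a simple timing harness along these lines should work on both the Raspberry Pi and a cloud instance; the warm-up and run counts here are arbitrary choices:

```python
import time
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def mean_latency_ms(model, img_path, n_warmup=3, n_runs=20):
    """Average single-image inference latency in milliseconds."""
    for _ in range(n_warmup):  # warm-up iterations are excluded from timing
        model(img_path)
    start = time.perf_counter()
    for _ in range(n_runs):
        model(img_path)
    return (time.perf_counter() - start) / n_runs * 1000

print(f"{mean_latency_ms(model, 'fridge_test.jpg'):.1f} ms per image")  # path illustrative
```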

Looking to the future, I will have to obtain the necessary hardware (a Raspberry Pi) in order to test the effectiveness of our model when run locally. I will also have to work on integrating the model with our peripheral device as well as the mobile application.

Jun Wei’s Status Report for Feb 9 2025

1. Personal accomplishments for the week

1.1 Image stitching experimentation

I experimented with image stitching to explore the feasibility of a one-camera solution. The rationale for using image stitching over merely relying on a high field-of-view (FOV) camera was to

  • Mitigate obstruction/obscurity from other objects; and
  • Gather information from different POVs (through multiple images).

I made use of the OpenCV library’s Stitcher_create() function in Python. OpenCV’s Stitcher class provides a high-level API with a built-in stitching pipeline that performs feature detection and mapping, as well as homography-based estimation for image warping (a minimal code sketch appears after the takeaways below). I captured images with my smartphone camera, using both the regular (FOV 85˚) and the ultra-wide (FOV 120˚) lenses. However, I found that image stitching failed on images taken with the latter. As such, I only have outputs from the regular FOV lens:

Stitched image outputs:

 

These were my learning points and takeaways:

  • Image stitching is best suited for cameras with low FOVs as higher FOVs tend to warp features on the extreme end;
  • Images need some overlap for feature mapping (ideally around 1/3);
  • Too much overlap can lead to unwanted warping during stitching/duplicated features; and
  • Drastic changes in POV (either due to sparse image intervals or objects being extremely close to the camera, such as the plastic bottle above) can cause object duplication due to diminished feature mapping effectiveness.
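
To make the pipeline above concrete, here is a minimal sketch of the stitching calls I used; the input filenames are illustrative:

```python
import cv2

# Stitch a sequence of overlapping shots into a single panorama
images = [cv2.imread(p) for p in ["shot1.jpg", "shot2.jpg", "shot3.jpg"]]

stitcher = cv2.Stitcher_create()  # default PANORAMA mode
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched.jpg", pano)
else:
    # A non-zero status usually means too little overlap for feature mapping
    print(f"Stitching failed with status {status}")
```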

For comparison, I have the following single high-FOV shot taken from my smartphone:

In all, I believe image stitching does confer significant advantages over a single high FOV shot:

  • More information captured (the apples and blue container are obscured by the transparent bottle in the high-FOV shot)
  • Reduced object warp/deformation, which is crucial for accurate object classification

Following this, a natural extension would be to explore effective image stitching pipeline implementations on an embedded platform, or even a real-time implementation.

2. Progress status

While I did not fill out the equipment requisition form as early in the week as I had hoped, I was able to get a head start on the image stitching algorithm, which in turn better informs decisions on 1) camera placement, 2) frequency of image capture, and 3) desired camera FOV. I will defer the camera testing and data transmission pipelines to the coming week, which is when I will (hopefully) have received the equipment.

3. Goals for upcoming week

For the upcoming week, I would like to

  • Acquire the equipment outlined in my previous update
  • Test the camera within the confines of the fridge
  • Develop data transmission pipeline from the camera to the RPi
  • Develop the transmission pipeline from the RPi to a cloud server (ownCloud), if time permits (see the sketch below)
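
For the RPi-to-ownCloud step, ownCloud exposes a WebDAV endpoint, so a first pass could be a plain HTTP PUT; the server URL, credentials, and paths below are all placeholders:

```python
import requests

# Push a captured frame to ownCloud over WebDAV (all values are placeholders)
OWNCLOUD_URL = "https://owncloud.example.com/remote.php/webdav/fridge/latest.jpg"

with open("fridge_test.jpg", "rb") as f:
    resp = requests.put(OWNCLOUD_URL, data=f, auth=("username", "password"))

resp.raise_for_status()  # WebDAV returns 201 Created on upload, 204 No Content on overwrite
```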

Team Status Report for February 02, 2025

1. Overview
Our project is on schedule across hardware, computer vision (CV), and mobile development efforts.

2. Key Achievements

  • Hardware and Embedded:
    • Selected the IMX219-160 camera for its wide FOV, low cost, and easy Raspberry Pi integration.
    • Planned a DIY motorized camera slider, ensuring custom control and reduced integration challenges.
  • Computer Vision:
    • Chose OpenCV for image preprocessing and YOLOv5 for object detection, balancing real-time performance and accuracy.
    • Began preparing a dataset of fridge items for initial model training and testing.
  • Mobile App:
    • Evaluated multiple development options, decided on React Native due to robust community support and cross-platform benefits.
    • Outlined a prototype focusing on core inventory features and system integration.

3. Next Steps

  • Hardware and Embedded:
    • Purchase and test the camera, LED ring light, and slider components in a fridge setup.
    • Start assembly of the DIY slider and interface it with the Raspberry Pi.
  • Computer Vision:
    • Finalize dataset collection and begin training YOLOv5 on a sample set.
    • Explore and benchmark local vs. cloud inference options for efficiency and scalability.
  • Mobile App:
    • Implement a basic React Native prototype with essential UI elements and navigation.
    • Integrate data retrieval and display features, testing on both iOS and Android.

4. Outlook
The team is on schedule with our project. We will continue refining each component to ensure seamless integration and a functional system in the upcoming weeks.

William’s Status Report for February 02, 2025

For this week, I focused on researching relevant mobile development frameworks and technologies to determine how best to build our app. I explored native development options like Swift and Kotlin, as well as cross-platform solutions like React Native and Flutter. After assessing their pros and cons in terms of performance, community support, and integration with our embedded systems, I decided to employ React Native for our mobile application due to its extensive documentation, large active developer community, and proven efficiency for cross-platform development. React Native also integrates well with various backend services and libraries, making it easier to incorporate our recommendation system features and functionalities.

For my next steps, I will create a basic prototype of our mobile interface using React Native, covering a small subset of core features. This will help us evaluate performance and speed, and discover potential challenges. For the next week, I will focus on coding the foundation of the React Native app and setting up initial testing to confirm performance benchmarks.