Team Status Report for Apr 26 2025

Progress This Week

Our team made meaningful progress across hardware, cloud integration, and software components in preparation for the final project deadline.

  • Computer Vision & Cloud Integration: The YOLOv5 model has been finalized and benchmarked for accuracy and inference speed on the Raspberry Pi, and it is now deployed through the cloud pipeline.

  • Hardware & Imaging: The motorized camera slider with stepper control and ring light is operational and is linked with the camera system for synchronized scanning.

  • Mobile Application & Recommendation System: The recipe recommendation system is functional and integrated into the app’s UI. It uses a query-based approach to return the top recommendations.

Current Status

  • Our project is on schedule, with all major components nearing full integration. Our efforts are focused on fine-tuning the system to ensure seamless interaction between all components ahead of the final demo.

Goals for Next Week

  • Finalize camera and cloud transmission pipeline
  • Complete backend-mobile app communication for fridge scans
  • End-to-end testing across modules
  • Complete final deliverables

Jun Wei’s Status Report for Apr 26 2025

1. Personal accomplishments for the week

1.1 Integration with cloud transmission pipeline

I have continued to work with Steven on integrating the motorized camera system with the cloud infrastructure he has been using for CV inference. There are some issues with transmitting the image to the cloud database, in addition to latency constraints that need to be met.

1.2 Testing

I have been measuring the latencies of every stage of the image stitching and transmission pipeline, in addition to the stitching reliability rates.
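The per-stage latency measurements can be sketched roughly as below; the stage bodies here are placeholders standing in for the actual capture, stitching, and upload steps, not our real implementation:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def stage(name):
    """Record the wall-clock latency of one pipeline stage."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

# Placeholder work standing in for the real pipeline stages.
with stage("capture"):
    frames = ["frame%d" % i for i in range(5)]
with stage("stitch"):
    panorama = "|".join(frames)
with stage("upload"):
    time.sleep(0.01)  # simulated network transfer

for name, secs in timings.items():
    print(f"{name}: {secs * 1000:.1f} ms")
```

Wrapping each stage in the same context manager keeps the measurements comparable and makes it easy to spot which stage dominates the end-to-end latency budget.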

2. Progress status

We are on track with our tasks as we prepare for the final presentation in the coming week and continue with integration.

3. Goals for the upcoming week

  • Complete integration with cloud transmission pipeline (to work with Steven on this)
  • Complete integration with the smartphone application

Steven’s Status Report for Apr 26 2025

For this week, I made the final touches to our detection model. I finalized training runs for our YOLOv5x model using the expanded dataset, and measured the benchmark accuracy as well as inference speed on the Raspberry Pi. I have also set up the cloud pipeline to integrate with our model inference, in preparation for the final demo.
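A minimal sketch of how per-image inference speed can be benchmarked on the Pi; `run_inference` is a stand-in for the actual YOLOv5x forward pass, and the warmup/run counts are illustrative:

```python
import time
import statistics

def run_inference(image):
    """Placeholder for the YOLOv5x forward pass on one frame."""
    time.sleep(0.005)  # simulated inference cost
    return [("apple", 0.91, (10, 20, 50, 60))]  # (label, confidence, box)

def benchmark(n_runs=20, warmup=3):
    """Mean and stdev of per-image latency over n_runs, after warmup."""
    dummy = "image"
    for _ in range(warmup):      # warmup runs are excluded so caches and
        run_inference(dummy)     # lazy initialization don't skew the numbers
    latencies = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_inference(dummy)
        latencies.append(time.perf_counter() - start)
    return statistics.mean(latencies), statistics.stdev(latencies)

mean_s, std_s = benchmark()
print(f"mean latency: {mean_s * 1000:.1f} ms (±{std_s * 1000:.1f} ms)")
```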

Overall, I am on schedule with our updated project timeline. The model has been finalized, and I am helping to complete the integration of the model with the cloud pipeline, the Raspberry Pi as well as our mobile application.

For next week, we will focus on completing the deliverables and finalizing the integration between our components to achieve a successful final demo.

William’s Status Report for Apr 26 2025

Progress This Week

This week, I focused on putting the final finishing touches on the machine learning–based recipe recommendation system. The model now successfully generates recipe suggestions by comparing the user’s fridge contents and dietary preferences (converted into a query vector) against a database of recipe feature vectors based on ingredients and tags. The system scores similarity between these vectors and returns the top 5 most relevant matches.
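The scoring described above can be sketched as cosine similarity between the query vector and each recipe's feature vector; the recipes, feature dimensions, and values below are illustrative, not our real data:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Illustrative binary ingredient/tag features: [eggs, cheese, lettuce, tomato]
recipes = {
    "omelette":     [1, 1, 0, 0],
    "salad":        [0, 0, 1, 1],
    "cheese toast": [0, 1, 0, 1],
}
# Fridge contents plus dietary preferences encoded as a query vector.
query = [1, 1, 0, 1]

# Rank recipes by similarity and keep the top 5.
top = sorted(recipes, key=lambda r: cosine(query, recipes[r]), reverse=True)[:5]
print(top)
```

In the real system the vectors come from the detected fridge items and recipe tags, but the ranking step reduces to exactly this sort-by-similarity pattern.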

I also completed integration of the recommendation system with the mobile app UI, ensuring that the results are displayed clearly and consistently. With this, the core functionality of the recommendation feature is now fully implemented and ready for demo.

Project Status

The project is on track to be completed within the next two days. The major components of the app—including user authentication, computer vision integration, backend data handling, and the recipe recommendation system—are now in place. Final testing and polishing are underway to ensure a smooth and cohesive user experience across all features.

Next Steps

  • Finish the project report and video

  • Prepare for final demo.

Team Status Report for Mar 23 2025

1. Overview

Our team made significant progress across all of the project components, with our main focus being preparing for the midterm demo. We refined our computer vision model, made significant progress on our motorized camera slider subsystem, and refined the user interface and user experience for our mobile application. Our challenges include difficulties using our existing cameras and reliability issues with our cloud data pipeline.

2. Key Achievements

Hardware and Embedded Systems
  • We made progress on the motorized camera slider subsystem, constructing the slider and powering it with a Nema 17 stepper motor integrated with a Raspberry Pi
  • Completed calibration, optimized translation speed and stepping requirements for full traversal
  • Integrated ring light system to ensure consistent lighting throughout scan
Computer Vision
  • Refined the training pipeline for the object detection model, maximizing performance for demo
  • Completed preliminary training, working on optimizing hyperparameters and training sequence and maximizing accuracy
Mobile App Development
  • Improved user interface and user experience of our mobile app
  • Implemented several UI improvements to ensure smooth and visually coherent experience for demo purposes

3. Next Steps

Hardware and Embedded Systems
  • Add stable baseplate mount to slider to improve stability
  • Integrate camera system with slider and complete full system testing
Computer Vision
  • Continue expanding dataset and tuning hyperparameters to optimize accuracy
  • Integrate model with pipeline for fridge item detection
Mobile App Development
  • Implement recommendation system
  • Continue backend integration to improve data storage and transmission

4. Outlook

Our team is on track despite minor delays, and will present functioning demos showcasing hardware control as well as user interface. We will continue working on feature development and tighter integration between our hardware and software systems.

Jun Wei’s Status Report for Mar 23 2025

1. Personal accomplishments for the week

1.1 Motorized camera slider and lighting

This week, I constructed the motorized camera slider and integrated the ring light. The slider uses a Nema 17 stepper motor that interfaces with the Raspberry Pi via an A4988 driver. I calibrated the motor and found the optimal translation speed as well as the number of steps required to traverse the entire slider. Triggering a scan causes the motor to traverse and stop multiple times along the length of the slider for the camera to capture photos for stitching. At the same time, the ring light stays on until the scan is complete.
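The stop-and-capture traversal can be sketched as follows. `pulse_step` abstracts the A4988 STEP-pin toggle (on the Pi this would drive a GPIO pin via RPi.GPIO), and the step counts, stop count, and delay are illustrative, not the calibrated values:

```python
STEPS_FULL_TRAVERSAL = 2000   # illustrative: calibrated steps for full slider length
NUM_STOPS = 5                 # capture positions along the slider
STEP_DELAY_S = 0.0005         # half-period of the STEP pulse (sets translation speed)

captured = []

def pulse_step():
    """One STEP pulse to the A4988. On the Pi this would toggle the STEP
    GPIO high then low, waiting STEP_DELAY_S between edges."""
    pass  # hardware access stubbed out for this sketch

def capture_photo(position):
    """Stand-in for triggering the camera at a stop position."""
    captured.append(position)

def scan():
    """Traverse the slider in equal segments, capturing at each stop."""
    steps_per_segment = STEPS_FULL_TRAVERSAL // NUM_STOPS
    for stop in range(NUM_STOPS):
        for _ in range(steps_per_segment):
            pulse_step()
        capture_photo(stop)

scan()
print(f"captured {len(captured)} photos")
```

The ring-light on/off control would bracket the `scan()` call in the same way, so lighting is consistent across all captured frames.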

(Figures: motorized camera slider; ring light system)

2. Progress status

I am slightly behind schedule, as the motorized camera slider has not been integrated with the camera system yet. This is because the appropriate camera (ordered 2 weeks ago) has yet to arrive. The system also needs to be moved to a smaller breadboard once fully integrated. The cloud data transmission pipeline, while functional, still has some reliability issues.

3. Goals for the upcoming week

  • Add stable platform/baseplate mount to the camera slider

William’s Status Report for Mar 23 2025

Progress This Week

This week, I focused on working with my teammates in preparation for our midterm demo. I helped out where needed and spent some time polishing the mobile application’s UI to improve the overall look and feel. My efforts were mainly geared toward refining existing components and making final adjustments to enhance the user experience for the demo.

Plans for Next Week

  • Resume development on new features for the mobile app.

  • Revisit the computer vision integration research and begin narrowing down options.

  • Start backend integration for better data handling and storage efficiency.

This week was centered around ensuring a solid demo presentation, and I’m looking forward to picking up development momentum again in the coming days.

William’s Status Report for Mar 16 2025

This week, I focused primarily on maintaining the stability of the mobile application. While there weren’t any major updates or feature additions, I revisited portions of the codebase to ensure consistency and keep things organized for future development.

I continued to read through documentation for a few computer vision libraries but haven’t yet committed to a specific approach or tool. Similarly, backend progress was limited—most of my work involved reviewing previous plans and considering potential next steps for data storage and handling.

Plans for Next Week

  • Make incremental UI/UX refinements based on earlier feedback.

  • Narrow down a shortlist of viable computer vision tools to begin experimenting with.

  • Begin small backend updates focused on structuring data handling logic.

Overall, this was a lighter week in terms of visible progress, but it provided a useful opportunity to regroup and prepare for more development work moving forward.

William’s Status Report for Mar 9 2025

This week, I made minor refinements to the mobile application’s UI, fixing a few small visual inconsistencies and improving navigation flow slightly. While there weren’t any major changes, I spent some time reviewing previous work to ensure a smoother user experience.

In terms of computer vision integration, I didn’t make much headway beyond continuing to explore different libraries and APIs. I briefly looked into potential solutions but haven’t moved forward with implementation yet.

On the backend side, I reviewed some initial ideas for efficient data handling and cloud storage but haven’t made concrete progress.

Plans for Next Week

  • Continue refining UI/UX with small usability improvements.
  • Look deeper into computer vision tools and determine the most feasible approach.
  • Begin making incremental improvements to backend data handling and storage.

Overall, progress has been steady but slow, and I plan to pick up the pace in the coming week.

William’s Status Report for Feb 23 2025

This week, I made some incremental progress on the mobile application in React Native. I focused primarily on refining the UI and ensuring a smoother user experience, making small adjustments to navigation and layout based on initial feedback. While I didn’t add many new features, I worked on cleaning up the existing code and fixing minor bugs to enhance overall stability.

I also started doing some preliminary research on integrating computer vision for object recognition but haven’t made significant progress yet. I explored a few libraries and APIs to get a better sense of what’s available and suitable for our needs but haven’t begun actual implementation.

For the upcoming week, I plan to continue refining the UI/UX gradually, look deeper into potential computer vision solutions, and make some progress on backend improvements, focusing on efficient data handling and cloud storage options. The project is moving forward at a manageable pace, and I aim to ramp up development gradually.