Steven’s Status Report for Mar 23 2025

For this week, I continued refining the training pipeline for the object detection model to maximize our performance for the demo. I expanded our dataset with the Open Images database, filtering its classes down to the food-related ones relevant to our use case, which significantly increased our existing data. I am also looking to experiment with the YOLOv10 model to further increase accuracy.
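The class filtering step can be sketched as a small helper over the Open Images class-description CSV. The file path and the food keyword set below are illustrative placeholders, not our actual target list:

```python
# Sketch: filter Open Images class descriptions down to food-related classes
# for the fridge use case. The keyword set is an illustrative subset only.
import csv

FOOD_KEYWORDS = {"apple", "banana", "milk", "cheese", "egg", "tomato", "carrot"}

def food_classes(rows):
    """Given (label_id, display_name) pairs, keep only food-related classes."""
    return {lid: name for lid, name in rows
            if name.lower() in FOOD_KEYWORDS}

def load_classes(path):
    # Open Images ships class descriptions as a two-column CSV.
    with open(path, newline="") as f:
        return food_classes(csv.reader(f))
```

The resulting label-ID set can then be used to filter the annotation files before training.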

I am currently on track with our milestones. Preliminary training has been completed, and we are working to maximize our accuracy and finalize our model. I am also working on deploying our model onto the Raspberry Pi.

For next week, we aim to proceed with our demo, during which I plan to show our model at its best accuracy. I will continue integrating our model with the tentative pipeline for fridge item detection, expanding our training dataset, and tuning our parameters to optimize accuracy.

Team Status Report for Mar 23 2025

1. Overview

Our team made significant progress across all project components, with our main focus on preparing for the midterm demo. We refined our computer vision model, advanced our motorized camera slider subsystem, and polished the user interface and user experience of our mobile application. Our challenges include difficulties with our existing cameras and reliability issues with our cloud data pipeline.

2. Key Achievements

Hardware and Embedded Systems
  • We made progress on the motorized camera slider subsystem, constructing the slider and powering it with a Nema 17 stepper motor integrated with a Raspberry Pi
  • Completed calibration, optimized translation speed and stepping requirements for full traversal
  • Integrated ring light system to ensure consistent lighting throughout scan
Computer Vision
  • Refined the training pipeline for the object detection model, maximizing performance for demo
  • Completed preliminary training, working on optimizing hyperparameters and training sequence and maximizing accuracy
Mobile App Development
  • Improved user interface and user experience of our mobile app
  • Implemented several UI improvements to ensure smooth and visually coherent experience for demo purposes

3. Next Steps

Hardware and Embedded Systems
  • Add stable baseplate mount to slider to improve stability
  • Integrate camera system with slider and complete full system testing
Computer Vision
  • Continue expanding dataset and tuning hyperparameters to optimize accuracy
  • Integrate model with pipeline for fridge item detection
Mobile App Development
  • Implement recommendation system
  • Continue backend integration to improve data storage and transmission

4. Outlook

Our team is on track despite minor delays and will present functioning demos showcasing hardware control as well as the user interface. We will continue working on feature development and tighter integration between our hardware and software systems.

Jun Wei’s Status Report for Mar 23 2025

1. Personal accomplishments for the week

1.1 Motorized camera slider and lighting

This week, I constructed the motorized camera slider and integrated the ring light. The slider uses a Nema 17 stepper motor that interfaces with the Raspberry Pi via an A4988 driver. I calibrated the motor and found the optimal translation speed as well as the number of steps required to traverse the entire slider. Triggering a scan causes the motor to traverse and stop multiple times along the length of the slider so the camera can capture photos for stitching. The ring light stays on until the scan is complete.

Motorized camera slider
Ring light system
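The scan routine described above can be sketched as a step schedule: the motor pauses at evenly spaced positions along its travel for each capture. The step counts and number of stops below are placeholders rather than our calibrated values, and the A4988 pulsing and camera/ring-light calls are reduced to comments since they are Pi-specific:

```python
# Sketch of the scan routine: stop at evenly spaced positions along the
# slider so the camera can capture frames for stitching. Numbers are
# illustrative, not our calibrated values.

TOTAL_STEPS = 3200  # steps for a full traversal (placeholder)
NUM_STOPS = 5       # capture positions along the slider (placeholder)

def scan_positions(total_steps, num_stops):
    """Return the step offsets at which the motor pauses for a capture."""
    spacing = total_steps // (num_stops - 1)
    return [i * spacing for i in range(num_stops)]

def run_scan():
    # ring_light.on()  # ring light stays on for the whole scan
    last = 0
    for pos in scan_positions(TOTAL_STEPS, NUM_STOPS):
        # pulse the A4988 STEP pin (pos - last) times here via GPIO
        last = pos
        # camera.capture(...)  # grab a frame at each stop
    # ring_light.off()
    pass
```

The even spacing ensures adjacent frames overlap enough for stitching regardless of the slider's total length.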

2. Progress status

I am slightly behind schedule, as the motorized camera slider has not yet been integrated with the camera system. This is because the appropriate camera (ordered two weeks ago) has yet to arrive. The system also needs to be moved to a smaller breadboard once fully integrated. The cloud data transmission pipeline, while functional, still presents some reliability issues.

3. Goals for the upcoming week

  • Add stable platform/baseplate mount to the camera slider

William’s Status Report for Mar 23 2025

Progress This Week

This week, I focused on working with my teammates in preparation for our midterm demo. I helped out where needed and spent some time polishing the mobile application’s UI to improve the overall look and feel. My efforts were mainly geared toward refining existing components and making final adjustments to enhance the user experience for the demo.

Plans for Next Week

  • Resume development on new features for the mobile app.

  • Revisit the computer vision integration research and begin narrowing down options.

  • Start backend integration for better data handling and storage efficiency.

This week was centered around ensuring a solid demo presentation, and I’m looking forward to picking up development momentum again in the coming days.

William’s Status Report for Mar 16 2025

This week, I focused primarily on maintaining the stability of the mobile application. While there weren’t any major updates or feature additions, I revisited portions of the codebase to ensure consistency and keep things organized for future development.

I continued to read through documentation for a few computer vision libraries but haven’t yet committed to a specific approach or tool. Similarly, backend progress was limited—most of my work involved reviewing previous plans and considering potential next steps for data storage and handling.

Plans for Next Week

  • Make incremental UI/UX refinements based on earlier feedback.

  • Narrow down a shortlist of viable computer vision tools to begin experimenting with.

  • Begin small backend updates focused on structuring data handling logic.

Overall, this was a lighter week in terms of visible progress, but it provided a useful opportunity to regroup and prepare for more development work moving forward.

Jun Wei’s Status Report for Mar 16 2025

1. Personal accomplishments for the week

1.1 Data transmission pipeline

This week, I continued debugging the data transmission pipeline on the Raspberry Pi, which uses the boto3 Python package. The pipeline reliably sends files over; however, transmission latency remains an issue. I will investigate resizing the images before transmission.
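One way to cut the latency is to downscale each capture before the boto3 upload. This is a minimal sketch: the bucket name and target width are assumptions, and the heavy dependencies are imported inside the upload function so the sizing helper stays standalone:

```python
# Sketch: shrink captures before upload to reduce transmission latency.
# Bucket name and target width are placeholders, not our actual config.

TARGET_WIDTH = 640  # assumed downscale width

def scaled_size(width, height, target_width=TARGET_WIDTH):
    """Cap the width while preserving aspect ratio."""
    if width <= target_width:
        return width, height
    return target_width, round(height * target_width / width)

def resize_and_upload(path, bucket="fridge-captures"):
    # Deferred imports: boto3 and Pillow are only needed on the Pi.
    import boto3
    from PIL import Image

    img = Image.open(path)
    img = img.resize(scaled_size(*img.size))
    img.save(path)  # overwrite in place with the smaller version
    boto3.client("s3").upload_file(path, bucket, path)
```

Uploading a 640-wide JPEG instead of the full-resolution capture should shrink the payload several-fold, which is worth measuring against the observed latency.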

2. Progress status

I am slightly behind schedule, as I have not started work on constructing the motorized camera system. The data transmission pipeline, while functional, still presents some reliability issues.

3. Goals for the upcoming week

  • Begin construction of motorized camera slider

Steven’s Status Report for Mar 16 2025

For this week, I continued developing my core image processing pipeline. I conducted tests on object classification using the YOLOv5x model with some sample fridge images, and I collaborated with my team to finalize the list of our target food items for model training. I also began integrating the CV pipeline with the Raspberry Pi to validate real-time image capture.
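The classification test can be sketched as loading YOLOv5x via torch.hub (as in the ultralytics/yolov5 README) and filtering detections down to our target food list. The target set below is illustrative, not our finalized list, and the model loading is kept inside the function since it downloads weights:

```python
# Sketch: run YOLOv5x on a sample fridge image and keep only detections
# from our target food classes. The target set is illustrative.

TARGET_ITEMS = {"apple", "banana", "bottle", "orange", "broccoli", "carrot"}

def keep_targets(detections, targets=TARGET_ITEMS):
    """detections: (class_name, confidence) pairs from the model output."""
    return [(name, conf) for name, conf in detections if name in targets]

def detect(image_path):
    import torch  # heavy dependency kept local so keep_targets stays importable
    model = torch.hub.load("ultralytics/yolov5", "yolov5x")
    results = model(image_path)
    # results.xyxy[0] rows are [x1, y1, x2, y2, confidence, class_index]
    pairs = [(model.names[int(cls)], float(conf))
             for *_, conf, cls in results.xyxy[0].tolist()]
    return keep_targets(pairs)
```

Filtering after inference keeps the pretrained weights unchanged while we decide which classes to fine-tune on.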

For my progress, I am currently on track with my project timeline. Preliminary object detection is functioning, and I am continuing to work on improving the model's performance. I am also beginning to integrate my model with the edge processor.

For my goals for next week, I aim to continue improving the performance of my YOLOv5 model, as well as integrating the model with our Raspberry Pi.

William’s Status Report for Mar 9 2025

This week, I made minor refinements to the mobile application’s UI, fixing a few small visual inconsistencies and improving navigation flow slightly. While there weren’t any major changes, I spent some time reviewing previous work to ensure a smoother user experience.

In terms of computer vision integration, I didn’t make much headway beyond continuing to explore different libraries and APIs. I briefly looked into potential solutions but haven’t moved forward with implementation yet.

On the backend side, I reviewed some initial ideas for efficient data handling and cloud storage but haven’t made concrete progress.

Plans for Next Week

  • Continue refining UI/UX with small usability improvements.
  • Look deeper into computer vision tools and determine the most feasible approach.
  • Begin making incremental improvements to backend data handling and storage.

Overall, progress has been steady but slow, and I plan to pick up the pace in the coming week.

Jun Wei’s Status Report for Mar 9 2025

1. Personal accomplishments for the week

1.1 Data transmission pipeline

This week, I attempted to implement a rudimentary data transmission pipeline on the Raspberry Pi that 1) captures images and 2) uploads them to an Amazon Simple Storage Service (S3) database. I did so using the boto3 Python package, an AWS SDK that provides an API and low-level access to AWS Services. I have encountered some issues in transmitting the images to the S3 database wirelessly.
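The capture-and-upload flow can be sketched as below. The bucket name, key scheme, and camera library (picamera2) are assumptions for illustration, and the Pi-only dependencies are imported inside the function:

```python
# Sketch of the rudimentary pipeline: capture a frame on the Pi, then push
# it to S3 with boto3. Bucket and key naming are placeholders.

def object_key(device_id, timestamp):
    """Build a predictable S3 key so captures sort by device and time."""
    return f"{device_id}/{timestamp}.jpg"

def capture_and_upload(device_id, timestamp, bucket="fridge-captures"):
    # Deferred imports: these only exist on the Pi / with AWS credentials.
    import boto3
    from picamera2 import Picamera2

    cam = Picamera2()
    cam.start()
    path = "/tmp/capture.jpg"
    cam.capture_file(path)
    boto3.client("s3").upload_file(path, bucket,
                                   object_key(device_id, timestamp))
```

boto3 picks up credentials from the environment or `~/.aws/credentials`, so wireless upload failures are worth checking against credential and network configuration first.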

2. Progress status

I am slightly behind schedule as the data transmission pipeline is not fully operational. Ideally, I would have started work on the motorized camera system construction, but the camera sliders have not arrived yet. I also need to procure a Raspberry Pi 5 as the current Pi 4 only has one Camera Serial Interface (CSI) port.

3. Goals for the upcoming week

  • Debug camera data transmission pipeline and lighting control
  • Test image stitching (if the new camera arrives in time) / begin construction of the motorized camera slider (if the slider arrives in time)

Steven’s Status Report for Mar 9 2025

For this week, I continued with data collection, expanding our datasets by importing annotated images that I found online. I also performed further fine-tuning of our training hyperparameters, testing different augmentations and optimizations to improve detection accuracy, and incorporated the expanded dataset into the training process. I then ran a preliminary test of our YOLOv5x model to measure its detection accuracy. As expected, the YOLOv5x model achieves higher accuracy, though at the cost of longer inference time when run locally.

In terms of progress, I am on track, improving model accuracy while maintaining real-time performance. The next step on our progress chart is integration, and I will begin integrating our model with the Raspberry Pi and the mobile app.

For next week, I aim to run inference on cloud and measure inference timing in order to quantitatively assess the latency of our model. I will also continue to work on training and enhancing our YOLOv5x model, and conduct more real-world tests to measure the performance of our optimized model.