Siena’s Status Report for 10/18

Accomplishments
At the start of the week, our team divided the design report into sections. My sections were system architecture, implementation, project management, risk mitigation, and the summary (please refer to the corresponding sections of our design report). While we had big ideas in place, I realized while actually writing the report that we needed more concrete plans. I therefore refined our hardware architecture/implementation diagrams, defined how our button controls will work, and detailed the hardware-software and peripheral interfaces. In addition, following the CAD model, I added more products to our parts list.

Our parts have not arrived yet, so I started working with the RPi camera module in the meantime. I discussed with Isaiah the methods we want to use to pass frames to the software side (function call, direct software management, etc.); a rough sketch of the function-call option is below.
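As a starting point before the hardware arrives, here is a minimal sketch of the function-call option, assuming the Picamera2 library; the FrameSource name and get_frame interface are placeholders, not a finalized design:

    # Sketch of the "function call" option: the software side pulls frames
    # from the camera module on demand. Class and method names are hypothetical.
    from picamera2 import Picamera2

    class FrameSource:
        def __init__(self):
            self._cam = Picamera2()
            self._cam.configure(self._cam.create_video_configuration())
            self._cam.start()

        def get_frame(self):
            # Return the current frame as a NumPy array for the software side.
            return self._cam.capture_array()

With this approach, the software side owns the loop and simply calls get_frame() whenever the app needs a new image.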

I also reached out to the IRB office, explaining our plans for our user study and asking what processes we need to go through to get it approved. The email is included below.

Schedule
Unfortunately, we are currently behind schedule. Our parts have not arrived, and this has been our bottleneck since last week. We are discussing plans to cut the time reserved for our user study from two weeks to one. This way, we’ll have another week to actually work on our product.

Deliverables
Hopefully, our parts will be here by next week. My goal is to have the camera working and feeding correctly into our Magic Mirror app.

Siena’s Status Report for 10/4

Accomplishments

This week, I worked on building our parts list from our discussions, based on our design review slides from last week. Our group had already ordered an RPi 5 from the inventory, so we ordered additional components like our camera, LCD, LEDs, and more. Aside from parts, I was messing around with the RPi we picked up. Although there wasn’t much I could do without the rest of our parts, I did extensive research into how we will integrate the camera and buttons. The main resources for setup are below, and I have also written an initial script based on them to take a picture when a button is pressed (a draft is included after the links). It has not been tested yet, but I will do that when everything gets here.

Parts Status: all parts ordered; awaiting delivery.

Resources for buttons for control: https://gpiozero.readthedocs.io/en/stable/recipes.html#button

Resources for camera setup: https://docs.arducam.com/Raspberry-Pi-Camera/Native-camera/16MP-IMX519/#products-list 
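Here is the initial, untested script mentioned above, based on those resources and assuming gpiozero with Picamera2; the GPIO pin and output filename are placeholders until we wire everything up:

    # Untested first draft: take a still picture whenever the button is pressed.
    from gpiozero import Button
    from picamera2 import Picamera2
    from signal import pause

    button = Button(17)  # button wired to GPIO 17 (actual pin TBD)
    picam2 = Picamera2()
    picam2.configure(picam2.create_still_configuration())
    picam2.start()

    def take_picture():
        picam2.capture_file("capture.jpg")  # output path TBD

    button.when_pressed = take_picture
    pause()  # keep the script running, waiting for button presses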

Schedule

For week 6, we are on schedule. We have ordered all the parts we need to proceed with our project, and I have looked into different ways to bring up the camera and button, how they communicate with each other, and how they integrate through the RPi.

Next Week

In the next week, I think our group’s greatest concern is finishing up our design review documents. On a personal level, I hope to bring up the camera and control it with the buttons once the parts arrive.



Siena’s Status Report for 9/27

Accomplishments

This week, our group held extensive discussions to solidify our implementation plans, ranging from component selection to defining high-level project goals. I was primarily responsible for the hardware design, focusing on how our chosen single-board computer would serve as the central hub of the magic mirror. This included planning hardware interfaces and ensuring it could host both our system application and the ML component.

Working closely with Isaiah, who is leading the software side, I finalized a condensed design plan after several iterations. We revisited our design goals multiple times, aligning on the user experience we wanted to deliver, and while creating a detailed diagram I also researched relevant resources and libraries that could support our implementation. After several drafts, we arrived at a finalized version of the design plan.

Schedule

I am currently on track. This week’s goal was to finalize our parts list and research how the board would interface with components such as the control buttons and camera; both tasks are complete.

Next Week

Next week, we expect to receive some of the parts we ordered. Since we will at least have access to the Raspberry Pi and camera from the capstone inventory, I plan to begin testing simple connections with the camera. This will help me gauge whether my interface plans for the camera are sufficient and decide on next steps.



Team Status Report for 9/27

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

This week, we worked mostly on coming up with detailed hardware and software designs. As we integrated the two and discussed with our mentors, we improved the design so that it better fits the purpose and target audience of our product. For example, we decided to leave out the AI accelerator and first see whether we meet our use-case requirements without it. If we fall short, we will adjust the design to accommodate the accelerator.

Additionally, we all lack experience in building physical products. We expect difficulties during this process, so we made a CAD model as a general guideline and solidified the design and dimensions.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We decided on the board for our product: the RPi 5 (8GB). Although we considered including an additional accelerator, we realized that our product’s ML workload is not heavy enough to need one. To keep our design cheap and simple, we will attempt our ML processing on just the RPi.

Our group also decided to include an app (we’re thinking of an on-system app for now). The app will be shown on our LCD display to present the current session’s skin analysis as well as data stored from past user sessions.

Training on whole-image datasets proved difficult, with poor generalization. This caused a shift toward training with close-up, skin-patch-based methods, which have seen more success in published papers. This approach requires a higher-resolution camera, but that is well within spec: the newer Raspberry Pi-compatible camera models all have at least 12MP resolution, which is plenty for accurate skin patches. A side effect is that the user must be closer to the mirror than previously projected (~40cm), but this is still within standard desktop mirror distances. Fine-tuning was not scheduled for this week, so we are still on track in that regard.
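As a rough illustration of this direction (not our finalized pipeline), a minimal OpenCV sketch for cropping fixed-size skin patches from a detected face might look like the following; the patch size and stride are placeholder values:

    # Illustrative sketch: detect the face, then slide a fixed-size window
    # over the face box to collect close-up skin patches for the classifier.
    import cv2

    def extract_skin_patches(image_bgr, patch=224, stride=112):
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        patches = []
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            for py in range(y, y + h - patch + 1, stride):
                for px in range(x, x + w - patch + 1, stride):
                    patches.append(image_bgr[py:py + patch, px:px + patch])
        return patches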

  • Provide an updated schedule if changes have occurred.

We’re on track: we finalized our parts list and compiled a detailed implementation plan with block diagrams. We have also planned out our communication protocols for our device interfaces.



Team Status Report for 9/20

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk right now is ensuring that we can deploy our model on the board we choose. For a Raspberry Pi + AI kit, using an external accelerator and connecting it to our main processor will be a tedious process, and from previous experience there is a lot of debugging involved. The NVIDIA Jetson Nano would be more convenient since the GPU is included in the kit, but none of us has experience with that board, so getting it up and running with the full software stack will be a challenge.

Another challenge is ensuring that all of our input/output data processing can be done. Once we decide on the camera and LCD display, writing the scripts for image processing (to feed into our model) and for producing output data (analysis of each of the classes) could be challenging, since we do not have embedded experience with these boards. We are currently looking into two different boards with compatible cameras/LCD displays so that we have an alternative plan. However, because the parts are expensive, we want our first choice to work and are doing extensive research into open-source CV projects that run on the boards we researched.

Another potential risk is the classification accuracy on the oiliness detection task. Unlike the other tracked factors, oiliness is much less visually distinct, which could result in sub-85% classification accuracy when training only on the initial datasets. To counteract this, if model performance falls below our expectations, we will gather new data points through web scraping and potentially new images we collect ourselves. Oiliness-detection models have been implemented before, so gathering more data will likely drive down the error rate.

The datasets available for skin burns are either locked behind stock-image distributors’ paywalls or contain many images of much greater severity that also don’t focus solely on the face. This could bias our sunburn detection model toward classifying only significant burns. To counteract this, we will augment our existing dataset with images of sunburns scraped from Google Images. Google Images results for sunburns are much better aligned with the goals of our system and match the typical levels of sunburn people face in their day-to-day lives.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We changed our computer vision model backbone from MobileNetV2 to MobileNetV3-Small. MobileNetV3 is a more modern iteration with better performance on edge devices, while retaining similar classification accuracy (measured by ImageNet top-1 and top-5 classification rates).
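As a rough sketch of what this swap looks like in torchvision (the four-class output head is illustrative, not our finalized label format):

    # Load MobileNetV3-Small with pretrained ImageNet weights and replace the
    # final classifier layer with one sized for our skin-condition labels.
    import torch.nn as nn
    from torchvision.models import mobilenet_v3_small, MobileNet_V3_Small_Weights

    NUM_CLASSES = 4  # e.g., acne, oiliness, wrinkles, sunburn (illustrative)
    model = mobilenet_v3_small(weights=MobileNet_V3_Small_Weights.DEFAULT)
    model.classifier[3] = nn.Linear(model.classifier[3].in_features, NUM_CLASSES)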

Upon further research, our team is now considering removing the two MCUs from our hardware block diagram. The goal of the project can be achieved with a single computer board directly interfacing with our camera inputs and LCD outputs. This adjustment simplifies the architecture itself, but the complexity of the project remains, as we are still interfacing with the same hardware components as in the original plan.

This design change, however, requires corresponding updates to our communication protocols and power supply to ensure compatibility with the connected devices. In addition, if we choose to work with a Raspberry Pi 5, we will add an external AI accelerator to meet our use-case requirement of displaying results on the LCD within 7 seconds of the user being positioned correctly.

Provide an updated schedule if changes have occurred.

This week, we originally intended to finalize our hardware component selections for the project. However, upon further research, we realized that a discussion with our advisors is needed before we can make a decision and compile a parts list. This also affected our search for libraries, since they depend on which single-board computer we select. We plan to be caught up to our schedule by Wednesday of Week 5.

Siena’s Status Report for 9/20

Accomplishments

This week, I researched the hardware components for our project, such as the single-board computer (SBC), camera, and LCD. My main responsibility was evaluating the Raspberry Pi as a potential system controller. During this process, I realized that the two microcontrollers (MCUs) in our original design were unnecessary, since both the camera and LCD can connect directly to the SBC. I also identified an AI accelerator plug-in to better tailor the Raspberry Pi to our needs. Meanwhile, one of my teammates researched the NVIDIA Jetson, and we plan to discuss both options with our advisors next week.

In addition, I researched treatment approaches for the four skin conditions we will analyze (acne, oiliness, wrinkles, and sunburn) by reviewing online resources and recommendations.

My work is here: Week 4

Schedule

We are slightly behind schedule because our group wanted guidance on selecting the SBC and needed approval for our revised hardware plan (which removes the two MCUs). To ensure we can catch up quickly, my groupmate and I have researched both SBC options under consideration, along with the corresponding camera and LCD for each. With this groundwork completed, we expect to be back on track by early next week.

Next Week

Next week, we aim to finalize our parts list, get the updated diagram approved, and begin working on the communication protocols for the selected components. I plan to compile a list of relevant SDKs and draft an initial plan for how to implement them.