Team Status Report for 10/25

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Our most significant hardware risk persists from last week: our parts are still not here. However, through some snooping around, we were able to find a few replacement parts available on campus (for example, a Camera Module 3 and a button). We started working with them this week, but the bring-up process and some APIs may differ from the parts we actually ordered. This is now our greatest concern in terms of hardware progress. To mitigate this challenge, we have been looking into the differences between our temporary replacement parts and our desired ones, identifying their counterpart functions, and keeping the software interactions as portable as possible.

The most significant software risk persists as well: the accuracy of the models. The contingency plans similarly involve gathering more data, as explained in the design report and previous status reports. Additionally, if the sunburn model becomes exceedingly difficult to train effectively, it could push the project off schedule. The contingency plan is primarily to go back and gather more data; if that doesn't solve the problem, then re-reviewing existing solutions and directly implementing them would follow.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes were made to our existing design in terms of hardware. 

We have been sticking to our plan this week!

  • Provide an updated schedule if changes have occurred.

No change in schedule!





Isaiah Weekes Status Report for 10/25

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours)

A segment of this week was spent on the ethics assignment, but I also put time into training and developing the wrinkle model. This model generates a mask (a set of detected wrinkles), which can then be thresholded to determine skin condition/wrinkle count.

Face image

Wrinkles on face

Testing for a proper threshold for wrinkle count/density will be needed to ensure accurate analysis.
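As a rough illustration of that thresholding step (not our analysis code; the mask format, threshold, and density cutoffs below are placeholders that testing would need to tune):

```python
import numpy as np

def wrinkle_density(mask: np.ndarray, prob_threshold: float = 0.5) -> float:
    """Convert a predicted wrinkle probability mask into a density score.

    mask: 2D array of per-pixel wrinkle probabilities in [0, 1]
          (placeholder format; the model's actual output may differ).
    Returns the fraction of pixels flagged as wrinkles.
    """
    binary = mask >= prob_threshold   # threshold the soft mask
    return float(binary.mean())       # wrinkle pixels / total pixels

def wrinkle_label(density: float) -> str:
    """Map the density score to a coarse label; cutoffs are placeholders."""
    if density < 0.02:
        return "low"
    elif density < 0.08:
        return "moderate"
    return "high"
```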

One thing to note is that we are a little off schedule, as we planned to have all four models done, and only three are. This is all still before the building of the mirror, so it’s not acting as a bottleneck, but the sunburn model will need to be done by the end of next week to ensure the project can move forward smoothly.

Corin’s Status Report for 10/25

Accomplishments

This week, my parts for the display did not arrive, so I worked on the Raspberry Pi to experiment with some app setups and integrate a button. I hooked up a monitor to the RPi to experiment with the GUI, which will be exactly the same once our display arrives. We wanted the user to be able to initiate the analysis and to know that they did so successfully. I connected the button to the RPi and coded up a very basic button-display interaction to test that the user interface is intuitive. We haven't started integrating the camera yet, but the next step would be to also connect the button to image capture and to display the completed analysis and recommendations once our model is done.

Before Button Press

After Button Press
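For context, the basic button-display interaction described above could look something like this minimal sketch (gpiozero for the button and Tkinter as a stand-in GUI; the pin number and labels are placeholders, not our final app code):

```python
import tkinter as tk
from gpiozero import Button

BUTTON_PIN = 17  # placeholder GPIO pin; the actual wiring may differ

root = tk.Tk()
root.title("Mirror Mirror (prototype GUI)")
status = tk.Label(root, text="Press the button to start analysis", font=("Arial", 24))
status.pack(padx=40, pady=40)

button = Button(BUTTON_PIN)

def poll_button():
    # Poll the button from the Tk main loop so all GUI updates stay on one thread.
    if button.is_pressed:
        status.config(text="Analyzing...")
    root.after(100, poll_button)  # check again in 100 ms

poll_button()
root.mainloop()
```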

Schedule

Next week, I'm planning to work with Siena to first fully integrate the camera with the button and the app, then I will work with Isaiah to start the later part of the app: displaying the analysis-complete state and recommendations.

 

Deliverables

Hopefully a camera-button-app integrated system that acknowledges button_pressed, captures an image, and signals mid-analysis on the app. If time permits, also a done_analysis event triggered by the model and a recommendation page for the app.

 

Siena’s Status Report for 10/25

Accomplishments

Unfortunately, our parts are still not here 🙁 I've been trying to find ways around this, and I have started working with a different camera (Camera Module 3). Although the process and the API might be a little different, I was still able to bring up the camera. Through this, our team was able to discuss how we will actually interface the camera with our system's app. We are still trying out different methodologies to find the one that works best for us.
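For reference, a minimal bring-up along these lines with the Picamera2 library (a sketch only; the resolution and output filename are placeholders, and our final configuration may differ):

```python
from picamera2 import Picamera2

picam2 = Picamera2()
# Configure for a single still capture; the resolution is a placeholder.
config = picam2.create_still_configuration(main={"size": (1920, 1080)})
picam2.configure(config)
picam2.start()
picam2.capture_file("test_capture.jpg")  # quick sanity check that the camera works
picam2.stop()
```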

I also put in orders for the exterior of our project (wood and two-way mirror glass) and other tools we need to put it together.

Last but not least, I've been communicating with the IRB office. After I explained our capstone project and the purpose of our user study, they sent us a checklist to work through with our advisor. I have worked through the checklist and am now waiting for next week's meeting, where we can get advisor approval.

Course-Related Student Project Checklist – 1-4-22 – FINAL

Schedule

After altering the schedule last week, I think we are currently on track.

Deliverables

Next week, I hope we can run our ML model with the inputs from the camera (it’ll be even better if we can work on this with the camera that we ordered, which has higher resolution), have it interface with memory, and collect some results on the analysis. 

Isaiah’s Status Report for 10/18

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours)

A large amount of time this week was spent drafting the Design Report. Besides that, a majority of the remaining time was spent training and developing the machine learning models for the skincare analysis. In particular, the acne detection model took a large amount of the time. Developing the code for the YOLO model used in acne detection was more difficult than I initially thought, although the end results were quite satisfactory. I took guidance from the official YOLO repository, which helped make the process smoother.

Here’s a mosaic of detections from the validation set

Since the model is a detection model, I don't have a classification accuracy value yet, but visually the detections look good. Classification would be completed by checking for the presence of acne within a patch, and if it's there (with a strong enough confidence), classifying the patch as having acne. This also gives us a richer value than just a binary output for acne classification.
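A sketch of how that patch-level scoring could look, assuming the Ultralytics YOLO Python API (the weights path and confidence cutoff are placeholders, and the aggregation choice is illustrative only):

```python
from ultralytics import YOLO

model = YOLO("acne_yolo.pt")  # placeholder path to trained detection weights

def acne_score(image_path: str, conf_threshold: float = 0.25):
    """Return (has_acne, richer_score) for one face patch.

    The richer score here is simply the number of confident detections;
    detection confidences could also be aggregated instead.
    """
    results = model(image_path, conf=conf_threshold)[0]
    num_detections = len(results.boxes)
    return num_detections > 0, num_detections
```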

The wrinkle detection model is in progress, which likely leaves the sunburn model as the last one to be completed.

Team Status Report for 10/18

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Our current biggest risk is that the parts we ordered have not arrived. We have been stuck in this position for a while, so while the software has been making progress, our hardware work has not started yet. We checked in with our TA about our orders. To mitigate this situation, we have looked through a lot of resources and planned an even more thorough implementation for our hardware. Ideally, we'll be able to bring up the parts, connect them to the software, and have them work as intended without many challenges in between. Additionally, another major risk is the actual structure of the mirror. To account for our inexperience in this area, we have put in an order for a larger quantity of wood sheets so we have room for error.

As for software, the biggest risk is still the performance of the remaining two models. The acne detection model is working well on the dataset. The largest risk is likely the sunburn detection model. Increasing the amount of data via the scraping detailed in the report is likely the strongest risk mitigation.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

While making concrete plans for our design report, we decided to add more buttons. This was necessary because our LCD will sit behind the mirror and will not have a touchable screen. The added buttons will help the user navigate our Magic Mirror app. Because buttons are cheap and easily workable through GPIO, we don't expect much added challenge. If user-study feedback indicates that the buttons are hard to use, we may consider installing the LCD in front of the mirror (touchable screen).
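As a rough sketch of how multi-button navigation over GPIO could work (gpiozero assumed; the pin numbers and page names are placeholders, and the real app would update the GUI instead of printing):

```python
from gpiozero import Button
from signal import pause

# Placeholder GPIO pins for the navigation buttons.
btn_next = Button(17)
btn_prev = Button(27)
btn_select = Button(22)

pages = ["Home", "Analysis", "Recommendations"]
current = 0

def show(index):
    print(f"Now showing: {pages[index]}")  # stand-in for updating the app screen

def go_next():
    global current
    current = (current + 1) % len(pages)
    show(current)

def go_prev():
    global current
    current = (current - 1) % len(pages)
    show(current)

btn_next.when_pressed = go_next
btn_prev.when_pressed = go_prev
btn_select.when_pressed = lambda: print(f"Selected: {pages[current]}")

pause()  # keep the script alive, waiting for button presses
```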

Provide an updated schedule if changes have occurred.

We changed our schedule on the implementation and testing side – our orders are coming in a bit late, and we realized that we need more time to integrate all the software/hardware components. We decided to shorten the user study by a week (we think this is reasonable since getting approved by the IRB will also take time).

link to our schedule 

Part A: written by Isaiah

Our project is designed around analyzing skin and suggesting products to help improve skin health. Multiple components of the project require sensitivity to global considerations for the resulting product to be truly effective and accessible for a wide range of people. Beyond skin tone, environmental conditions such as a region's average temperature and humidity can vary widely. As an extreme case, the level of skin moisture that's common and healthy in tropical regions might not be the same in mountainous or subarctic climates. Typical tips and tricks used to identify skin conditions that are popular in one climate or for one group of people might not be reliable for another. Our product provides an algorithmic solution for analyzing skin that's trained and tested on a large variety of skin types, allowing for accurate analysis with a greater trend towards global invariance. Furthermore, because the recommendations describe generic products and product combinations, users can still make use of them in regions of the world where specific brands might not be available.

Part B: written by Corin

For any technology that requires personal data collection, many cultures, including American culture, take privacy and trust very seriously. Privacy is often tied to personal rights, and there's a cultural expectation that personal information should be protected (laws for medical privacy, student records, etc. exemplify that). People generally expect transparency about how their information is collected, used, and shared. Mirror Mirror on the Wall focuses heavily on this privacy aspect by ensuring that all image processing is done locally on a Raspberry Pi. We intentionally made this design choice so that our users can trust that their data will be kept private and can comfortably use a product that collects sensitive personal data.

Part C: written by Siena

Mirror Mirror on the Wall is primarily focused on skin care analysis and has no direct ties to environmental factors. However, we have considered environmental impacts in our design to offer sustainable use of resources. By using a Raspberry Pi 5, an energy-efficient embedded system, and performing all machine learning inference locally on the device instead of relying on cloud servers, our system avoids network-based carbon emissions. Although the physical components of the mirror (LCD, camera, case, etc.) necessarily contain electronic materials, our design encourages durability, modularity, and reuse, such that individual components are repairable or replaceable without scrapping the entire system. While the system never interacts directly with natural ecosystems or biological systems, its indirect environmental impact is reduced through sustainable hardware choices and power conservation.

 

Corin’s Status Report for 10/18

What did you personally accomplish this week on the project?

This week, we had to focus a lot on the design report. I wrote the abstract, introduction, use-case requirements, design requirements, and the hardware side of the design trade studies. I realized that we didn't include design trade studies in our design presentation, so I included tables in the design report to visualize the reasoning behind our component choices.

For implementation, this week has been the slowest. I am still waiting on my LCD display to connect it to the RPi. Although I did some research on the app and recommender system last week, I realized that there needs to be more planning on the software side to really integrate the button -> camera -> ML model -> recommender system -> app flow. Therefore, I changed direction to set up the Raspberry Pi and learn the different libraries to make integration easier.
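As a loose sketch of how that button -> camera -> ML model -> recommender system -> app flow could be stitched together once the pieces exist (every function name and data format below is a placeholder for a component that is still being planned):

```python
def capture_image():
    """Placeholder: camera capture (e.g., via Picamera2)."""
    return None  # would return an image/frame

def analyze_skin(image):
    """Placeholder: run the ML models (acne, wrinkles, sunburn) on the image."""
    return {"acne": 0, "wrinkles": 0.0, "sunburn": False}  # example findings format

def recommend(findings):
    """Placeholder: map findings to generic product recommendations."""
    return ["moisturizer"] if findings["wrinkles"] > 0.05 else []

def update_app(findings, recommendations):
    """Placeholder: push results to the Magic Mirror app/display."""
    print(findings, recommendations)

def on_button_pressed():
    image = capture_image()
    findings = analyze_skin(image)
    recommendations = recommend(findings)
    update_app(findings, recommendations)
```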

Is your progress on schedule or behind?

My progress is behind. I am planning to ramp up the pace after break. The software integration needs more planning, and I will discuss with Siena and Isaiah after break to combine the software sections.

What deliverables do you hope to complete in the next week?

While planning and gradually combining the software side, I'm hoping that the RPi will be set up with the basic components connected (button/display). Hopefully I can test the basic connections with a mock app or just Python code.



Siena’s Status Report for 10/18

Accomplishments
At the start of the week, our team divided the design report into sections. My part specifically covered system architecture, implementation, project management, risk mitigation, and the summary (please refer to the corresponding sections of our design report). While we had big ideas in place, I realized that we needed more concrete plans when actually writing the report. I thus refined our hardware architecture/implementation diagrams, defined how our button controls are going to work, and detailed the hardware-software as well as the peripheral interfaces. In addition, by following the CAD model, I added more products to our parts list.

Our parts didn't arrive yet, but I started working with the RPi camera module for now. I discussed with Isaiah the methods we want to use to pass frames to the software side (function call, direct software management, etc.).
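For the function-call option, a minimal sketch could look like the following (Picamera2 assumed; analyze_frame is a hypothetical stand-in for the software-side entry point):

```python
import numpy as np
from picamera2 import Picamera2

def analyze_frame(frame: np.ndarray):
    """Hypothetical software-side entry point that receives one frame."""
    print("Received frame with shape:", frame.shape)

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

frame = picam2.capture_array()  # numpy array handed directly to the analysis code
analyze_frame(frame)

picam2.stop()
```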

I also reached out to the IRB office, explaining our plans for our user study and what processes we need to go through in order to get it approved. The email is included below.

Schedule
Unfortunately, we are currently behind schedule. Our parts are not here, and this has been our bottleneck since last week. We are discussing plans to cut the time reserved for our user study from two weeks to one. This way, we'll have another week to actually work on our product.

Deliverables
Hopefully by next week, our parts will be here. My goal is to have the camera working and feeding correctly into our Magic Mirror app.

Siena’s Status Report for 10/4

Accomplishments

This week, I worked on building our parts list from our discussion based on our design review slides from last week. Our group had already ordered an RPi 5 from the inventory, so we ordered additional components like our camera, LCD, LEDs, and more. Aside from parts, I was messing around with the RPi we picked up. Although there wasn't much that I could do without the rest of our parts, I did extensive research into how we will integrate the camera and buttons. Here are the main resources for setup, and I have also drafted initial code based on them to take a picture when a button is pressed (a sketch of this approach appears after the resource links below). It has not been tested yet, but I will do that when everything gets here.

Parts Status:

Resources for buttons for control: https://gpiozero.readthedocs.io/en/stable/recipes.html#button

Resources for camera setup: https://docs.arducam.com/Raspberry-Pi-Camera/Native-camera/16MP-IMX519/#products-list 
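Along the lines of those two resources, a sketch of the untested capture-on-button-press idea might look like this (the GPIO pin, Picamera2 usage, and output filename are placeholders until the real parts arrive):

```python
from gpiozero import Button
from picamera2 import Picamera2
from signal import pause

BUTTON_PIN = 17  # placeholder GPIO pin

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

def capture():
    picam2.capture_file("capture.jpg")  # placeholder output path
    print("Image captured")

button = Button(BUTTON_PIN)
button.when_pressed = capture

pause()  # wait for button presses
```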

Schedule

For week 6, we are on schedule. We have ordered all the parts that we need in order to proceed with our project, and I have looked into different ways that I can bring up the camera and button, as well as how they communicate with each other and integrate through the RPi.

Next Week

In the next week, I think our group’s greatest concern is finishing up our design review documents. On a personal level, I wish to be able to bring up the camera and control it with buttons when the parts get here.



Team Status Report for 10/4

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risks haven't changed since last week: making sure our user design is intuitive and our models are accurate. We developed our physical design a bit more this week, and we discussed what the general flow of the Magic Mirror app should be to ensure ease of use.

In terms of hardware risk, the integration between the new IMX519 camera and the Raspberry Pi 5 introduces potential driver and compatibility challenges. To manage this, we’ve already researched and documented how to bring up the camera using the new picamera2 and libcamera stack. If hardware communication issues arise, our fallback plan is to test using a standard Pi camera first to validate the GPIO and control flow before reintroducing the IMX519. This ensures we can continue development on schedule even if certain hardware components take longer to configure.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

One of the only significant changes was adding more buttons to the system for better system control/user experience. We initially decided on having only one button, but as we planned out the app, we realized that navigating it with only one button would be confusing and inconvenient. We have many open GPIO pins on the RPi, and the buttons are still cheap, so this would mainly just affect our physical structure design.

  • Provide an updated schedule if changes have occurred.

No changes to the schedule as of now.