Isaiah Weekes Status Report for 10/25

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours)

A segment of this week was spent on the ethics assignment, but I also put time into training and developing the wrinkle model. This model generates a mask/set of wrinkles, which can then be thresholded to determine skin condition/wrinkle count.

Face image

Wrinkles on face

Testing for a proper threshold for wrinkle count/density will be needed to ensure accurate analysis.
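The thresholding step described above could look something like the following sketch. The function names, the 0.5 mask threshold, and the 0.10 density cutoff are all placeholders, not values from the project; the report notes the real cutoff still needs to be tuned against labeled examples.

```python
import numpy as np

def wrinkle_density(mask: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of pixels the model flags as wrinkle.

    `mask` is assumed to be the model's per-pixel wrinkle
    probability map, with values in [0, 1].
    """
    binary = mask >= threshold
    return float(binary.mean())

def classify_skin(mask: np.ndarray,
                  threshold: float = 0.5,
                  density_cutoff: float = 0.10) -> str:
    # density_cutoff is a stand-in; it would be swept over a
    # validation set to find the most accurate operating point.
    if wrinkle_density(mask, threshold) >= density_cutoff:
        return "wrinkled"
    return "smooth"
```

Tuning would then be a matter of sweeping `threshold` and `density_cutoff` over validation images and picking the pair that maximizes agreement with the labels.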

One thing to note is that we are a little off schedule, as we planned to have all four models done, and only three are. This is all still before the building of the mirror, so it’s not acting as a bottleneck, but the sunburn model will need to be done by the end of next week to ensure the project can move forward smoothly.

Team Status Report for 10/18

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Our biggest current risk is that the parts we ordered have not arrived. We have been stuck in this position for a while, so while the software has been making progress, work on the hardware has not yet started. We checked in with our TA about our orders. To mitigate the delay, we have looked through many resources and planned an even more thorough hardware implementation, so that ideally we can bring the parts up, connect them to the software, and have everything work as intended without many challenges in between. Another significant risk is the actual structure of the mirror. To account for our inexperience in this area, we have ordered a larger quantity of wood sheets so that we have room for error.

As for software, the biggest risk is still the performance of the remaining two models. The acne detection model is working well on its dataset; the largest risk is likely the sunburn detection model. Gathering more data via the scraping approach detailed in the report is likely our strongest mitigation.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

While making concrete plans for our design report, we decided to add more buttons. This was necessary because our LCD will sit behind the mirror and will not have a touchscreen. The added buttons will help the user navigate our Magic Mirror app. Because buttons are cheap and easy to drive through GPIO, we don't expect much added difficulty. If user-study feedback indicates that the buttons are hard to use, we may consider installing the LCD in front of the mirror as a touchscreen.

Provide an updated schedule if changes have occurred.

We changed our schedule on the implementation and testing side: our orders are arriving a bit late, and we realized we need more time to integrate all of the software and hardware components. We decided to shorten the user study by a week (we think this is reasonable, since getting IRB approval will also take time).

link to our schedule 

Part A: written by Isaiah

Our project is designed around analyzing skin and suggesting products to help improve skin health. Multiple components of the project require sensitivity to global considerations for the resulting product to be truly effective and accessible for a wide range of people. Beyond skin tone, environmental conditions such as a region's average temperature and humidity vary widely. As an extreme case, the level of skin moisture that's common and healthy in tropical regions might not be the same in mountainous or subarctic climates. Tips and tricks for identifying skin conditions that are popular in one climate or for one group of people might not be reliable for another. Our product provides an algorithmic approach to analyzing skin that is trained and tested on a large variety of skin types, allowing for accurate analysis with a greater trend toward global invariance. Furthermore, by recommending generic products and product combinations rather than specific brands, our system remains useful in regions of the world where particular brands are unavailable.

Part B: written by Corin

For any technology that requires personal data collection, many cultures, including American culture, take privacy and trust very seriously. Privacy is often tied to personal rights, and there is a cultural expectation that personal information should be protected (laws covering medical privacy, student records, etc. exemplify this). People generally expect transparency about how their information is collected, used, and shared. Mirror Mirror on the Wall focuses heavily on this privacy aspect, ensuring that all image processing is done locally on a Raspberry Pi. We intentionally made this design choice so that users can trust that their data will be kept private and can comfortably use a product that collects sensitive personal data.

Part C: written by Siena

Mirror Mirror on the Wall's initiative is primarily focused on analysis of skin care and has no direct ties to environmental factors. However, we have considered environmental impacts in our design to offer sustainable use of resources. By using a Raspberry Pi 5, an energy-efficient embedded system, and performing all machine learning inference locally on the device instead of relying on cloud servers, our system avoids network-based carbon emissions. Although the physical components of the mirror (LCD, camera, case, etc.) necessarily contain electronic materials, our design encourages durability, modularity, and reuse, such that individual components are repairable or replaceable without scrapping the entire system. While the system never interacts directly with natural ecosystems or biological systems, its environmental impact is reduced indirectly through sustainable hardware choices and power conservation.


Team Status Report for 10/4

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risks haven't changed since last week: ensuring our user design is intuitive and our models are accurate. We developed our physical design a bit more this week, and we discussed what the general flow of the Magic Mirror app should be to ensure ease of use.

In terms of hardware risk, the integration between the new IMX519 camera and the Raspberry Pi 5 introduces potential driver and compatibility challenges. To manage this, we’ve already researched and documented how to bring up the camera using the new picamera2 and libcamera stack. If hardware communication issues arise, our fallback plan is to test using a standard Pi camera first to validate the GPIO and control flow before reintroducing the IMX519. This ensures we can continue development on schedule even if certain hardware components take longer to configure.
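The fallback plan above can be expressed as simple camera-selection logic. This is a sketch only: the probe callables are stand-ins for real bring-up attempts (e.g. a wrapper around constructing a `picamera2.Picamera2` instance), and the camera names are illustrative.

```python
from typing import Callable, Optional

def pick_camera(probes: list[tuple[str, Callable[[], bool]]]) -> Optional[str]:
    """Return the name of the first camera whose probe succeeds.

    Each probe is a zero-argument callable that returns True if the
    camera initializes; any exception (e.g. a missing driver)
    counts as a failure, and we move on to the next candidate.
    """
    for name, probe in probes:
        try:
            if probe():
                return name
        except Exception:
            continue
    return None
```

On the device, this would be called as something like `pick_camera([("IMX519", try_imx519), ("Pi Camera v2", try_v2)])`, so development can continue on the standard camera if the IMX519 bring-up stalls.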

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward

One of the only significant changes was adding more buttons to the system for better system control/user experience. We initially decided on having only one button, but as we planned out the app, we realized that navigating it with only one button would be confusing and inconvenient. We have many open GPIO pins on the RPi, and the buttons are still cheap, so this would mainly just affect our physical structure design.
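The multi-button navigation might be modeled as a small state machine like the one below. The screen names and the three-button layout ("up", "down", "select") are assumptions for illustration; on the real device each button press would arrive via a GPIO pin callback rather than a string.

```python
class MenuNavigator:
    """Cycle through app screens with three button inputs.

    Screen names are placeholders; on the RPi, each press()
    call would be triggered by a GPIO edge-detect callback.
    """
    SCREENS = ["Live Analysis", "History", "Recommendations", "Settings"]

    def __init__(self) -> None:
        self.index = 0        # currently highlighted screen
        self.selected = None  # screen confirmed by "select"

    def press(self, button: str) -> str:
        if button == "down":
            self.index = (self.index + 1) % len(self.SCREENS)
        elif button == "up":
            self.index = (self.index - 1) % len(self.SCREENS)
        elif button == "select":
            self.selected = self.SCREENS[self.index]
        return self.SCREENS[self.index]
```

With a single button, the user could only cycle forward; adding "up" and "select" is what makes navigation feel natural, which matches the reasoning for adding more buttons.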

  • Provide an updated schedule if changes have occurred.

No changes to the schedule as of now.

Isaiah’s Weekly Status Report for 10/4

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week

I went back and updated the oily-skin model as described in last week's report, and found success. The model learned well and performs well on the test dataset, achieving at best 87% test accuracy, which is satisfactory. Also, validation was done with noise added, so the true accuracy might be slightly higher. I then moved on to acne detection. At first I tried a similar classification method but didn't find much success. After more research, I found that both my dataset and many existing methods frame acne as a detection problem (generating bounding boxes for acne). So I decided to develop an acne detection model and use the presence or absence of detected acne to classify any given face. That is in the works as I write this.
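The noise-added validation mentioned above might look like the sketch below. The Gaussian noise level (`sigma=0.05`) is a hypothetical value; the report does not state the actual noise used.

```python
import numpy as np

def add_gaussian_noise(image: np.ndarray, sigma: float = 0.05,
                       rng=None) -> np.ndarray:
    """Add zero-mean Gaussian noise to a float image in [0, 1].

    Evaluating on noisy copies of the validation set gives a
    conservative accuracy estimate, which is why the clean-data
    accuracy might be slightly higher than the reported 87%.
    """
    rng = rng or np.random.default_rng()
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep valid pixel range
```

Each validation image would be passed through this perturbation before inference, so the reported accuracy reflects performance under degraded input.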

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule? What deliverables do you hope to complete in the next week?

Progress is on schedule: I have one model done, likely the hardest, and another that will be done soon. This leaves next week and fall break for training the last two models. Wrinkles will likely be tackled in a similar manner to acne, as a detection problem.

Team Status Report for 9/27

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

This week, we worked mostly on coming up with detailed hardware and software designs. As we integrated the two and discussed with our mentors, we improved the design so that it better fits the purpose and target audience of our product. For example, we decided to leave out the accelerator and first see whether we meet our use-case requirements without it. There is always a possibility that we fall short and have to adjust the design to accommodate an accelerator after all.

Additionally, we all lack experience in building physical products. We expect difficulties during this process, so we made a CAD model as a general guideline and solidified the design and dimensions.

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We decided on the board for our product: RPi 5 8GB. Although we thought about including an additional accelerator, we realized that our product does not require a heavy ML workload for us to need an accelerator. To keep our design cheap and simple, we decided to attempt our ML processing on just the RPi. 

Our group also decided to include an app (we’re thinking about an on system app for now). This app is displayed on our LCD display to show the current session’s skin analysis as well as data stored from past user sessions. 

Training on whole-image datasets proved difficult, with poor generalization. This caused a shift to training with close-up, skin-patch-based methods, which have seen more success in published papers. This requires a higher-resolution camera, but that is well within spec: the newer Raspberry Pi-compatible cameras all have at least 12MP resolution, which is plenty for accurate skin patches. A side effect is that the user must be closer to the mirror than previously projected (~40cm), but this is still within standard desktop-mirror distances. Fine-tuning was not scheduled for this week, so we're still on track in that regard.
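The patch-based approach above amounts to tiling a face crop into fixed-size squares before classification. The sketch below assumes 224-pixel patches with no overlap; both values are placeholders, since the report does not specify the patch geometry.

```python
import numpy as np

def extract_patches(image: np.ndarray, patch: int = 224,
                    stride: int = 224) -> list[np.ndarray]:
    """Tile a high-resolution face crop into square patches.

    A face crop from a 12MP frame easily yields several patches
    of this size; each patch would then be classified
    independently and the results aggregated per face.
    """
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
    return patches
```

Overlapping patches (stride < patch) would trade compute for more training samples per image, which may matter given the limited datasets discussed elsewhere in these reports.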

  • Provide an updated schedule if changes have occurred.

We’re on track as we finalized our parts list and compiled a detailed implementation plan with block diagrams. We have also planned out our communication protocols for our device interface. 



Team Status Report for 9/20

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk as of now is ensuring that we can deploy our model on whichever board we choose. For a Raspberry Pi + AI kit, using an external accelerator and connecting it to our main processor will be a tedious process, and from previous experience there is a lot of debugging involved. The Nvidia Jetson Nano would be more convenient since the GPU is included in the kit, but none of us have experience with that board, so getting it up and running with the full software stack will be a challenge. Another challenge is ensuring that all of our input/output data processing can be done. Once we decide on the camera and the LCD display, writing the scripts for image processing (to feed into our model) and producing output data (analysis for each of the classes) can be challenging, since we do not have embedded experience with these boards. We are currently looking into two different boards with compatible cameras/LCD displays so that we have an alternative plan. However, because the parts are expensive, we want our first choice to work and are doing extensive research into open-source CV projects that work with the boards we researched.

Another potential risk is the classification accuracy on the oiliness detection task. Unlike the other tracked factors, oiliness is much less visually distinct. This could result in classification accuracy below our 85% target when training only on the initial datasets. To counteract this, if model performance falls below our expectations, we will gather new data points through web scraping and potentially new images we collect ourselves. Models to detect oiliness have been implemented before, so gathering more data will likely drive down the error rate.

The datasets available for skin burns are either locked behind stock-image distributors' paywalls or contain many images of much greater severity, and they don't solely focus on the face. This could bias our sunburn detection model toward classifying only severe burns. To counteract this, we will augment the existing dataset with images of sunburns scraped from Google Images. Google image results for sunburns are much better aligned with the goals of our system and match the typical levels of sunburn people face in their day-to-day lives.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We changed our computer vision model backbone from MobileNetV2 to MobileNetV3-Small. MobileNetV3 is a more modern iteration and has better performance on edge devices, while retaining similar classification accuracy (measured by ImageNet top-1 and top-5 accuracy).

Upon further research, our team is now considering removing the two MCU units in our hardware block diagram. The goal of the project can be achieved with just a single computer board directly interfacing with our camera inputs and LCD outputs. This adjustment simplifies the architecture itself, but the complexity of the project remains as we are still interfacing with the same hardware components from the plan.

This design change, however, requires corresponding updates to our communication protocol and power supply to ensure compatibility with the connected devices. In addition, if we choose to work with a Raspberry Pi 5, we will have an external AI accelerator to meet our use-case requirement of displaying results on the LCD within the 7 seconds from the user being positioned correctly.

Provide an updated schedule if changes have occurred.

This week, we originally intended to finalize the selection of the different hardware components for our project. However, upon research, we realized that a discussion with our advisors is needed to make a decision and compile a parts list. This also affected our search for libraries, as they depend on which single-board computer we select. We will catch up to our schedule by Wednesday of Week 5.