Siena’s Status Report for 11/8

Accomplishments

This week, we began integration. Our group worked together most of the time, but I concentrated on connecting the camera input to the system's app: showing a live camera preview before the picture is taken, saving the captured image, and passing it to the ML models that Isaiah built. There were a few hiccups in setting up an environment that supports both the camera and the ML models, but all were resolved.
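The capture-save-handoff flow can be sketched as below. This is only an illustrative outline, not our actual integration code: the names `save_capture`, `analyze`, and `run_models` are hypothetical, and the real camera and model interfaces will differ.

```python
import os
import time

# Hypothetical sketch of the capture -> save -> model handoff. The camera and
# the ML models are assumed to be reachable through simple callables; these
# names are placeholders, not our real APIs.

def save_capture(image_bytes: bytes, out_dir: str = "captures") -> str:
    """Save raw image bytes under a timestamped filename and return the path."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"capture_{int(time.time())}.jpg")
    with open(path, "wb") as f:
        f.write(image_bytes)
    return path

def analyze(image_bytes: bytes, run_models) -> dict:
    """Persist the capture first, then hand the saved file to the ML models."""
    path = save_capture(image_bytes)
    return run_models(path)
```

Saving to disk before running the models keeps the camera and ML sides decoupled, which matched how we debugged them separately this week.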

In addition, Corin and I laser-cut the plywood panels and built a mock setup for our interim demo. The two of us also refined the system app together for a more pleasant user interface.

Schedule

We are currently on schedule!

Next Week

Next, I hope to integrate the recommender system, since we are now able to extract and display the outputs of our analysis.

Corin’s Status Report for 11/1

Accomplishments

This week, all of our parts arrived on Thursday. I connected the screen display to the RPi so that we can see the mock app on the display. I also connected two buttons (since we cannot touch the screen behind the mirror film): one to start the analysis and one to view its results. I continued working on the mock app and added a trends page with a chart driven by a mock JSON file of analysis results. I also checked that the screen is bright enough to be seen through our mirror film and that users can see themselves reasonably well when there is a dark surface behind the film. Siena and I wanted to combine the camera with our mirror app and buttons, so that pressing a button takes a picture and notifies the user that their analysis has begun. However, we ran into problems with the new camera, so we will continue working on it next week.
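The trends chart is currently driven by a mock JSON file; a minimal sketch of that parsing step is below. The schema shown here (a list of `{"date", "score"}` entries) and the dates/values are made up for illustration; our actual analysis output format may differ.

```python
import json

# Mock analysis results, standing in for the JSON file the chart reads.
# The schema and values here are illustrative only.
MOCK_RESULTS = json.dumps([
    {"date": "2025-10-28", "score": 0.62},
    {"date": "2025-10-30", "score": 0.71},
    {"date": "2025-11-01", "score": 0.68},
])

def trend_points(raw_json: str):
    """Parse mock results and return (date, score) pairs sorted by date."""
    entries = json.loads(raw_json)
    entries.sort(key=lambda e: e["date"])
    return [(e["date"], e["score"]) for e in entries]
```

Keeping the chart code behind a small parsing function like this should make it easy to swap the mock file for real model output later.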

Schedule

I am mostly on schedule. Next week, I will need to work with both Isaiah and Siena to integrate the basics of the whole system for the demo. The goal is to at least have button → camera snap → model start → model output → results shown in the app.
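That chain of stages can be sketched as a simple pipeline with each stage stubbed out as a callable. This is a hypothetical outline for the demo wiring, not our actual code; the stage names and stubs are placeholders for the real camera, model, and app pieces we will plug in.

```python
# Sketch of the demo pipeline (button -> camera snap -> model -> results on
# app). Each stage is an injected callable so the real camera, model, and
# display code can be swapped in independently. All names are placeholders.

def run_pipeline(snap, run_model, show_results):
    """Run the stages in order once the start button fires."""
    image_path = snap()              # camera takes a picture
    results = run_model(image_path)  # ML model analyzes the capture
    show_results(results)            # app displays the analysis
    return results

# Example run with stub stages that just record the order of events:
log = []
run_pipeline(
    snap=lambda: (log.append("snap"), "img.jpg")[1],
    run_model=lambda p: (log.append("model"), {"input": p})[1],
    show_results=lambda r: log.append("shown"),
)
```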

Deliverables

The demo is next week, so our group wants to connect at least the functional parts. We want a camera input, buttons for the basic start/view-results controls, and an app that shows users the results of our analysis.

Although the physical mirror is unavailable, we want the skeleton to be put together.


Team Status Report for 11/1

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

This week all of our parts arrived! We have made progress on most of our subcomponents, but because the parts arrived on Thursday, we did not have much time to integrate them. As we bring the parts up one by one, we expect many hurdles in putting everything together. Siena was working with another camera before our actual camera arrived, and the new camera already has a problem that we did not encounter with the old one. This is preventing us from connecting it to the rest of the hardware (the buttons and the screen with the mock app).

We also need to start combining all of our mock setups with the actual data input/output from our camera and the ML model. During this process, we will need a lot of communication and trial and error, since we expect various compatibility issues between our mock data formats and the actual outputs from the camera and ML model.

Isaiah has finished training the models for the majority of our classes. Since most of our subcomponents are done, we aim to mitigate the risks that come with integration by working together next week to connect everything: the camera inputs to the model and the model outputs to the display (app).

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes were made to our existing design requirements. We just received all of our parts, and after checking them out and testing a few components, we believe our existing design will be sufficient.

  • Provide an updated schedule if changes have occurred.

The schedule didn’t change, except that the hardware build-up/testing will run one week longer (until the interim demo) because our parts arrived later than expected. Everything else remains the same.

link to schedule: https://docs.google.com/spreadsheets/d/1g4gA2RO7tzUqziKFuRLqA6cGWfeL0QYdg5ozW9hug74/edit?usp=sharing

Siena’s Status Report for 11/1

Accomplishments

Mid-week, we finally received our parts and could begin working on our project. However, I had a lot of problems setting up the new camera. I tried following the guidance on the vendor's website, watching YouTube tutorials, reading through the documentation, and more, but nothing seemed to work. I spent most of the week debugging without success, so I plan to ask the TAs on Monday.

Because I couldn’t make much progress with the camera, I helped Corin bring up the LCD. I mostly worked on the connection while Corin worked on our system app. While working, we took a short video showing how the LCD shows through the two-way mirror. The “two-way” aspect of the mirror worked well, but we are a little concerned about the “mirror” functionality, as the reflected color is slightly distorted.


Schedule

We are currently on schedule. 

Next Week

I hope to get the camera working early next week. Afterwards, I want to work on the actual integration with the ML model that Isaiah has been working on. I expect to be able to have some kind of analysis result by the end of next week.

Additionally, I hope to talk to one of the advisors to move forward in the user study approval process.

Isaiah’s Status Report for 11/1

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I focused on getting the last model ready. I ended up switching to a YOLO head for sunburn as well, since it worked well for acne. As with acne, this will need to be fine-tuned to determine what detection confidence/density should count as “sunburn” vs. “no sunburn”. Empirical testing can be done against our >85% accuracy standard, but some qualitative tuning and testing will also be needed, since there is a gray area between the two binary categories.
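The confidence/density thresholding described above can be sketched as follows. The threshold values and the function name here are placeholders to be tuned empirically against our >85% accuracy target, not the actual numbers from the model.

```python
# Sketch of turning YOLO burn detections into a binary sunburn label by
# thresholding per-box confidence and the number of confident boxes
# (a rough density proxy). Both thresholds are placeholder values.

CONF_THRESHOLD = 0.5   # minimum per-box confidence to count a detection
MIN_BOXES = 2          # minimum confident boxes to call it "sunburn"

def classify_sunburn(detections):
    """detections: list of (confidence, bbox) pairs from the YOLO head."""
    confident = [d for d in detections if d[0] >= CONF_THRESHOLD]
    return "sunburn" if len(confident) >= MIN_BOXES else "no sunburn"
```

Sweeping `CONF_THRESHOLD` and `MIN_BOXES` over a labeled validation set is one way to do the empirical part of the tuning; the gray-area cases would still need qualitative review.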

Below is one of the scraped images displayed with burn bounding boxes. The image on the right is more emblematic of typical use cases, but the model detects a burn in both cases.