Status Reports

Team Status Report for 10/26

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

A risk we have is the transmission of data from the Jetson to the server. Currently, when set up in the computer labs, the Jetson is connected to CMU Device and has no issues communicating. During presentations, however, this will no longer be the case and the Jetson will be isolated, which could make it an issue to send classification data and images to the server to update the frontend and database. We are considering a few different solutions: USB/Ethernet cables to connect to the server, hotspotting, or simply connecting to CMU Secure if that is possible. We plan to test some of these solutions next week to see whether our embedded device can communicate without being plugged in and connected to CMU Device.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Yes, we decided against using AWS S3 for storing our images. This change was necessary because S3 pricing varies with the amount of storage you use, which made it complicated to use our Capstone fund for it. We will instead use one of our computers as the server for our images. This is slightly inconvenient since we will need to provide the storage ourselves, but it should not cause any issues. Moving forward, if the number of images exceeds the storage available on our computers, we will look into other places to store them and revisit the pricing for S3.

Provide an updated schedule if changes have occurred.

Our schedule has not changed. 

This is also the place to put some photos of your progress or to brag about a component you got working.

Photos of our progress are located in our individual status reports.

Alanis’ Status Report for 10/26

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I made progress on the frontend by integrating my local Android emulator with the frontend repo that Gabriella has been working on. I then changed our main page to look more like what was proposed in our design report. This code is located on a separate branch of our frontend repo because Gabriella is currently using the main page to display debugging content from the backend. I am using a placeholder image for now, as I am still working on drawing the actual image that we will be using.

I also worked on our ethics assignment and discussed the group questions with Gabriella and Riley during our weekly meeting. During this meeting, I also helped Riley debug why importing OpenCV wasn't working in Python. We initially thought the OpenCV installation was incorrect, but the actual cause was that the Python path was wrong, so Python could not find the installed libraries. I then wrote code that takes an image provided by OpenCV from the camera and feeds it into our 3 models to output a one-hot encoded array denoting which class it belongs to. We may have to change this array later depending on exactly how we communicate the classes to our backend, but this code holds the general structure of the edge-device classification and allows for later output processing. The code is commented out for now to avoid conflicting with Riley's work. I also wrote code for testing our models on new data using pictures of clothes from our own closets.

predicted_classes is a one-hot encoding of the possible classes. I will use this code with pictures of our clothes to calculate the accuracy of our models on pictures that we take. 
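As an illustration of the structure described above (not our exact code), the three-model step producing `predicted_classes` can be sketched as follows. The class lists and the model callables are placeholder assumptions; the real code runs three trained ResNet50 models.

```python
import numpy as np

# Illustrative sketch of the edge-device classification step.
# Class lists below are placeholder assumptions, not our real labels.
TYPE_CLASSES = ["top", "pants", "blazer", "sweatpants"]  # illustrative
COLOR_CLASSES = ["black", "white", "red", "blue"]        # illustrative
USAGE_CLASSES = ["casual", "formal", "sports"]

def one_hot(index, length):
    vec = np.zeros(length, dtype=int)
    vec[index] = 1
    return vec

def classify(image, type_model, color_model, usage_model):
    # Each model is assumed to return an array of per-class scores.
    parts = [
        one_hot(int(np.argmax(type_model(image))), len(TYPE_CLASSES)),
        one_hot(int(np.argmax(color_model(image))), len(COLOR_CLASSES)),
        one_hot(int(np.argmax(usage_model(image))), len(USAGE_CLASSES)),
    ]
    # predicted_classes: one one-hot segment per category, concatenated.
    return np.concatenate(parts)
```

The concatenated layout keeps the output processing simple: the backend can slice the array by category lengths to recover each classification.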

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule.

What deliverables do you hope to complete in the next week?

I hope to complete the integration of our 3 classification models with the Jetson Xavier so that it is able to take an image with its attached camera, feed it into our models, and output a class for the clothing type, color, and usage, which can then be transmitted to our backend. I also hope to continue testing our classification models on pictures of clothes from our own closets.

Riley’s Status Report for 10/26

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

In summary, my progress this week was installing VS Code and connecting GitHub on the Xavier NX, connecting the camera to the Xavier NX, and taking a picture with the camera using OpenCV. A lot more work went into this than the simple description suggests. First, in the process of installing VS Code on the embedded device, I followed the instructions from VS Code after downloading the arm64 version. I originally downloaded the amd64 version, but that obviously didn't work on the Xavier NX, which uses ARM. After I downloaded the deb file and followed the instructions, I still ran into issues with the window not opening after clicking on the icon or typing code into the terminal. I tried debugging for a little while, and none of the most available resources (https://askubuntu.com/questions/1410992/vscode-not-opening-on-arm64-ubuntu-20-04 and https://askubuntu.com/questions/1022923/cannot-open-visual-studio-code) were able to help. Eventually I stumbled upon a website that recommended running code --no-sandbox, which worked. To the best of my knowledge, this occurs because in certain Linux or other environments, launching the Chromium sandbox is impossible, so we needed to disable the feature and give up some of its safety protections.

After that, I connected the camera to the Xavier NX, followed these Arducam instructions, and it worked very well. I was able to set up, view, and change the camera settings with the GUI installed in the instructions.

After connecting the camera, the next step was to use OpenCV to take a picture from the camera. This was the most frustrating part to debug since, despite the fact that OpenCV was downloaded, it kept saying the module was not installed. I tried to debug on my own for a little while, trying the typical Google searches and websites, but eventually I stalled out and pivoted to other work. Eventually Allie and Gabi were able to help debug, and Allie found that we needed to add the Python path to the modules in .bashrc, as this hadn't been done. Once we did this, it worked and we were able to run OpenCV. This process took 3-4 hours of debugging and should have worked far earlier than it did in practice, but my guess is that some part of our environment made it so the installation didn't "stick" correctly until we put the path in .bashrc.
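A quick diagnostic for this kind of "installed but module not found" issue is to ask the interpreter itself where it looks for modules; a minimal sketch (the cv2 module name comes from our setup, and the printed paths are environment-specific):

```python
import importlib.util
import sys

# Show where this interpreter searches for modules and whether it can
# locate cv2. If cv2's install directory is missing from the search
# path, PYTHONPATH (e.g. exported in .bashrc) is the usual fix.
print("Python executable:", sys.executable)
for entry in sys.path:
    print("  search path:", entry)

spec = importlib.util.find_spec("cv2")
print("cv2 found at:", spec.origin if spec else "not found")
```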

The actual OpenCV script that we used to take a still image can be found in many different places, but I found it in a YouTube video. I had to make a small modification to make it work with our camera, but the rest of the code is from the first demo of the video. Below is a picture of the code, with a couple of extraneous print statements, and an image that was taken by our camera. The image looks slightly blurry due to the time it took to process and an unsteady hand. The capture time is something we might have to reduce further; however, I think this should be possible, and it's likely just an OpenCV setting that made the shutter window longer than it needed to be. After this, I connected our GitHub with the Xavier NX and pushed the code.

OpenCV Starter Code

Resulting Image

If you want higher-quality images, please Slack me! For some reason, the process of moving it onto the website really lowered the quality. I have higher-quality versions that I can show if desired!

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

On schedule.

What deliverables do you hope to complete in the next week?

Next week I want to write a script that uses OpenCV to continuously send images. Once this works with a low enough time interval to meet our timing constraints, I will try to move the models to the Xavier NX and run some of these images through the model (or a sample model) to ensure that the formatting of the images is as expected on both sides.

Gabriella’s Status Report for 10/26

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I worked on my ethics assignment and different frontend components.

I reworked the frontend to use a stateful widget instead of a stateless widget after reading this article. I made this decision because we will want to save information about the generated outfit on our page.

I also added functionality to the generate outfit button on our frontend. Clicking this button now brings up a popup with a dropdown. I used this tutorial for adding the dropdown. Users can select their outfit type from this dropdown, and when they click submit, the generated outfit details are displayed on the main page. I used this tutorial for making HTTP requests in Flutter. This image shows the generated outfit details, produced by our algorithm using this sample data in our database. Eventually, we will want to display this on a separate page where users can either continue generating outfits or select their outfit. Our application frontend is now linked to the backend and database.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on schedule.

What deliverables do you hope to complete in the next week?

Next week, I hope to start displaying the clothing item images on the frontend. If I have extra time, I would like to start working on adding a new table for user preferences in our database, and possibly updating the current outfit generation algorithm to take this into account.

Team Status Report for 10/19

Status Report 4

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk at the moment is our clothing classification. We are managing this risk by developing a new way to classify clothing, using three different trained models. This should help us get the accuracy we need as each model will be able to focus on one aspect of the item. One model will classify usage, another will classify item type, and the third will classify color. If we are not able to achieve our design requirement classification accuracy, our contingency plan is to have the user confirm that the detected classifications are correct. If they are not, we will allow the user to correct them.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Yes, we updated the plan for our outfit generation algorithm to take into account the user's preferences in addition to the clothing they have. This change is necessary because we want the outfits we generate to make sense for the occasion and the user's style. It means we will need to add a new table to our database, but it should not cause issues time-wise, as we already allocated time to update our outfit generation algorithm. Going forward, we will make sure to complete the changes for using preferences in the time we have set aside.
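As a rough illustration of the new preferences table, a sketch follows; the column names and the in-memory SQLite backend are illustrative assumptions, not our actual schema.

```python
import sqlite3

# Hedged sketch of a user-preferences table for outfit generation.
# Column names are assumptions; the real schema lives in our backend.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user_preferences (
        user_id       INTEGER,
        clothing_type TEXT,
        usage         TEXT,
        weight        REAL DEFAULT 1.0
    )
""")
conn.execute(
    "INSERT INTO user_preferences (user_id, clothing_type, usage, weight) "
    "VALUES (?, ?, ?, ?)",
    (1, "blazer", "formal", 2.0),
)
# The generation query could then join on this table to bias results
# toward the user's preferred types for a given usage.
row = conn.execute(
    "SELECT clothing_type, usage FROM user_preferences WHERE user_id = 1"
).fetchone()
print(row)  # ('blazer', 'formal')
conn.close()
```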

We also updated the way we are classifying clothing: we will use 3 models instead of one. This change was necessary because we were not reaching the classification accuracy we wanted with our previous design. It will not affect our latency requirement or our Jetson storage requirement, since 3 individual ResNet50s are smaller in total size than our original model, and running classification with a ResNet50 on a lower-spec Jetson takes around 1 second, meaning classifying 3 times will take around 3 seconds. This is discussed in more detail in our design report. Moving forward, we will not host our application on the Jetson, so that the device can use all its power and have enough storage for the three models.

Provide an updated schedule if changes have occurred.

Our schedule has not changed.

This is also the place to put some photos of your progress or to brag about a component you got working.

Photos of our progress are located in our individual status reports.

Additional Question:

Part A: Written By Alanis

Our product solution will discourage people from purchasing fast fashion products by recommending stylish and weather/occasion-appropriate outfits that already exist in their own closets. Style Sprout allows users to make the most of their existing wardrobe. While this makes getting dressed in the morning easier, it also reduces the need to purchase new fast-fashion items, minimizing textile waste and resource consumption. As mentioned in part C, the environmental benefits of discouraging fast fashion help everyone, since the earth's health and global warming affect us all. Furthermore, fast fashion companies often exploit workers around the world, since producing cheap clothing can mean underpaying workers or maintaining unsafe factory conditions. Fast fashion leads to many unethical labor practices, as mentioned in this article. Style Sprout encourages users to confidently reuse and style their current clothes, and this push away from fast fashion will also withdraw support from unethical fast fashion companies that exploit workers in countries all over the world, like Bangladesh, India, and China.

Part B: Written by Riley

Our product solution touches upon many cultural factors, since clothing and attire are an inseparable part of culture. Clothing is deeply tied to the representation of culture, tradition, and identity, making it necessary for us to consider how our solution might affect these factors. Another unavoidable truth is that clothing is inherently personal. It is near impossible for an algorithm to summarize and tailor suggestions to every user's unique style. As such, our solution addresses this by largely disregarding style as a parameter. Predictions will at first only be given for combinations that include covering for the chest and the legs, and will not distinguish between styles beyond basic color matching. In this way, we intend to let users maintain their individuality and express their cultural values. Only after the user selects clothing would we attempt to partially incorporate their previous selections into future recommendations.

Another cultural implication of our project solution is the Western clothing bias inherent in our system. Even if we disregard our personal biases, the types of clothing included, the valid color schemes, and what is classified as "formal" or "casual" are all rooted in Western (specifically American) culture. It is therefore important for us to acknowledge that our solution is not emblematic of all cultures and all peoples, but is inherently limited in its usability for some. This is not an issue that can be solved without a massive undertaking, but it is nonetheless vitally important to consider and disclaim.

Part C: Written by Gabriella

Our product solution addresses the environmental impact of fast fashion. Fast fashion is hugely detrimental to the environment, as it leads to increased waste and overconsumption. Style Sprout encourages users to make better use of their existing wardrobe, reducing the need to purchase new clothing. Through wardrobe management and outfit generation, our app aims to help users make more intentional use of their wardrobe.

We also hope to help decrease the pressure to participate in fast fashion trends by generating stylish outfits for our users that align with both convenience and environmental consciousness.

Riley’s Status Report for 10/19

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

Over the past two weeks, since the last status report, I have primarily accomplished two things. The first, and more time-consuming, was the writing, editing, and integration of the design report. This took a good amount of time, even though we had gone over all the information in various forms; expanding on that information and collating it into report format was not a trivial task for our team. Past that, over break I received news that the camera has arrived, and in response I have begun preparing to understand what format the camera will provide and how to connect it while getting information to and from the Xavier NX. Specifically, I have looked at several OpenCV tutorials and determined which image format the video will arrive in. With that information, it will be possible to lower the learning curve that comes with implementing and integrating the camera. Additionally, I have downloaded the frontend repo and gotten acquainted with its structure.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule; the hardware components took some time to arrive, but not so much that there is any concern yet. Additionally, I have been able to do some preparatory work to speed up the integration.

What deliverables do you hope to complete in the next week?

Next week I intend to receive the camera, detect images from it, and store them in an OpenCV format, all on the Xavier NX. Additionally, if time allows, I intend to download our model (or a test model) onto the Xavier NX, attempt to run it, and either send information to our API or simply collate it into a format ready for sending, depending on my and my teammates' progress.

Alanis’ Status Report for 10/19

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

Over the past two weeks, I made a large pivot in the way I addressed model architecture and training. Our model was insufficiently accurate, and I believe this was due to three things. First, I tried to train one ResNet50 model to classify three things (clothing type, color, and usage), which was difficult to get high accuracy on because it is a very complicated task. Furthermore, I treated the task as multilabel classification, where each of the labels (each clothing type, color, and usage) is either true or false. This is partially incorrect because each item belongs to exactly one clothing type, base color, and usage category. Also, I was adding additional layers onto the pretrained ResNet50, which harmed training because I was stacking layers of random weights onto the pre-calibrated weights of the ResNet50. I decided to pivot to having 3 individual ResNet50 models, one classifying each category. Based on our research, which is discussed in more detail in our design report, having 3 ResNet50 models is within our range for total model size and classification time. I then saved each model onto my computer so I can later upload them to the Jetson.

I then changed my code to train 3 different models, which is located on our GitHub. I also downloaded the frontend repo, which is located here, and began familiarizing myself with its structure to start helping Gabriella with the frontend.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule.

What deliverables do you hope to complete in the next week?

I hope to help Riley figure out using the OpenCV library on the Jetson so we can start integrating our models. I also hope to help Gabriella make progress on the frontend.

Gabriella’s Status Report for 10/19

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I mainly worked on the Design Report with my team.

Aside from this, I set up a GitHub repo for the frontend, to keep our application separate from the backend. I also wrote and pushed code for the barebones frontend UI. Here is the demo image.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on schedule.

What deliverables do you hope to complete in the next week?

I hope to begin connecting the frontend to the backend and to AWS S3.

Alanis’ Status Report for 10/5

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I tried two different approaches to training a model. The first was a four-layer convolutional neural network architecture, which was able to achieve a training accuracy of just below 70%. The code for this is located here. I then pivoted to using the pretrained ResNet50 model to determine which approach would give the best accuracy.

I was able to achieve a training accuracy of just under 80% for clothing type and color classification, which is what our use-case requirements outlined, so we will be proceeding with the ResNet model. The training is almost complete. Certain clothing categories, like blazers and sweatpants, have fewer than 50 images, while other categories, like tops, have more than 500. I need to even this out to improve the accuracy of our model by ensuring the training data is balanced. This requires me to manually label images of blazers/sweatpants with their color, since I was unable to find any online datasets that have the clothing type images we need labeled with color. My goal is to get more than 100 images of each clothing type into the dataset, which requires more images of blazers and sweatpants. I think this will get our testing accuracy to 80%.
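A simple per-class count check supports this kind of balancing work; a sketch follows, where the one-folder-per-class layout and the .jpg extension are assumptions about how the dataset is organized, not a description of our actual directory structure.

```python
from collections import Counter
from pathlib import Path

# Count images per class folder under a dataset root.
def class_counts(dataset_root):
    counts = Counter()
    for class_dir in Path(dataset_root).iterdir():
        if class_dir.is_dir():
            counts[class_dir.name] = sum(1 for _ in class_dir.glob("*.jpg"))
    return counts

# Flag classes below the >100-images-per-class goal described above.
def underrepresented(counts, minimum=100):
    return {name: n for name, n in counts.items() if n < minimum}
```

Running this after each labeling session shows which categories (e.g. blazers, sweatpants) still need images before training resumes.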

When training, I realized that the model was able to achieve high accuracy for clothing type and color classification but really struggled to classify the usage of clothing (formal, casual, sports). This is due to the difference between classifying clothing type/color and something more subjective like usage. Machine learning models can recognize patterns in pixel values, which help classify color, or patterns in shape, which help classify clothing type, but usage is much more subjective and has less to do with the quantitative content of an image (the array of pixel values that the model processes). We have decided to still attempt to classify the usage of clothing but have the user validate our classification (and provide the correct usage if ours is incorrect), subject to change after discussing during capstone lab.

I also realized that creating predefined outfits isn't necessary, because our SQL queries handle the outfit generation and don't need to be based on predefined clothing-type combinations. Since this removes one of my tasks, our team decided that I could work on the frontend instead, since Gabriella has mostly been focusing on the backend and our classification model is done.

I also worked on peer reviews and our design report this week. 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule.

What deliverables do you hope to complete in the next week?

I hope to finish the design report. I am working on increasing the size of our dataset and believe I can finish it and the training by early next week.

Riley’s Status Report for 10/5

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I managed to boot up and get situated with the Xavier NX. I received the SD cards late last week, so this week I was able to go to a computer lab and set up the Xavier. Before that, though, I needed to flash the SD cards with the correct image to enable the Xavier to boot. The current version of JetPack (the NVIDIA embedded development environment) does not support the Xavier NX, so I found a previous version that does. I also downloaded an even earlier version in case the above one didn't work, but it was thankfully unneeded. After that, I followed the instructions for Windows in the Getting Started Guide, downloaded SD Card Formatter and Etcher, and used my roommate's SD card reader to flash the microSD card. That went well, and then I went to the computer lab to boot up the embedded system. This process also went smoothly, and it booted with minimal issues. Finally, I configured the settings and installed OpenCV using pip on the Xavier.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is now on schedule. Thankfully, the installation process was smooth, so even though some of the resources took longer to come in, it went well enough to catch up.

What deliverables do you hope to complete in the next week?

Hopefully the Arducam camera will come in soon so I can start connecting the camera's output to the Xavier NX, but if it doesn't come in, I will start working on the OpenCV code to process the camera output and try to make the Xavier NX talk to the database.