Status Reports

Riley’s Status Report for 12/7

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I assisted Allie in debugging the color model when running TFLite. I wrote a script that classifies 80 images in sequence and collates the results, making it easy to quickly determine whether one model improved over another. Unfortunately, we have not yet found a color model that functions properly on the Jetson, but the script was invaluable for testing and provided much of the timing data seen in the group status report. I also attempted testing with the SavedModel and Keras model formats, but the machine ran out of memory or simply crashed when I tried.
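
A minimal sketch of that kind of batch-evaluation script is shown below. The model path, image directory, input size, and CSV layout are placeholders (the real script differs in detail), and it assumes a float-input TFLite model:

```python
# Rough sketch of a batch TFLite evaluation/collation script.
# Paths, image size, and preprocessing are illustrative assumptions.
import csv, glob, time
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="color_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

results = []
for path in sorted(glob.glob("sanity_check/*.jpg")):
    img = Image.open(path).convert("RGB").resize((224, 224))
    x = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, 0)
    start = time.time()
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    results.append((path, int(np.argmax(probs)), time.time() - start))

# Collate predictions and per-image inference times into one CSV for review.
with open("color_results.csv", "w", newline="") as f:
    csv.writer(f).writerows([("image", "predicted_class", "seconds")] + results)
```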

 

As an alternative, I attempted to write a color detection algorithm based on the dominant object in the image. I managed to detect the outlines of the clothing, but the module I used didn't consistently grab the whole piece of clothing; it often grabbed only parts of it along with parts of the background, making the average color inaccurate. I was unable to find time to properly debug this during the chaos of the final week of classes, but we are confident that this alternative method is available to us if the model remains stubborn in its unwillingness to work with the Jetson's version of TensorFlow.

 

I also implemented a pushbutton and learnt about GPIO on the Jetson using its built-in Python GPIO API, Jetson.GPIO. This took longer than expected since I had to learn that the Jetson only accepts 3.3V input on its pins and does not consistently detect a rising/falling edge driven from above that voltage. I also had to determine that, since internal pull-up resistors are not supported by our edge device, I would have to add an external one to the small circuit. Once I made those adjustments, falling edges were detected consistently and I was able to integrate the button as the trigger for taking a picture.
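
A minimal sketch of the falling-edge trigger looks roughly like the following. The pin number and the capture function are placeholders, and an external pull-up to 3.3V is assumed since the internal pull-ups are not usable here:

```python
# Sketch of a Jetson.GPIO falling-edge pushbutton trigger.
# BUTTON_PIN and take_picture() are illustrative placeholders.
import Jetson.GPIO as GPIO

BUTTON_PIN = 18  # placeholder BOARD pin number

def take_picture():
    print("button pressed -- capture an image here")

GPIO.setmode(GPIO.BOARD)
GPIO.setup(BUTTON_PIN, GPIO.IN)  # external pull-up keeps the line at 3.3V

try:
    while True:
        # Block until the button pulls the line low (falling edge).
        GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)
        take_picture()
finally:
    GPIO.cleanup()
```

GPIO.wait_for_edge blocks the loop until a press; an add_event_detect callback would be the non-blocking alternative.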

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule.

What deliverables do you hope to complete in the next week?

I hope to help implement the color detection algorithm/test models to check for improvement. 

I also hope to make a stand/holder for the camera to allow for more intuitive use of the camera. 

Along with those items, I intend to make the pushbutton circuit smaller, explore options to miniaturize the circuit, and complete our final week of tasks. 

Team Status Report for 12/7

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk that could jeopardize the success of the project is our color model. The original and quantized color models perform fine on local machines; however, the quantized color model performs very poorly on the Jetson. We are still unsure of the specific reason for this, since the quantized clothing type and usage models work fine on the Jetson with negligible accuracy losses. We are managing this risk by trying to find the root cause of the issue so the color model works. If this fails, we will pivot to a new technique that uses the pixel values of the clothing to determine the color. This requires object detection to crop the input photo to just the clothing so that extraneous parts of the image do not affect the pixel values we evaluate, which has already been implemented. The remaining work involves predicting a color from the pixel values.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes were made.

Provide an updated schedule if changes have occurred.

Our schedule has not changed.

This is also the place to put some photos of your progress or to brag about a component you got working.

Photos of our progress are located in our individual status reports. 

List all unit tests and overall system test carried out for experimentation of the system. List any findings and design changes made from your analysis of test results and other data obtained from the experimentation.

Classification Tests

For the classification unit tests, the non-quantized models were evaluated using our validation and sanity check datasets. This was done with 3 different trained models for each classification category (ResNet50, ResNet101, ResNet152) to determine which architecture performed the best. Specific graphs for this are located in Alanis' individual status report, but the best-performing architecture was ResNet50 for all of the classification models. The validation datasets are 20% of the total dataset (the other 80% is used for training) and produced a 63% clothing type accuracy, 80% color accuracy, and 84% usage accuracy with the ResNet50. The sanity check dataset, which is made up of 80 images that represent the types of images taken by Style Sprout users (laid/hung/worn on a single color/pattern background, good lighting, centered), produced a 60% clothing type accuracy, 65% color accuracy, and 75% usage accuracy with the ResNet50.

 

We also ran some other tests to determine how lighting and orientation affect the accuracy of the model. This included determining how image brightness, x and y translations, and rotations affect the model accuracies. Graphs and results are located in Alanis’ individual status report. 

 

The findings from the classification unit tests helped us determine the best model architecture (ResNet50) to use. Additionally, since the accuracies on our sanity check dataset did not meet our use case requirements, we changed our design to include a closet inventory page so users could change any incorrect labels to mitigate the low classification accuracy.

 

S3 Upload Time Tests

For the S3 upload time tests, we uploaded 80 images to S3, recording a timestamp before and after each upload. We took the upload time to be the difference between these two timestamps, and the results were overall positive. The upload time was between 0.15 and 0.40 seconds for the 80 uploads we tested, and the consistency and speed of the uploads were welcome news, as they let us more reliably meet our timing requirements for classification. From these test results we determined that no major design changes needed to be made to accommodate S3 image storage.
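
A minimal sketch of this timing test is below; the bucket name, key scheme, and image directory are placeholders rather than our actual configuration:

```python
# Sketch of the S3 upload timing test using boto3.
# Bucket name and key naming are illustrative placeholders.
import glob
import time
import boto3

s3 = boto3.client("s3")
times = []
for i, path in enumerate(sorted(glob.glob("test_images/*.jpg"))):
    start = time.time()
    s3.upload_file(path, "style-sprout-images", f"uploads/{i}.jpg")
    times.append(time.time() - start)

print(f"min={min(times):.2f}s  max={max(times):.2f}s  avg={sum(times)/len(times):.2f}s")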

Database Upload Time Tests

The database upload time was tested in much the same way as the S3 upload time, by sending 80 images through the protocol and using timestamps to record timings. This test gave an average of 0.09 seconds per database upload, which is well within our use case requirements and doesn't necessitate a change to our design. There is a slight discrepancy between these values and the values I presented; I believe this is simply the difference between sending to a server hosted on the same computer and a server hosted on an adjacent computer on the same network. Either way, the values fit well within our use case requirements.

Push Button Availability test

Push button availability was tested by pressing the push button 20 times and ensuring that, for each press, exactly one image was taken by the camera. This test passed, confirming that the pushbutton was successfully integrated into the existing system.

 

Jetson Model Accuracy Test

The accuracy of the models was tested on the Jetson with the same dataset Alanis used as the “sanity check” dataset. We ran the models on the Jetson over these 80 images and recorded accuracies and timings. The results were in line with the models run by Alanis on her version of TensorFlow, with the exception of color: we got a clothing type accuracy of 65% and a usage accuracy of 67.5%, but when testing color on the Jetson using the “sanity check” dataset we got an accuracy of 16.25%. I believe my earlier manual testing was biased towards colors that the model classified well; once we used a dataset with a larger selection and more even distribution of colors, it became clear that the color accuracy was very low. This necessitated many attempted changes to the model, using different ResNet architectures, SavedModel formats, and other methods, but the accuracy either remained unchanged or the result was unfit for our use case.

Because of this issue, we are looking into a pixel-based classification algorithm that would find the average color of a piece of clothing and classify it. We hope that with this method our timing remains acceptable and the accuracy is higher.

We also obtained data on the time it currently takes to classify each article of clothing (an average of 1.69 s). Classification is only one section of the scanning process, since we still need to send the image to S3 and upload the result to the database, but as the timings above show, even with those steps included we are below our use case requirement of 3 s. (There is also a small delay not included in these timings: a 1.5 s pause that displays the picture the user just took, so they can correct any errors for subsequent pictures.)

Combining this timing data with the S3 and database upload timings gives the average time for the total process, not including delays for the user to prepare their clothing. Taking roughly 1.69 s for classification, about 0.25 s for the S3 upload (within the observed 0.15–0.40 s range), and 0.09 s for the database upload, we determined that the average time was 2.03 s for the entire process, which is well below our use case requirement of 3 s.

Outfit Generation Speed Test (Backend/Frontend integration test)

The outfit generation speed test was done by timing how long it took from pressing “Generate Outfit” to the outfit being displayed on the app. We did 25 trials. The longest amount of time it took for generation was 1.7s and the average was 0.904s, both of these are below our use case requirement of 2s.

Frontend/Backend Tests

We tested the functionality and safety of our settings popup page by ensuring that when users update either the setting for how many times a piece of clothing can be used before it is dirty or their location setting, the new values they provide are validated. Validation of the uses-before-dirty value is done on both the frontend and the backend. Validation of the location is done only on the backend, by calling the OpenWeatherMap API to see whether the location exists. If either input is flagged as invalid in the backend, we send an exception to the frontend, which then displays an error message alerting the user that their inputs were invalid. Both fields must be valid for the update to go through.
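
A minimal sketch of this backend validation flow is below; the route path, field names, thresholds, and API-key handling are illustrative assumptions, not our exact implementation:

```python
# Sketch of settings validation on the backend with FastAPI.
# Endpoint path, field names, and key handling are illustrative assumptions.
import os
import requests
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class SettingsUpdate(BaseModel):
    uses_before_dirty: int
    location: str

def location_exists(city: str) -> bool:
    # Ask OpenWeatherMap for the city; a 200 response means it exists.
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": os.environ["OWM_API_KEY"]},
        timeout=5,
    )
    return resp.status_code == 200

@app.post("/settings")
def update_settings(update: SettingsUpdate):
    if update.uses_before_dirty < 1:
        raise HTTPException(status_code=400, detail="Uses before dirty must be positive")
    if not location_exists(update.location):
        raise HTTPException(status_code=400, detail="Unknown location")
    # ... write the validated values to the settings table ...
    return {"status": "ok"}
```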

 

We tested the functionality of our generate outfit page by ensuring that all outfits are generated “correctly”; that clicking the generate outfit button causes a new outfit to appear; that clicking the dislike button updates the disliked-outfit table in the database and causes a new outfit to appear; and that clicking the select outfit button updates the number of uses for all items in the outfit, updates the user preferences table in the database, and brings the user back to the home page. If an outfit is requested but there is not enough clothing available to generate one, a message is displayed telling the user they need to scan in more clothing or do laundry to generate an outfit for that request.

 

A correctly generated outfit is defined below (a minimal rule-check sketch follows the list):

  • Cold Locations: Includes 1 jacket (if the user has an available jacket). May include a sweater, cardigan, etc. (if the user has them available, with some chance). Always has a top/bottom outfit OR a one-piece outfit (no shorts or tanks).
  • Neutral Locations: Excludes jackets but may include sweaters, cardigans, etc. (if the user has them, with some chance). A top/bottom outfit OR a one-piece outfit is also always returned.
  • Hot Locations: A top/bottom outfit OR a one-piece outfit is always returned. No sweaters, jackets, hoodies, etc. are ever returned.
  • All clothing items generated in outfits must match the requested usage type (casual/formal) and be clean.
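
A minimal sketch of these rules as a post-generation check is below. The item structure and category names are simplifying assumptions, and some details (such as the shorts/tanks exclusion for cold locations) are omitted:

```python
# Sketch of an outfit-correctness check based on the rules above.
# Item fields ("type", "usage", "clean") are illustrative assumptions.
OVERWEAR = {"sweater", "cardigan", "blazer", "hoodie"}
HOT_EXCLUDED = OVERWEAR | {"jacket"}

def outfit_is_valid(outfit, weather, usage):
    types = {item["type"] for item in outfit}
    # Every outfit needs a top/bottom pair or a one-piece item.
    has_base = ("one_piece" in types) or ({"top", "bottom"} <= types)
    if not has_base:
        return False
    if weather == "hot" and types & HOT_EXCLUDED:
        return False
    if weather == "neutral" and "jacket" in types:
        return False
    # Every item must match the requested usage and be clean.
    return all(item["usage"] == usage and item["clean"] for item in outfit)
```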

 

We tested the functionality of the privacy notice feature by ensuring that users who have not accepted the notice will have it on their page and can only use the app’s features after accepting it. We tested that accepting the notice is saved for the user. We also tested to ensure that an error message pops up to alert the user if they try to bypass the notice without accepting it. 

 

We tested the functionality of the laundry feature by ensuring that selecting the “do laundry” button causes all items to be marked as clean in the database. We also tested that when an outfit is generated and selected, all items in the outfit have their number of uses increased in the database. If this number is higher than the user-set value for “number of uses until dirty”, this should update the item to be dirty. We also validated that dirty items are not part of generated outfits.

 

We tested the functionality of our closet inventory page by ensuring that no matter the number of clothing items in the database, at most 6 appear on each page. We also tested that each clothing item in the database appears on exactly one of the closet pages. This was done with 0, 3, 6, 18, and 20 images/entries in the database. We also tested the scrolling functionality by ensuring that users could not scroll before the first page and could not scroll to the next inventory page if there were no more clothes to show. We also tested the popup for changing labels by ensuring that if the GET request for an image's current labels failed, we would display a relevant error message, whether we were unable to connect to the backend or there was an HTTP error in the response from the backend. We did this by leaving the server stopped so that the frontend could not connect to it, and by sending various HTTP error codes instead of the requested labels from the backend to the frontend. We also checked that when the user presses submit after changing the labels, a POST request with the changed labels is sent to the backend and the relevant fields in our database are updated. This was done by changing the labels for different clothes on each page and ensuring that our database was updated with the new labels provided.

 

We tested the user-friendliness of our app by giving 3 people a background on what our app does and how to use it. We then gave them the app and hardware to use Style Sprout and timed how long it took them to use different features. We also had them rate the app out of 10 based on intuitiveness and functionality. We defined intuitiveness as how easy it was to understand the features, and functionality as how easy it was to actually use them.

For the results of these tests, we met our intuitiveness requirements with an average 9/10. Time-wise we also passed the tests, with all features being doable within 10 seconds. However, we did not meet our functionality requirements and got an average of 5/10. Users explained that they struggled specifically with taking images with the camera, and struggled with centering their clothing in the images, leading to a decreased accuracy for classification. As a result, we are adding both a button and a camera stand to our final product to make this process easier for users. After adding these features, we hope to run this test again.

Gabriella’s Status Report for 12/7

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I began working on testing. I did many trials to test the outfit generation speed, as well as functionality of all the features I worked on. I also met up with my team to do some integration testing of the entire system.

I added the outfit-disliking feature that was suggested during our interim demo. Here is a picture of the new dislike button. This button stores the information of the outfit and saves the outfit to a table called outfit dislikes in the database. This table saves the IDs of both the top and bottom of the outfit (or just the top if the outfit is a dress) as well as the number of times the outfit has been disliked. The number of dislikes each outfit has received, as well as user preferences, are both incorporated into the outfits generated for users.

I also added a delete button to our item-editing dropdown on the closet page. I thought it would make our application more robust if users choose to get rid of certain items, or if they mistakenly added a bad image and want to take a new one. This deletes the item from both the database and S3.

I added a privacy notice table in the database as well as a privacy notice on the front end. The table keeps track of whether the user has accepted the notice or not. The privacy notice will appear in the app until users accept it. If the user tries to submit without accepting it an error will appear, so that users cannot use the app without acknowledging the notice.

I also began working on the poster assignment. I added the Product Pitch sections, and added some images of the application using the mockupphone website to make the screenshots look more professional.

For the final report, I wrote my first draft of pseudocode for the outfit generation algorithm. I want to make this as clear as possible, so I will be updating the pseudocode more next week based on feedback.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule.

What deliverables do you hope to complete in the next week?

This next week, I hope to finish the final report, poster, and video. I also want to make some final updates to the application, like creating a better error message for when the user does not have enough clothing to create an outfit for their request, and fixing the outfit dropdown in the closet to have the correct current number of uses instead of always having 0.

Alanis’ Status Report for 12/7

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I finished the systematic testing of our classification models by testing how brightness, x/y translations, and rotations affect the accuracy of our classification models. I wrote a script to do this and also created some graphs.

I also evaluated which architecture out of ResNet50/101/152 would be best for the color and clothing type models by weighing correct predictions against close predictions. A close prediction for clothing type would be guessing jeans instead of trousers, or a dress instead of a jumpsuit. A close prediction for color would be guessing beige instead of white, or black instead of grey. In general, I considered a clothing type guess close if it was in the same category (tops, overwear, bottoms) and had a similar shape/usage/weather (jeans and trousers can both be casual or business casual; a dress and a jumpsuit are both suitable for warm and neutral but not cold weather). I considered a color guess close if the actual color could be reached by lightening or darkening the predicted color, which lighting can do (you can reach white by lightening beige, which is already quite light, and black by darkening grey, but you cannot reach blue by lightening or darkening red). The exact criteria for close guesses will be outlined in our final report; they are quite long, so I will not include them here.

Also, this wasn’t done for the usage model since only the ResNet50 architecture was able to converge during training.

I also debugged our .tflite color model. We have 3 classification models (clothing type, color, and usage) that we converted to .tflite for inference on the Jetson. They are all ResNet50 architectures trained in the same way (mostly the same data but with different labels), and they all perform at an accuracy of 60-70% before conversion to .tflite. After running inference on the converted .tflite models on the Jetson, the clothing type and usage models perform with the same accuracy; however, the color model's accuracy took a very large hit, and it mostly predicts blue, brown, and black.

We verified that the camera is not causing the issues since we are testing the models using pretaken images. I tested the problematic .tflite model on my computer and the accuracy is the same as the original. I verified this using 2 different TensorFlow versions, the one I used to train and the one running on the Jetson. I also wrote some code for Riley to try the original model optimized for the Jetson with TensorRT; however, this was very slow. I also sent Riley .tflite versions of the ResNet101/152, which also performed with low accuracy. We also tried training and converting a model to .tflite on the same TensorFlow version as the Jetson, which still did not address the issue.

We decided to also pursue an alternative way of determining the color by cropping the clothing image around the largest object and determining the most prominent color in the cropped image. I wrote code to do this, which is located here. We will continue to try and fix the .tflite file and will see which option works better before the demo.
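
A rough sketch of this fallback approach is below; the thresholding choices, cluster count, and the color-naming step are illustrative assumptions, and the actual code linked above differs in detail:

```python
# Sketch of the crop-around-largest-object + dominant-color fallback.
# Assumes a light background behind darker clothing; thresholds are placeholders.
import cv2
import numpy as np

def dominant_color(image_path, k=3):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Threshold and grab the largest contour, assumed to be the clothing item.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    crop = img[y:y + h, x:x + w].reshape(-1, 3).astype(np.float32)
    # k-means over the cropped pixels; the largest cluster center is the dominant color.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(crop, k, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)
    dominant = centers[np.bincount(labels.flatten()).argmax()]
    return dominant[::-1]  # BGR -> RGB; mapping this to a color name is a separate step
```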

I also worked on our poster.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule.

What deliverables do you hope to complete in the next week?

I hope to fix the color model and fine-tune the pixel-based color prediction code by adding the determination of the main color from the most prominent RGB value.

Gabriella’s Status Report for 11/30

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I made many changes to the project this past week.

First, I set up the functionality for getting images from S3. I started by creating an AWS role with permissions for accessing the private S3 bucket with sensitive user data (images of clothing). I also installed the Boto3 module for S3. I used this module on the backend to fetch images from the S3 bucket and create signed URLs that I sent to the frontend. These signed URLs allow us to display the clothing images to users on the frontend in a secure way. I tested this feature by uploading some test clothing images to the bucket and using the different features of the application that show images.
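
A minimal sketch of generating such a signed URL with Boto3 is below; the bucket name, key, and expiry are placeholders rather than our actual settings:

```python
# Sketch of generating a presigned URL for a private S3 object.
# Bucket name, key, and expiry are illustrative placeholders.
import boto3

s3 = boto3.client("s3")

def signed_image_url(key: str, expires_seconds: int = 3600) -> str:
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "style-sprout-images", "Key": key},
        ExpiresIn=expires_seconds,
    )
```

The frontend can load the returned URL directly until it expires, so the bucket itself never has to be public.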

I also fully reworked the outfit generation algorithm on the backend. I worked with Alanis to develop a more clear definition of what a “correctly” generated outfit would be based on different user requests. For example, for a casual request in a warm location, the new algorithm always generates a top/bottom or one piece (dress/jumper) outfit. It will never return a jacket or sweater. I also updated the algorithm to take user preferences into account 20% of the time.

I updated the select outfit feature on the backend and frontend to be compatible with dresses, jackets, and overwear (overwear in our application is defined as any sweaters, cardigans, blazers, or hoodies). I also got rid of the athletic usage type from the backend and frontend logic as it made more sense for the ML model to identify clothing as casual/formal instead of athletic, casual, or formal.

Alanis added the changes for the new closet inventory page on the frontend, and I added the backend functionality to paginate through the clothing items in the closet as well as edit clothing item classifications on the database.

I added a new location field to the database settings table, and made it possible for users to update their location from the frontend in settings. If the user inputs invalid locations or an invalid number for “Number of uses before dirty”, I added an error popup so users know that their change did not go through.

I also updated the backend to send the image url given by the Jetson Nano with the clothing item to the database table. Additionally, on the frontend, I updated the outfit generation page to display the clothing items in different formats based on how many items are in the outfit. This change was made for outfits to be more clear to the user.

This week I also ran into many issues with merging on git, and discovered that this entire semester I had been committing backend code under my local git name instead of my GitHub account. This caused a majority of my commits, aside from my newer ones, to not show up in contributor insights. I have now fixed this for future changes, but it is still inaccurate for my past ones, so for an accurate view of my backend code changes, please look through the individual commits under my name or GitHub account instead of the contributor insights.

Overall, this week I finished all of my goals for the backend/frontend and have tested all of my features for basic functionality. I also set up a testing plan for more specific insights like intuitiveness of the application and the speed of outfit generation. I also added my testing information to the final presentation.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule.

What deliverables do you hope to complete in the next week?

This next week, I hope to finish up all of my testing and start working on the final report, poster, and video.

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

I had to learn how to use FastAPI to set up the backend. I learned this by reading through the documentation online and following a tutorial. I also learned how to use Flutter to develop the frontend, which I also learned by following a tutorial and reading documentation.

I followed both tutorials for FastAPI and Flutter during the beginning of the semester and documented my experience doing this in previous status reports. I felt that getting an initial exposure through the tutorials was very effective for me, as I was able to get a basic grasp of the frameworks. I also found it helpful to look up documentation later on as I had more specific goals and features I wanted to implement.

I learned how to create, manage, and query a MySQL database in a class I am taking this semester, 70455 Data Management Fundamentals. I feel that this class helped me get a thorough understanding of how to set up and use the database securely and effectively.

Riley’s Status Report for 11/30

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

These past two weeks, we got TensorFlow working on the Jetson. We ran into all sorts of issues, catalogued here, here, here, and here. Eventually, I think the issue was that TensorFlow simply wasn't supported on the version of JetPack we had, since we kept running into tricky compilation errors with protobuf or with some other obscure module that we typically never see when compilation works. So we cleared our SD card, flashed a supported version of JetPack (5.12), and then re-downloaded JetPack and all of the other modules/installations that we needed. And this worked!

We then had an issue where the models under base TensorFlow used too much memory and ran very slowly on the Jetson, and we kept running out of memory whenever we tried to run all three models. To fix this, Allie converted the models to their TFLite versions, which run quickly and without exceeding our memory allowance.

I also added functionality on the backend for the user to start the scanning process themselves, preventing the need to do command line input.

I also created an S3 bucket and added functionality to send images to it; these can then be retrieved by the frontend to show to users.

Finally, since the presentation is coming up, I wrote the script for the presentation, helped with a few slides, enacted some of the testing we wanted to show in the demo, and practiced for the final presentation.

The testing that I conducted was for the classification and speed of the model on the Xavier NX.

I systematically tested classification on the Jetson Xavier NX using the following process: I tested 80 images, with at least 5 items of each clothing type, 5 items of each color, 4 formal items, and 7 casual items, and with at least 2 out of the 5 of each clothing type having bad framing and/or bad lighting. Please let us know if this should be changed to make it a stronger approach.

The results from this testing were 58% for clothing type, 46% for color, and 63% for usage. We believe this might be because these pictures are taken by users, so it is much harder to get the item framed and centered as the model expects, and the lighting is also inconsistent with what the model expects. Additionally, when we made the model lightweight, it lost a few percentage points due to the quantization needed to decrease the model size.

When testing for speed, we used the same 80 images from the classification test to keep the comparison fair and accurate, and I added timestamps before and after each of the major parts (classification, upload to S3, send to database). The time taken for each section is the difference between its end and start timestamps, and adding the sections together gives the total time. The results were as follows:

Section | Maximum Time Taken (s) | Average Time Taken (s)
Classification | 2.12 | 1.79
Upload to S3 | 2.62 | 0.50
Upload to database | 0.06 | 0.02
Total | 4.80 | 2.30

As we can see, this part of the testing went very well, and we managed to get the speed well below our use case requirement!

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

I found it necessary to learn how to work with an embedded computer. Previously I wasn't really involved with hobby computers, and my experience amounted to using Arduino for a few minor school projects. Needless to say, setting up the environment on this was a huge learning experience, as only certain versions are supported and much care must be taken to ensure that installations don't destroy dependencies. Navigating those minefields and familiarizing myself with developing primarily in Linux was new knowledge. Luckily, NVIDIA had good resources for learning about the supported systems and modules, which I used along with Google and help from my teammates.

Additionally, I learnt the basics of how to use S3, including creating, sharing, and sending to a bucket. This was also a cool experience, as it's something used in industry and I had never had a chance to learn it. Amazon has a bunch of really good resources for getting started, and questions to Google, ChatGPT, and my partners were more than enough to fill the gaps.

I also learnt more about SQL and FastAPI, which is immediately relevant since I will be taking a database systems course next semester. I had learnt some of it before, but watched a few videos to refamiliarize myself.

 

Finally, I learnt how to use OpenCV and its methods to interface with a single web camera. Without OpenCV, it would have been much more difficult to interface with the camera, and it would have required us to go very deep into the embedded weeds. After just watching a few videos and reading a few articles, we were able to read from our camera in about a week or two.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule.

What deliverables do you hope to complete in the next week?

For next week, push button functionality needs to be added. This is important, but in the worst-case scenario it would be alright if users interacted with the keyboard, since almost everyone is familiar with one. Another thing that needs to be added is a casing/box attaching the camera to the light to make it more maneuverable and pleasing to the eye for the demo. Along with those items, touch-ups, the final report, and the demo are on the menu.

One thing that we will also look at is some sort of alternative to classify color, since 46% accuracy is above chance but not ideal for us or for users. 

Alanis’ Status Report for 11/30

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I fixed our usage model by compiling a new dataset. I manually labeled about 5000 images of clothing items like dresses, tops, jackets, etc. which could be either casual or formal. I then trained a ResNet50 model on this dataset and was able to achieve an 84% validation accuracy and a 65% “sanity check” accuracy. The code for the manual labelling is here; it basically displays an image and lets me mark it as casual or formal. The code for the usage training is here.
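
A minimal sketch of that kind of labelling loop is below; the directory, key bindings, and output CSV are illustrative placeholders, and the actual script linked above differs in detail:

```python
# Sketch of a manual casual/formal labelling loop with OpenCV.
# Paths and key bindings are illustrative assumptions.
import csv, glob
import cv2

rows = []
for path in sorted(glob.glob("unlabelled/*.jpg")):
    cv2.imshow("label: c = casual, f = formal, q = quit", cv2.imread(path))
    key = cv2.waitKey(0) & 0xFF  # wait for a keypress on each image
    if key == ord("q"):
        break
    # In this sketch any key other than 'c' counts as formal.
    rows.append((path, "casual" if key == ord("c") else "formal"))

cv2.destroyAllWindows()
with open("usage_labels.csv", "w", newline="") as f:
    csv.writer(f).writerows([("image", "usage")] + rows)
```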

I also worked in-person with Riley to debug the Jetson issues with Tensorflow. The issue ended up being an incorrect Jetpack version. I then wrote some code on the Jetson to run inference on the 3 classification models. This involved converting my .keras models to the “saved model” format, which cannot be trained further, and then to the TFLite format, which allows much faster inference on the Jetson.
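
For reference, the .keras → SavedModel → TFLite path generally looks something like the sketch below; the file names are placeholders, and the exact export call depends on the TensorFlow/Keras version, so this is an illustration rather than the exact code I ran:

```python
# Sketch of converting a trained Keras model to SavedModel and then TFLite.
# File names are placeholders; the export call may differ by TF/Keras version.
import tensorflow as tf

model = tf.keras.models.load_model("usage_resnet50.keras")
tf.saved_model.save(model, "usage_saved_model")  # inference-only SavedModel

converter = tf.lite.TFLiteConverter.from_saved_model("usage_saved_model")
tflite_model = converter.convert()
with open("usage_resnet50.tflite", "wb") as f:
    f.write(tflite_model)
```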

I also added some frontend/backend features. I wrote a function which takes a city name as a string and returns whether the weather is cold, neutral, or hot. This utilizes the OpenWeatherMap API. I also changed the frontend code to be able to display up to 4 pieces of clothing—top/dress/jumpsuit, bottoms, jacket, and overwear (hoodie, sweater, etc.). This needed to be done due to a change in our outfit generation algorithm. I also added a frontend page which displays each piece of clothing in a user's closet (6 per page with the option to scroll through pages). When a piece of clothing is clicked, it shows the user the current classification labels and allows them to be changed if necessary. I had to make some changes to the backend APIs as well for this feature. These features took a few days and various commits, but the linked commits are the ones with the finished features.
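
A minimal sketch of the city-to-weather-category helper is below; the temperature thresholds, unit choice, and API-key handling are illustrative assumptions rather than the real function's exact values:

```python
# Sketch of a city -> cold/neutral/hot helper using OpenWeatherMap.
# Thresholds and key handling are illustrative assumptions.
import os
import requests

def weather_category(city: str) -> str:
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": os.environ["OWM_API_KEY"], "units": "imperial"},
        timeout=5,
    )
    resp.raise_for_status()
    temp_f = resp.json()["main"]["temp"]
    if temp_f < 50:       # placeholder cutoff for "cold"
        return "cold"
    if temp_f < 70:       # placeholder cutoff for "neutral"
        return "neutral"
    return "hot"
```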

I also worked on our final presentation by creating the slides and writing information for Riley to turn into a script.

I also began the systematic testing of our classification models by writing code to help determine how the accuracy changes when the brightness of the image is increased or decreased.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule.

What deliverables do you hope to complete in the next week?

I hope to fix the color model and finish the systematic testing for brightness (brightness increased by 1.5/2 and decreased by 1.5/2).

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

We recognize that there are quite a few different methods (i.e. learning strategies) for gaining new knowledge — one doesn’t always need to take a class, or read a textbook to learn something new. Informal methods, such as watching an online video or reading a forum post are quite appropriate learning strategies for the acquisition of new knowledge.

I had to learn how to use the TensorFlow framework to train a model. I used the online documentation for TensorFlow, as the library is very well documented. I also had to learn how to use the pandas library to process datasets; I mainly relied on this article from W3Schools. I also had to determine the best model architectures to use. I relied on this paper, which discusses the performance of different architectures on the Jetson.

I also had to learn how to write backend APIs in the FastAPI format. I learned this from Gabriella. I also had to learn how to write Flutter code. I did this by googling “how to create _ in flutter” for each feature I wanted to make and following the forum posts or articles which came up. 

Whenever I hit roadblocks, I usually googled the error I was encountering and followed the advice of forum posts. For questions about the backend/frontend, I usually asked Gabriella first to see if she had encountered them.

Team Status Report for 11/30

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk at the moment is the classification accuracy of images from the camera. We are getting a much lower color classification accuracy from images taken on the camera than our validation and “sanity check” datasets. We haven’t pinpointed the exact reason for this since the clothing type and usage accuracies from the camera are similar to our validation and “sanity check” datasets. We are planning to retrain our color model on a larger dataset, which may require manually labeling more images. If this doesn’t address the accuracy, we may try a different approach which does not require a classification model by trying to determine the most common pixel values in the center of the image to determine the base color of the clothing.

Another issue may be our classification model, since the accuracy on the camera is a bit lower than the validation and “sanity check” dataset accuracies for “long” clothing items, like dresses, lounge pants, and trousers. We thought this may have been due to a confusing background with a doorway and a bright pillow, which could have created issues. This confusing background was only used because Riley wasn't able to find a large enough solid-color background at his home over break. However, once we return to Pittsburgh, we will have one and can rerun the tests for “long” clothing items.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes were made.

Provide an updated schedule if changes have occurred.

Our schedule has not changed.

This is also the place to put some photos of your progress or to brag about a component you got working.

Photos of our progress are located in our individual status reports.

Riley’s Status Report for 11/16

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I fixed the camera bug that was preventing OpenCV from displaying images properly. Additionally, I scaled the image to a resolution appropriate for the model, and I can now continuously detect keyboard input to take a picture 4 seconds after the input is triggered. The OpenCV issue ended up being that I needed to explicitly state what frame rate and resolution I was using, along with adding a short delay so OpenCV could actually display the image.
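
A minimal sketch of that fix is below; the device index, resolution, frame rate, and delay are illustrative placeholders rather than the exact values used:

```python
# Sketch of the camera fix: set resolution/FPS explicitly and give
# cv2.waitKey() a delay so the window has a chance to render.
import cv2

cap = cv2.VideoCapture(0)                      # placeholder device index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)        # placeholder resolution
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)                  # placeholder frame rate

ok, frame = cap.read()
if ok:
    cv2.imshow("preview", frame)
    cv2.waitKey(1000)  # without a delay, the window never gets drawn

cap.release()
cv2.destroyAllWindows()
```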

Additionally, on Saturday I worked with Gabi to set up the Xavier NX with the database and API. We configured the hardware, finalized endpoints to send and store data, and connected the server to the embedded computer. Currently I have gotten the device to send some dummy information to the backend server, which can then issue a SQL query to add the items to the database.

Below are images of my successful POST request via FastAPI. The left image shows the testing arguments I passed in, successfully mirrored back as a way to test the endpoint. The right image shows the FastAPI success, which is the little green OK at the bottom. If you see a couple of red failures above it, those issues were fixed 🙂

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Progress is on schedule and nearly ready for the demo. Some more testing will be done between now and Wednesday but no new major features need to be added until the demo.

What deliverables do you hope to complete in the next week?

Next week I hope to set up a free trial with Amazon Educate and through that gain access to S3 file storage. With that we should have more capabilities in sending and storing images to display as a visual for the user. Additionally I hope to be able to detect the physical button input and use that as a trigger to take a picture. 

Verification Tests:

I tested the functionality of the camera by checking whether it could display consistently across different boot-ups and with the different frame rates and resolutions available. With the GUI, the camera displayed consistently and didn't experience any issues during testing.

Additionally, I tested the OpenCV code by running fifty different image capture requests and ensuring that, each time, the event triggered as expected and the image taken by OpenCV was displayed. This worked 100% of the time. Further testing will be done once the model is included, to check whether results can be generated within 3 seconds for a sample dataset of fifty images taken by the camera. However, since not all of the models are ready and on the Jetson, this requirement will be tested in the coming weeks.

Gabriella’s Status Report for 11/16

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I worked on a few different changes for the backend/frontend and began to prepare for testing.

I added API routes for the do laundry feature and select outfit feature. These link to functions in the backend that will update the database with information about how many times an item has been used and what kind of outfits users like to wear.

I worked with Alanis to test the uses until dirty feature, and made a few changes together to complete the feature.

I also worked with Riley to help him set up the communication between the Jetson and the database. I helped him plan out the overall framework of how it would work: he sends a POST request from the Jetson, and a function in the backend parses the Jetson's information into data that can be added to the database. I also wrote out steps on how to use FastAPI to test the parsing, and I wrote the API route for the POST request along with pseudocode for the format needed to add the data to the database.
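
A minimal sketch of what such a POST route can look like is below; the route path, field names, and the database helper are illustrative assumptions rather than the exact route I wrote:

```python
# Sketch of a Jetson -> backend POST route in FastAPI.
# Field names, path, and the insert helper are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ClassifiedItem(BaseModel):
    clothing_type: str
    color: str
    usage: str
    image_url: str

@app.post("/closet/items")
def add_item(item: ClassifiedItem):
    # Parse the Jetson's payload and insert it into the closet inventory table.
    # insert_into_closet_inventory(item)  # placeholder for the real DB call
    return {"status": "added"}
```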

I linked the backend functionality to the frontend for updating user preferences and item usage.

I also wrote another function in the backend to add a new item to the closet_inventory table in the database, and added more clothing to the database and images folder for testing.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

With the demo coming up, I changed the priorities of some tasks: I prioritized helping to link the Jetson to the backend and adding more clothing items to the database over making the final updates to the outfit generation algorithm, since the current algorithm is not ideal but does work.

What deliverables do you hope to complete in the next week?

I want to update the outfit generation algorithm.

Verification Tests:

I tested the accuracy of the backend by going to all of the API endpoints and making sure they produced the expected results for different requests. I also tested query accuracy by ensuring the responses from the database reflected the user's wardrobe accurately. I also tested error handling for invalid API requests, and for requests to the backend for information in the database without database credentials.

For verification of the database I want to confirm data consistency and reliability. This includes making sure classified items are parsed and stored into the database correctly, images are referenced accurately through the proper file paths, and wear counts update as expected when laundry is performed or outfits are selected. I also want to do testing for the consistency of selecting outfits, so that the update for user preferences and usage both get committed together or neither get committed when there are errors.

Finally, for verification of the frontend I will ensure that the frontend successfully interacts with the backend and displays data accurately. This will include testing user actions like generating outfits, and toggling laundry. These features will also be tested for speed with a goal of successfully triggering within 2 seconds. Additionally, the frontend will be tested for robustness under different scenarios such as failed API calls and situations with invalid data being given or returned. This should lead to clear feedback to users instead of crashing.