Category: Uncategorized

Riley’s Status Report for 11/30

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

These past two weeks, we got TensorFlow working on the Jetson. We ran into all sorts of issues, catalogued here, here, here, and here. Eventually, I think the issue was that TensorFlow simply wasn’t supported on the version of JetPack that we had, since we kept running into tricky compilation errors with protobuf or with some other obscure module that we normally never see when compilation works. So, we wiped our SD card, flashed a supported version of JetPack (5.1.2), and then re-downloaded TensorFlow and all of the other modules/installations that we needed. And this worked!

We then had an issue where the models under base TensorFlow used too much memory and ran very slowly on the Jetson, and we kept running out of memory whenever we tried to run all three models. To fix this, Allie converted the models to their TFLite versions, which ran quickly and without exceeding our memory allowance.
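For reference, here is a minimal sketch of the kind of Keras-to-TFLite conversion Allie performed; the model file name and the use of default post-training quantization are my assumptions, not her exact settings:

```python
# Hedged sketch: convert a saved Keras model to TFLite with default
# post-training quantization. "clothing_type_model.keras" is a placeholder.
import tensorflow as tf

model = tf.keras.models.load_model("clothing_type_model.keras")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrinks the model at a small accuracy cost
tflite_model = converter.convert()

with open("clothing_type_model.tflite", "wb") as f:
    f.write(tflite_model)
```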

I also added functionality on the backend for the user to start the scanning process themselves, removing the need for command-line input.
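As a rough illustration, a scan-start endpoint on a FastAPI backend could look like the sketch below; the route name and the pipeline call it would trigger are hypothetical, not our exact implementation:

```python
# Hedged sketch of a backend route that lets the user start a scan from the
# app instead of the command line. The route name is a placeholder.
from fastapi import FastAPI

app = FastAPI()

@app.post("/scan/start")
def start_scan():
    # In the real backend this would kick off the camera capture and
    # classification pipeline on the Jetson (helper call omitted here).
    return {"status": "scan started"}
```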

I also created an S3 bucket and added functionality to send images to the bucket, which the frontend can retrieve to show to users.
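A minimal sketch of that upload path with boto3 is below; the bucket name, key layout, and credential setup are placeholders and assumptions on my part:

```python
# Hedged sketch: upload a captured image to S3 so the frontend can fetch it.
# Assumes AWS credentials are already configured (e.g., via the AWS CLI).
import boto3

s3 = boto3.client("s3")

def upload_image(local_path: str, item_id: str) -> str:
    """Upload the image and return the key the frontend can retrieve it by."""
    key = f"clothing/{item_id}.jpg"                   # placeholder key layout
    s3.upload_file(local_path, "closet-images", key)  # "closet-images" is a placeholder bucket name
    return key
```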

Finally, since the presentation is coming up, I wrote the script for the presentation, helped with a few slides, ran some of the testing we wanted to show in the demo, and practiced for the final presentation.

The testing that I conducted covered the classification accuracy and speed of the models on the Xavier NX.

I systematically tested the classification on the Jetson Xavier NX using the following process: I tested 80 images, with at least 5 items of each clothing type, 5 items of each color, 4 formal items, and 7 casual items, and with at least 2 of the 5 items of each clothing type having bad framing and/or bad lighting. Please let us know if this should be changed to make it a stronger approach.

The results from this testing were 58% for usage, 46% for color, and 63% for clothing type. We believe this might be because these pictures are taken by users, so it is much harder to get the item framed and centered the way the model expects, and the lighting is also often incongruous with what the model expects. Additionally, when we made the models lightweight, they lost a few percentage points due to the quantization needed to decrease their size.

When testing for speed, we used the same set of images that we used for classification to keep the comparison fair and accurate, and I added timestamps before and after each of the major parts (classification, upload to S3, send to database). The time for each section is simply the timestamp after minus the timestamp before, and adding the sections together gives the total time (a sketch of this timing approach appears below). The results were as follows:

Section              Maximum Time Taken (s)   Average Time Taken (s)
Classification       2.12                     1.79
Upload to S3         2.62                     0.50
Upload to database   0.06                     0.02
Total                4.80                     2.30

As we can see, this part of testing went very well, and we managed to get the speed well below our use case requirements!
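For anyone curious, here is a small, self-contained sketch of the timestamping approach; time.sleep() stands in for our real classification, S3 upload, and database calls, which are not shown here:

```python
# Hedged sketch of the section timing: record perf_counter() before and after
# each stage and subtract. The sleeps are stand-ins for the real pipeline.
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(section):
    """Store the elapsed time of the enclosed block under the section name."""
    start = time.perf_counter()
    yield
    timings[section] = time.perf_counter() - start

with timed("classification"):
    time.sleep(0.1)       # stand-in for running the three models
with timed("upload_to_s3"):
    time.sleep(0.05)      # stand-in for the S3 upload
with timed("upload_to_database"):
    time.sleep(0.01)      # stand-in for the database send

timings["total"] = sum(timings.values())
print({name: round(seconds, 2) for name, seconds in timings.items()})
```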

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

I found it necessary to learn how to work with an embedded computer. Previously I wasn’t really involved with hobby computers, and my experience amounted to using an Arduino for a few minor school projects. Needless to say, setting up the environment on this was a huge learning experience, as only certain versions are supported and much care must be taken to ensure that installations don’t break dependencies. Navigating those minefields and familiarizing myself with developing primarily in Linux was all new knowledge. Luckily, NVIDIA had good resources for learning about the supported systems and modules, which I used along with Google and help from my teammates.

Additionally, I learnt the basics of how to use S3, including creating a bucket, sharing it, and sending files to it. This was also a cool experience, as it’s something widely used in industry that I never had a chance to learn. Amazon has a lot of really good resources to get started, and questions to Google, ChatGPT, and my teammates were more than enough to fill the gaps.

I also learnt more about SQL and FastAPI, which is immediately relevant since I will be taking database systems next semester. I had learnt some of it before, but I watched a few videos to refamiliarize myself.

 

Finally, I learnt how to use OpenCV and its methods to interface with a single web camera. Without OpenCV, interfacing with the camera would have been much more difficult and would have required us to go very deep into the embedded weeds. After just watching a few videos and reading a few articles, we were able to read from our camera within a week or two.
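The core of the camera interface is only a few lines; here is a minimal sketch (camera index 0 and the output file name are assumptions about a typical single-webcam setup):

```python
# Hedged sketch: grab one frame from the first attached camera with OpenCV.
import cv2

cap = cv2.VideoCapture(0)        # index 0 = first attached camera
if not cap.isOpened():
    raise RuntimeError("could not open the camera")

ok, frame = cap.read()           # read a single BGR frame
cap.release()

if ok:
    cv2.imwrite("capture.jpg", frame)
```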

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is on schedule.

What deliverables do you hope to complete in the next week?

For next week, push-button functionality needs to be added. This is important, but in the worst-case scenario it would be alright if users interacted with the keyboard, since almost all Americans are familiar with one. Another thing that needs to be added is a casing/box attaching the camera to the light, to make the setup more maneuverable and more pleasing to the eye for the demo. Along with those items, touch-ups, the final report, and the demo are on the menu.

One thing that we will also look at is some sort of alternative to classify color, since 46% accuracy is above chance but not ideal for us or for users. 

Team Status Report for 11/16

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk that could jeopardize the success of the project is the usage model accuracy. We were able to successfully train our clothing and color classification models, but our usage model is struggling to produce useful results, as discussed in our weekly meeting. This risk is being managed by our feature which allows users to correct and confirm the labels of each piece of classified clothing.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes were made.

Provide an updated schedule if changes have occurred.

Our schedule has not changed.

This is also the place to put some photos of your progress or to brag about a component you got working.

Photos of our progress are located in our individual status reports.

Now that you have some portions of your project built, and entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

Validation is usually related to your overall project and is likely to be discussed in your team reports.

Our verification tests are discussed in our individual status reports.

Our validation tests will ensure that our camera can take sufficiently clear images of our clothing for our classification models. Our models expect 224×224 images, and our camera can take images at a variety of resolutions, all high enough to be scaled down to 224×224, so our camera will meet our requirements. Our camera also allows us to keep higher-resolution images to display to the user, since 224×224 images are grainy to the human eye.
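For illustration, downscaling a captured frame to the model input size is a single OpenCV call; the file names below are placeholders:

```python
# Hedged sketch: shrink a full-resolution capture to the 224x224 input our
# models expect, while keeping the original image around for display.
import cv2

frame = cv2.imread("capture.jpg")        # placeholder full-resolution image
if frame is None:
    raise FileNotFoundError("capture.jpg not found")

model_input = cv2.resize(frame, (224, 224), interpolation=cv2.INTER_AREA)  # INTER_AREA suits downscaling
cv2.imwrite("capture_224.jpg", model_input)
```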

Our validation tests will also verify that our classification models are compatible with the Jetson and that the Jetson is capable of predicting a class with each model in under 1 second. We have already validated that our .keras models are compatible with the Jetson by uploading them onto it. We will validate that our Jetson can run inference in under a second by using the camera to take at least 10 images of clothes from our own closet and feeding them through each of our three models. We will time how long it takes for each model to classify the image, and determine if the average inference time meets our 1 second threshold.
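A sketch of how that per-model timing could be measured is below; the model path is a placeholder, and random arrays stand in for the photos of our own clothes:

```python
# Hedged sketch: average per-image inference time for one .keras model.
import time
import numpy as np
import tensorflow as tf

def average_inference_time(model_path, images):
    """Average predict() time per image for the given model."""
    model = tf.keras.models.load_model(model_path)
    times = []
    for img in images:
        start = time.perf_counter()
        model.predict(img[np.newaxis], verbose=0)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

# Random 224x224 RGB inputs stand in for the 10 test photos here.
test_images = np.random.rand(10, 224, 224, 3).astype(np.float32)
print(average_inference_time("clothing_type_model.keras", test_images))
```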

We will also validate that the Jetson can accurately send the classification results to the backend for storage in the database and image folder. After the model classifies images of clothing, the Jetson will send the data as a POST request to the backend. We will validate that the backend parses the data, adds the clothing information to the database, and adds the image file to the images folder. For our metrics, we will verify that the data in the database accurately matches the information sent from the Jetson. We will also confirm that the image is saved under the correct image file name, matching the entry name for that item in the database. For speed, we will measure the time taken from sending the POST request to receiving confirmation from the backend. The goal is to meet a latency requirement of under 4 seconds for classification and storage.
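The Jetson-side POST could look roughly like the sketch below; the URL, field names, and values are assumptions about our API rather than its exact schema:

```python
# Hedged sketch: send one classification result to the backend as JSON.
import requests

payload = {
    "item_id": "shirt_001",           # placeholder; matches the saved image file name
    "clothing_type": "shirt",
    "color": "blue",
    "usage": "casual",
}
response = requests.post("http://localhost:8000/clothing", json=payload, timeout=5)
response.raise_for_status()            # backend confirmation
print(response.json())
```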

Our validation tests will verify that the application accurately generates outfits based on the user’s request and the weather, along with user preferences and wardrobe data stored in the database. The outfits should meet our speed and accuracy use case requirements and should be displayed on the frontend.

Gabriella’s Status Report for 10/19

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week, I mainly worked on the Design Report with my team.

Aside from this, I set up a GitHub repository for the frontend, to keep our application separate from the backend. I also wrote and pushed code for a barebones frontend UI. Here is the demo image.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on schedule.

What deliverables do you hope to complete in the next week?

I hope to begin connecting the frontend to the backend and to AWS S3.

Riley’s Status Report for 10/5

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I managed to boot up and get situated with the Xavier NX. I received the SD cards late last week, so this week I was able to go to a computer lab and set up the Xavier. Before that, though, I needed to flash the SD cards with the correct image to enable the Xavier to boot. The current version of JetPack (the NVIDIA embedded development environment) does not support the Xavier NX, so I found a previous version that does. I also downloaded an even earlier version in case that one didn’t work, but it was thankfully unneeded. After that, I followed the instructions for Windows in the Getting Started Guide, downloaded SD Card Formatter and Etcher, and used my roommate’s SD card reader to flash the microSD card. That went well, and then I went to the computer lab to boot up the embedded system. This process also went smoothly, and it booted with minimal issues. Finally, I configured the settings and installed OpenCV using pip on the Xavier.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is now on schedule. Thankfully, the installation process was smooth, so even though some of the resources took longer to come in, I was able to catch up.

What deliverables do you hope to complete in the next week?

Hopefully the Arducam camera will come in soon so I can start connecting its output to the Xavier NX. If it doesn’t come in, I will start working on the OpenCV code to process the camera output and try to make the Xavier NX talk to the database.