Wonho’s Status Report for 4/16

This week I focused on creating a cropping script for the image as well as a shell automation script that lets the code run automatically. The cropping script takes in the info from the JSON output file produced by OpenPose and crops the image taken by the Arducam based on the body keypoints labeled in the JSON file. Cropping focuses the image on the upper body, which makes it easier for Color Thief to detect the upper-torso color. The script then automatically saves the cropped image in an easily accessible location on the Jetson Xavier.
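Here is a rough sketch of the cropping step, not our exact script: the file names, confidence threshold, and padding margin are illustrative, and it assumes OpenPose's BODY_25 JSON layout, where each detected person carries a flat pose_keypoints_2d list of (x, y, confidence) triples.

```python
# Sketch of cropping to the upper torso from OpenPose's BODY_25 JSON output.
# File names, the 0.1 confidence threshold, and PAD are illustrative values.
import json
import cv2

PAD = 20  # extra pixels of margin around the torso box

with open("snapshot_keypoints.json") as f:      # hypothetical path
    person = json.load(f)["people"][0]          # first detected person

kp = person["pose_keypoints_2d"]                # flat [x, y, conf, x, y, conf, ...]
# Keypoints 1-8 (neck, shoulders, elbows, wrists, mid-hip) bound the upper body.
pts = [(kp[3 * i], kp[3 * i + 1]) for i in range(1, 9) if kp[3 * i + 2] > 0.1]

img = cv2.imread("snapshot.jpg")                # hypothetical path
h, w = img.shape[:2]
xs, ys = zip(*pts)
x0, y0 = max(int(min(xs)) - PAD, 0), max(int(min(ys)) - PAD, 0)
x1, y1 = min(int(max(xs)) + PAD, w), min(int(max(ys)) + PAD, h)

cv2.imwrite("snapshot_cropped.jpg", img[y0:y1, x0:x1])
```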

The shell script ties together the components for the first part of our recommendation process. It automatically takes 20 pictures through the Arducam, selects the best-lit one (the 20th picture taken), and runs OpenPose on only that picture, which reduces our runtime since OpenPose takes a while. The script then runs that image through the cropping script I wrote this week, saves the result, and feeds it to the Color Thief API that Yun organized, producing an output.txt file with the dominant color of the image.
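For illustration, here is that flow sketched in Python with subprocess (the real version is a shell script). The capture helper and the two processing scripts are placeholder names; the OpenPose flags shown are real ones.

```python
# Python sketch of the capture -> OpenPose -> crop -> Color Thief pipeline.
# capture_frame.sh, crop_torso.py, and run_colorthief.py are hypothetical names.
import os
import shutil
import subprocess

os.makedirs("frames", exist_ok=True)
os.makedirs("best_frame", exist_ok=True)

# 1) Take 20 pictures; the 20th is the best lit, so keep that one.
for i in range(20):
    subprocess.run(["./capture_frame.sh", f"frames/frame_{i}.jpg"], check=True)
shutil.copy("frames/frame_19.jpg", "best_frame/frame_19.jpg")

# 2) Run OpenPose on just that frame to keep runtime down.
subprocess.run(["./build/examples/openpose/openpose.bin",
                "--image_dir", "best_frame", "--write_json", "json_out",
                "--display", "0", "--render_pose", "0"], check=True)

# 3) Crop to the upper torso using the JSON keypoints (this week's script).
subprocess.run(["python3", "crop_torso.py", "best_frame/frame_19.jpg",
                "json_out/frame_19_keypoints.json"], check=True)

# 4) Extract the dominant color with Color Thief into output.txt.
subprocess.run(["python3", "run_colorthief.py", "frame_19_cropped.jpg",
                "output.txt"], check=True)
```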

Our individual chunks are pretty much done, and we started working on integrating those parts today, so if we work hard this week, our team should be able to finish on schedule. Next week I will work on connecting the parts I finished this week with Ramzi’s recommendation algorithm.

cropped image example

Team Status Report for 4/16

The most significant risk/task we are facing is the integration between the app and the Jetson, and between the Jetson and the mirror UI. We expect that both integrations will require creating server-client socket connections, and there is no easy way around that. However, Jeremy and Yun have some experience in this area, so we hope to get this task done if we push through.
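As a concrete starting point, here is a minimal sketch of the kind of socket link we have in mind, with the Jetson as the server; the host, port, and the newline-delimited command names are placeholder choices, not a settled protocol.

```python
# Minimal server-client socket sketch (Jetson side). The address and the
# "GET_RECOMMENDATION" command are placeholders for a protocol we still
# need to design during integration.
import socket

HOST, PORT = "0.0.0.0", 5005  # placeholder address

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    conn, addr = srv.accept()            # e.g., the mirror UI connecting
    with conn:
        msg = conn.recv(1024).decode().strip()
        if msg == "GET_RECOMMENDATION":  # hypothetical command
            conn.sendall(b"RECOMMENDATION navy sweater\n")
```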

In terms of the system design, we initially planned to connect the app with the mirror UI as well; however, we decided to remove that interaction and use the Jetson as the intermediary for that communication, since we need to build the Jetson-mirror UI and Jetson-app connections anyway.

Our schedule has not changed from last week. We set a small goal last week to finish the modular tasks by this Thursday and to work on integration starting Saturday, and we are on track for that plan.

In a bit more detail, this week we worked on completing all remaining modular tasks: 1) automating torso and top detection, 2) the user wardrobe input form in the app, 3) the color + weather recommendation, and 4) the mirror UI.

  • Wonho wrote the automation script for the Jetson, which covers everything from spinning up the Jetson to taking a snapshot with the Arducam, running OpenPose, and cropping the snapshot.
  • Jeremy finished the user wardrobe input form, which asks for apparel type, length, and season. After the user submits the form, the data is saved into a Google Spreadsheet, which will serve as the user wardrobe database.
  • Ramzi finished the color- and weather-based recommendation in Python. The color recommendation is based on a survey he created about which colors go well together, and the weather recommendation is based on the real-time feels-like temperature.
  • Yun finished the mirror UI using Electron. It has a main page, a recommendation loading page, a recommendation result page, and a thank-you/feedback page.

More details can be found in each of our individual status reports. The above four tasks were all of the remaining modular tasks, which means all that is actually left is integrating these parts.


Ramzi’s Status Report for 4/16

This week I put together a recommendation system, written in Python, that recommends new outfits for the user depending on the color of what they are wearing and the current weather conditions. Using the webcolors Python module, I was able to categorize colors and, given an RGB value, come up with the best recommended colors to match.
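A simplified sketch of the color-matching idea (not the exact code): it assumes webcolors 1.x, where the CSS3_NAMES_TO_HEX mapping is available, and the pairing table is a tiny stand-in for the survey-derived chart.

```python
# Nearest-named-color lookup plus a toy "goes well with" table.
# Assumes webcolors 1.x (CSS3_NAMES_TO_HEX was removed in later releases);
# GOES_WELL_WITH is placeholder data, not the real survey results.
import webcolors

GOES_WELL_WITH = {
    "navy":  ["white", "beige", "gray"],
    "black": ["white", "red", "olive"],
    "white": ["navy", "black", "brown"],
}

def closest_css3_name(rgb):
    """Return the CSS3 color name nearest to rgb by squared distance."""
    best_name, best_dist = None, float("inf")
    for name, hex_code in webcolors.CSS3_NAMES_TO_HEX.items():
        r, g, b = webcolors.hex_to_rgb(hex_code)
        dist = (r - rgb[0]) ** 2 + (g - rgb[1]) ** 2 + (b - rgb[2]) ** 2
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

def recommend_colors(rgb):
    return GOES_WELL_WITH.get(closest_css3_name(rgb), ["white"])

print(recommend_colors((0, 0, 120)))  # near navy -> ['white', 'beige', 'gray']
```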

Example of code and output of color/weather recommendations

I think we are catching back up to our planned schedule, and should be able to put together the modules we have each individually worked on in the near future.

Moving forward, I will be continuing to improve these algorithms, and integrating them together to create a system that can recommend an entire outfit for the user. I will also be assisting in integrating these algorithms into the rest of the project.

Yun’s Status Report for 4/16

This week, I worked on the mirror UI using Electron, HTML, and JavaScript. The mirror is supposed to interact with the Jetson and the app. The diagram below (Figure 1) illustrates a general idea of what the mirror UI will look like.

Figure 1: Mirror UI diagram

As illustrated in the diagram, the mirror UI (a desktop app) is supposed to transition from one screen to another based on requests/responses from the app and the Jetson. However, as we have not finished that integration yet, I made the desktop app acknowledge button presses such as “Get recommendation” and “Done”; this will be updated later along with the integration. Here is a screen recording of how the desktop app looks so far.

(raw link: https://drive.google.com/file/d/1_uZXitU8aZ5DF-yaDiu9o0ppms_jQQFj/view?usp=sharing)

In terms of the schedule, I think we are on track: we planned to start the integration today, after finishing the individual parts, and that is what we are doing. Although the modular parts still need polish, they are all working.

Next week, I will work on connecting the Jetson and the mirror UI so that the UI can act according to the user input.


Jeremy’s Status Report for 4/16

This week I mainly focused on finishing the app. As of now, the app has a home screen where the user can attach tags to a piece of clothing, which are then uploaded to a Google Spreadsheet that resides in our group’s capstone folder on Google Drive. I achieved this using an API called sheet.best, which allows GET and POST requests to a Google Spreadsheet through a specific URI (a sketch of this kind of request appears at the end of this post).

The only issue I am facing is finding an effective way to upload the photo of the piece of clothing to the database. I tried sending the photo in a POST request, but it only pushed the photo’s file path as a string, which is useless. I also looked into storing the photos on a cloud server (e.g., Google Cloud), but that requires a lot of OAuth hassle, which I believe is unnecessary. It may be easier to upload the photo directly from the app to the Jetson if possible, because then I could simply store the photo file’s unique name/ID in the ‘photo’ column of the database, and the Jetson could pull up the correct photo when displaying it on the monitor screen.

This is something I will work on over the weekend and into next week. I don’t want to spend too much time on it, because I still need to integrate the app with the recommendation system and the Jetson, so I hope to finish before next Wednesday. I would say we are a little behind schedule, since we only have around a week before our final reports are due, but I’m still positive about the progress our group has been making toward getting everything integrated.
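Here is that sheet.best request sketched in Python for illustration (the app itself is JavaScript); the connection URL and column names are placeholders for our real sheet.

```python
# Appending a wardrobe row through sheet.best. The URL and columns are
# placeholders; note the 'photo' column can only hold a name/id, not the file.
import requests

SHEET_URL = "https://sheet.best/api/sheets/<connection-id>"  # placeholder

row = {
    "type": "t-shirt",       # apparel type tag
    "length": "short",       # length tag
    "season": "summer",      # season tag
    "photo": "IMG_0042.jpg", # name/id only; uploading the file is the open issue
}

resp = requests.post(SHEET_URL, json=row, timeout=10)
print(resp.status_code)
```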

Tags
User inputs info
Database with inputted info

Wonho’s Status Report for 4/10

This week I was able to make a big breakthrough with the Jetson and camera module by setting up torso detection with OpenPose. Initially, Jeremy and I had set up a plan B that used trt-pose and a different camera (IMX219), but it was not going smoothly, so we had ordered yet another camera that would hopefully work with that software and were going to test it. However, working at night on Monday, I was able to get OpenPose working on the Jetson with the original Arducam, and I found a way to extract the data from its output.

OpenPose runs on the individual snapshots we take of the person, and I was able to get the output as a JSON file from which we can identify the key body points as coordinates. OpenPose identifies a total of 25 keypoints across the body, and we only need keypoints 1-8 to detect the upper torso. What I plan to do in the upcoming week is use these keypoints to automatically crop the image and connect it with the Color Thief algorithm to detect the main piece of clothing the user is wearing.
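Here is a sketch of pulling those keypoints out of the JSON; the file name is illustrative, and it assumes the BODY_25 layout where each person has a flat pose_keypoints_2d list of (x, y, confidence) triples.

```python
# Reading upper-torso keypoints 1-8 from an OpenPose BODY_25 JSON file.
# The file name is illustrative.
import json

NAMES = {1: "neck", 2: "r_shoulder", 3: "r_elbow", 4: "r_wrist",
         5: "l_shoulder", 6: "l_elbow", 7: "l_wrist", 8: "mid_hip"}

with open("me_keypoints.json") as f:
    kp = json.load(f)["people"][0]["pose_keypoints_2d"]

for i, name in NAMES.items():
    x, y, conf = kp[3 * i], kp[3 * i + 1], kp[3 * i + 2]
    print(f"{name}: ({x:.0f}, {y:.0f}) confidence {conf:.2f}")
```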

OpenPose body key points
Running OpenPose on my body
Output JSON file from OpenPose

Yun’s Status Report for 4/10

Last week I worked on categorizing outfits and coming up with relevant tags. To do so, I researched several popular clothing brands, such as Zara and H&M, that many students use and that sell a wide range of clothes. From that research, I came up with a few tags that can categorize clothing and that are relevant for outfit recommendations. Figure 1 below demonstrates an abstract idea of the tags and categorizations. Besides that, I did more research on outfit recommendation algorithms, and I decided to hardcode the weather-based and color-based recommendations using the color and weather coordination charts below (Figures 2 and 3).
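To give a flavor of the hardcoded weather rule, here is a toy sketch; the temperature bands and recommended categories are placeholders, not the final chart.

```python
# Toy version of the hardcoded weather-based rule using the feels-like
# temperature (Celsius). Thresholds and categories are placeholders.
def weather_recommendation(feels_like_c: float) -> str:
    if feels_like_c < 0:
        return "heavy coat"
    if feels_like_c < 10:
        return "jacket"
    if feels_like_c < 20:
        return "long sleeves"
    return "short sleeves"

print(weather_recommendation(13.5))  # -> "long sleeves"
```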

I think we are a week behind the original schedule; however, since what remains is the integration of modular tasks we have already accomplished, and since we have finished ramping up, a few intensive days on the project should put us back on track.

This week, I will work on wrapping up the recommendation systems and testing them to see if the results align with our expectations.

Ramzi’s Status Report for 4/10

I was ill for most of the week and was unable to set aside much time to work on the project. As such, most of my time was spent ruminating on how to put together the remaining portions of the project. Since we are a little behind, I came up with a few ideas on how to expedite the implementation of the database for the user’s wardrobe and connect it to our torso and color detection programs, which I intend to discuss with my teammates in the upcoming week. Additionally, I have been preparing for the interim demo, where I intend to present the frame that I designed and put together with my teammates.

Team Status Report for 4/10

The main breakthrough our team made this week was getting OpenPose running with our camera. This was one of the biggest problems we were facing as a team, since a major part of our project relies on the camera and Jetson being able to recognize a person’s torso. We had been set to test our plan B of using trt-pose by wiping the Jetson’s SD card image and purchasing a new camera compatible with that software. But after our lab on Monday, Wonho kept working on it in the evening and was able to run OpenPose on the Jetson with the original camera (yay!).

This obstacle had been setting our schedule back and delaying the software integration process for the other components. In the meantime, however, our teammates made significant progress on other parts of the project. Jeremy has been working on the app and completed the feature that allows the user to upload images (i.e., photos of their clothes) through the app. Yun has also made progress with the weather API as well as Color Thief. Unfortunately, Ramzi has been sick this week, having caught Covid, so he has not been able to participate as much.

The interim demo on Wednesday went well, as we were able to show off the key features of our project that are working. Right now the project consists of many working parts that are not yet integrated, so our focus for the last two weeks will be on integration and on building a frame for the parts to fit into. Our team already has a plan for how each part should integrate with the others, and we have taken steps to make sure the software will be compatible throughout. Even though we are behind schedule, I would say we are technically “on schedule”, since we should be able to make up the lost time by making faster progress on the software components and the integration.

Jeremy’s Status Report for 4/10

This week I worked on finishing up the important features of our app. As of now, I am able to open the app through Expo Go, show the splash screen with our logo, and choose and upload photos, which the server responds to. The server runs on localhost:3000, and the user is also able to cancel choosing/uploading a photo, in which case the returned JSON object contains a “cancelled = true” key-value pair. I also helped demonstrate all of this during the interim demo on Wednesday.
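As a stand-in sketch of the server’s side of that contract (written in Python/Flask for illustration; the real localhost:3000 server may be a different stack), with the route and field names as placeholders:

```python
# Hypothetical upload endpoint: acknowledge a photo, or echo a cancelled
# flag when the user backs out. Route and field names are placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    if request.form.get("cancelled") == "true":
        return jsonify({"cancelled": True})
    photo = request.files.get("photo")
    if photo:
        photo.save(photo.filename)  # sketch only: no validation or renaming
        return jsonify({"ok": True, "filename": photo.filename})
    return jsonify({"ok": False}), 400

if __name__ == "__main__":
    app.run(port=3000)
```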

For the coming week, as long as I stay on schedule, I will have a working version of the app up and running, with the ability to add tags and labels when uploading a photo, which will also be sent to the server in a JSON object. I also want to work with Yun to use the database she has been setting up, instead of localhost:3000 simply responding with an OK. After that, I plan to integrate everything into the smart mirror and possibly add functionality for turning the LED lights on/off once they are ordered and delivered.

Splash screen
Choosing a photo
Crop once selected
Image preview
Server response body